As a developer, rarely has my approach been put into question. If it works, it ships. And “works” is usually well defined. Does it look like the design composition? When people click the button, does it send the data where it needs to go?
As a designer, rarely has my approach not been questioned. I can’t even count how many times I’ve heard “have you tested this with users?” as a way of shutting down any design direction I might want to take things.
In the beginning
I should give some background…
Many of the projects I worked on had a public-facing component (the site) and a client-facing component (the admin). Designers would design the site and I would build the site. I would design and build the admin to go along with it. Rarely did I receive any direction on what the admin should look like. I designed what I thought would work.
For 5 years, through 3 agencies, this was my job. I built content management systems that salespeople sold for thousands of dollars and clients used for years—nearly a decade at one particular organization. (I’m proud that I designed and built something that lasted that long.)
I leapt into freelance to have more freedom to do design. For 5 years, through dozens of client projects, I had the opportunity—and freedom—to design and develop. Being trusted for your expertise is a wonderful feeling.
In the last couple years, however, doubt has crept in. Design after design has been rejected. Interactions have been questioned. And that phrase is thrown at me. Have you tested this with users? Build a prototype, put it in front of users, and prove that this is the way to do things.
In this way, I’m reminded of Doug Bowman’s experience at Google:
Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favour? Ok, launch it. Data shows negative effects? Back to the drawing board. And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.
Thing is, it’s hard to argue with data. Why do I resist building prototypes and testing my ideas? Wouldn’t I want to know whether my ideas were any good?
<insert lengthy soul searching>
When I design, I create a solution for the problem at hand. I’ve gone through the work of understanding the use cases, done the work of prioritizing, and put out something that I believe solves the problem.
Of course, the way I solve that problem is going to be different from how someone else might solve it. There are a million ways to solve these problems. And oftentimes, the issues with a design aren’t uncovered in a cursory review from co-workers or users.
A quick aside: I find most peer reviews focus on inane details rather than on how the solution does or does not actually solve the use cases it’s designed to solve—mostly because peers haven’t thought about the problem space and usually can’t provide meaningful input on the design. But I digress…
So what does it mean to test a design as a prototype? What qualifies it as a success? Am I building a prototype so you can understand things in a way that you couldn’t understand in a Sketch or Photoshop file?
If we look at the process of a design sprint, there are 5 major steps: Understand, Diverge, Converge, Prototype, and Test. The prototyping phase appears near the end once there has been a convergence of ideas and then those ideas are validated with users.
However, in those times I’ve been asked to build a prototype, we’ve still been working between the divergence and convergence phases. The prototyping and testing becomes a way to decide in which direction to converge. It’s like saying “I don’t believe you. Prove it.” It feels like a pissing match between designers.
Or maybe I’m just not a good designer. Maybe I’m a difficult prima donna who doesn’t like to be questioned. Which, I guess, leads me back to doubt.