[draft] Getting the most out of informal user testing — field notes

We give code to machines hoping to help them do something. If machines still can’t do it, we modify the code, give it again, and check if that made things better. Hundreds of times per day.

We give software to users hoping to help them do something. If users still can’t do it, we modify the software, give it again… or not. Especially in the early stages, we might code for weeks without checking if that made things better.

The success of our counterparts, whether human or mechanical, depends on how much we learn about them. While we may not be able to increase the number of learning iterations with users (code needs a save and reload, users need at least some calendar alignment and willingness to hop on a video call), we can do something to increase the density of each iteration: the amount and quality of insight we extract from it.

1. “Problem/solution” bucket vs “execution” bucket #

As the user goes through your product’s screens, thinking aloud, you’re either scanning for signs that the value proposition is good (the user does experience the pain points, and your strategy does address them), or that the journey is good (onboarding isn’t overloading, a crucial call to action is properly highlighted, etc), or both. One is about problem/solution, the other about execution.

Be sure to file them separately, even if just mentally, because the cost of treating one as if it were the other is high.

2. Don’t explain (unless it’s to recover from a blind alley) #

A modal dialog pops up. Your user’s cursor lingers on the message for long, excruciating seconds, much longer than you anticipated… Resist the temptation to jump in and explain. You’re trying to collect feedback, and the time it takes them to figure out the next step is your feedback here.

Exception: if the pause becomes stuckness, thirty more seconds of being stuck don’t change the signal. Note that down (twice underlined), bring the user back to the point of detour, and set them back on the right track.

3. Encourage them to say more… by saying nothing #

After reading the pitch on the new landing page, my friend, who knows and uses the product already, remarks: “Yeah, that sounds about right. Obviously, there’s more that could be said.” I say nothing. “Yeah, it’s clear enough, I think. And concise.” I still say nothing. “And I hadn’t thought about this aspect…” It goes on for a few seconds more, then silence. I still say nothing. Whatever hides behind that “there’s more that could be said” will eventually come up, I’m sure. “The thing is, a big part of the value for me so far has been that…” and then I receive one hell of a lesson on what I had considered a minor benefit. Something that definitely belongs in the pitch.

Silence can be the best tool for peeking behind subtle feedback such as “There’s more that could be said…”

4. Fact or hypothesis? #

When you hear “You should add feature X”, do they mean “…because it would make for a better product”, or do they mean “…because it would make me use the product”?

“It would make for a better product” is a hypothesis, like the many you’ve had to make on the way to creating a product, and with the same need for testing. Don’t dismiss it, because it comes with the benefit of fresh eyes and a different background; don’t worship it, because it’s based on ten seconds of thinking about the problem vs the ten million you’ve had.

“It would make me use the product” is still a hypothesis, but one the user is making about themselves, a much more familiar field than “the market”. You won’t come across it that often, but when you do, it’s a precious signal.

How do you tell them apart? Signals from the rest of the testing will tell you (did they look like they had an “aha” moment? did you then hear disappointment when they found they couldn’t do X?). If you can muster the bluntness, just ask: “I’d like to make sure I understand: do you mean that, if it had X, you would use the product?”

5. Fact or interpretation? (When you’re the one giving feedback) #

When you’re offering feedback, you can help your counterpart stay on track through clearly demarcated comments.

“I didn’t think of dragging those widgets into that box because it doesn’t look like a container” mixes hard fact (you didn’t think of dragging those widgets into that box) with interpretation (the box doesn’t look like a container).

A hasty listener might fail to separate the two, jump to “ah, the boxes should look like containers”, and never take the time to do their own interpretation, eventually overlooking simple solutions like a one-time message that says “now drag the widget into the box”.

It’s fine to offer interpretation; just help them see the difference. Placing a “…and my guess is, it’s because…” is often all that’s needed.