
Why giving humans bug reports is easier than giving feature suggestions

I spent this past weekend at a case study competition called UW Apprentice, which was unique among events I’ve attended in two ways. One is that the cases were fresh from real startups, who came in and explained the challenges they were experiencing, and who were all set to act on the best advice. The other was that you gave and received feedback with each of your teammates after each case, so you could review it all immediately. In theory, this could let you update your behaviour for the next case and become a more valuable team member, although I think in practice the schedule was too rushed for much reflection to occur.

Anyway, I noticed something interesting while filling out the “needs improvement” section at one point. The team member I was giving feedback to didn’t have any obvious shortcomings, and I found myself at a bit of a loss for what to say. Obviously they weren’t perfect, but they were just generally “good” across the board. I ended up writing something general, related to my sense of why we hadn’t won that round.

Today, I thought of this again when I was doing the final edits on a peer letter of recommendation for a fellowship program my friend was applying to. I had written last week in the draft: “It’s hard for me to think of a really good suggestion for an area of improvement for Tessa—” …today I added “—I’ve noticed it’s much easier to recommend bugfixes than features, for people.”

In this blog post, I figured I’d reflect a bit more on…

  • what the difference is
  • why feature suggestions are harder
  • some strategies for feature suggestions

It might be kind of rough, and I might find future!me disagreeing with current!me about this pretty soon, in which case I may edit it.

Before that: what am I not talking about?

Is it just the difference between negative and positive feedback? Nope. Negative feedback has the structure of “that thing you did—don’t do that [as often]”, while positive feedback has the structure of “that thing you did—keep doing it [and maybe do it more]”. The bug report / feature suggestion thing is more subtle.

My sense is that negative and positive feedback are responses to single or repeated behaviours. They’re noticing a behaviour, and either trying to reinforce it or discourage it—amplify or dampen, in control theory language. Negative and positive feedback can be incredibly powerful—consider this pigeon, which was taught using only rewards how to respond to two distinct English words, “peck” and “turn.”

[Animation: a pigeon pecks at the word “PECK”, then receives food; the word “PECK” becomes “TURN”, and the pigeon turns around.]

Super cool. But that feedback was operating on a much lower level than the bug report / feature suggestion distinction. With respect to a bird, a bug report might be “it can’t fly”, whereas the very notion of teaching a bird to peck only the word “peck” is the addition of a feature. But you can’t give a bird a bug report or a feature request, because both are conceptual: you can conceive of one, but you can’t communicate it to the bird.

So really, what’s the difference between human bug reports and feature suggestions?

Intuitively, I think the difference is pretty clear. A bug report is “this isn’t working”, whereas a feature suggestion is “it would be really cool if…”

From a more technical perspective, it’s about expectations. Bug reports often come in the form “the _______ is broken.” Of course, that’s practically useless as feedback. As the person trying to fix the bug, what you want to know are the answers to three questions:

  1. What did you do?
  2. What happened as a result?
  3. What were you expecting?

It’s #3—the violation of expectation—that makes it a bug report and not a feature request. Unless the “what happened as a result” was exactly what was designed to happen, in which case the user is expecting the app to have a feature it doesn’t, and their bug report is really a feature request in disguise.
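To make that sorting concrete in software terms, here’s a minimal sketch (hypothetical names and logic, purely for illustration) of how the answer to #3 separates a bug report from a feature request:

def classify_feedback(what_happened, what_was_expected, designed_behaviour):
    """Sort a piece of feedback using the three questions above."""
    if what_happened == what_was_expected:
        # No expectation was violated; there's nothing to report.
        return "no issue"
    if what_happened == designed_behaviour:
        # The product did exactly what it was designed to do, but the
        # user expected a feature it doesn't have: a feature request.
        return "feature request"
    # The designed behaviour itself was violated: a genuine bug.
    return "bug report"

# Example: a save button that silently fails.
print(classify_feedback(
    what_happened="nothing visible happened after clicking Save",
    what_was_expected="a confirmation message",
    designed_behaviour="a confirmation message",
))  # -> "bug report"

(The interesting branch is the middle one: the complaint sounds identical, but the fix lives in the roadmap rather than the codebase.)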

I think it’s worth talking about the distinction between errors and bugs here too. There’s an exceptional post by Sarah Constantin about this, called Errors vs. Bugs and the End of Stupidity. Sarah writes:

A common mental model for performance is what I’ll call the “error model.” In the error model, a person’s performance of a musical piece (or performance on a test) is a perfect performance plus some random error. […]

But we could also consider the “bug model” of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you’ll get a whole class of problems wrong…

So with basic negative feedback, you’re just identifying an error. This can still be helpful. Maybe you don’t have enough context to see the pattern that is the bug. But humans often have the same problems for the same reasons, so if you’re familiar with humans then you’ll often be able to say something about the general pattern you saw, even if it’s not a particularly insightful something.

I think that the reason feature suggestions are so valuable is that they have the potential to take you from good to great. Satisfactory to outstanding.

Why are feature suggestions harder to give to humans than bug reports?

For the same reason that good feature suggestions for products are way harder to give than bug reports are.

It’s often quite easy for a user to try out a few different things with the product and see which violate their expectations and which do not. Many don’t bother, which is usually fine, because the bugfixer can typically do this themselves. All that’s required to give a good bug report (for products or for humans) is

  • have expectations
  • notice patterns

These are two things people are not only pretty good at doing; they’re bad at not doing them. (Random lexicon fact: the word “apophenia” refers to the human tendency to notice patterns where there are none, such as automatically seeing stars as constellations.)

Note that it’s pretty easy to give a bad feature suggestion. Here’s one that’s a bad idea for almost any product: “add a button that replaces all of the text with gibberish.” Then there are suggestions that are bad for the specific product they’re being recommended for, although they’d be great ideas elsewhere. “Make it into an app!” might be one of them.

What’s required to give a good feature suggestion?

You need a better understanding…

  • of the person
  • of the context they’re in
  • of the challenges/opportunities they’ll be facing

This kind of reminds me of Harry Potter (and the Methods of Rationality) where Dumbledore recommends Harry learn Occlumency (how to prevent your mind from being read) based on his situation. It also reminds me of a bunch of other things from HPMoR, which I won’t mention specifically because spoilers.

In order to give truly large feature suggestions, you need something even greater: a vision for who/what the person/product could become. In order for the feedback to be well-received, the intent of that vision may need to be fairly well aligned with the recipient’s own intentions. For example, I consider myself to be the very model of a modern major generalist, and don’t take kindly to suggestions that I drop everything else and specialize in one thing.

How to give feature suggestions despite them being hard

I’ll be honest, I don’t think I have the best answers to this. I have some thoughts though.

As I watched a gorgeous temple burn down in the middle of the desert last August, I decided to write a kind of manifesto/mantra. The idea was to shift the way I thought about myself. Rather than identifying with my current self and imagining a future self I might become once I added more features, I wanted to identify with my future self and see my current shortcomings as bugs to be fixed. As part of what I wrote, I decided I wanted to do that with other people too:

I hold space for the emergence of others

  • as they deeply are
  • and also as they want to be

The “as they want to be” connects to the point about having a vision. What I wrote here was inspired by a random quotation I recall from some now-otherwise-forgotten work of fiction: “What I really admire about you, [name], is that you see people not just as they are but as they could be.” Potential.

So one part of how to give better feature suggestions is to learn more about the person’s specific mission, on whatever timescale. To some extent (depending on the person) you can do this by asking them about it. “What are you trying to achieve?” or the classic “Where do you see yourself in 5 years?” Then imagine: how might they be able to become a slightly different person in order to achieve that?

I’m realizing that almost all of what I’ve written in this section so far is in part about shifting your expectations about someone, so as to generate feature suggestions with the same mental processes that normally spot bugs. By upping your expectations, patterns of behaviour that might have been reasonably effective before now appear totally insufficient to meet the new threshold.

A brief note on expectation, then I’ll wrap this up:

Expectation is often used to refer to two totally distinct things: entitlement and anticipation. My basic opinion is that entitlement is a rather counterproductive mental stance to have, while anticipations are really helpful for improving your model of the world. (EDIT: I’ve now written a whole blog post about this).

To explain the difference, consider a mother who says to her daughter: “I expect you to be home by midnight.” The mother does not anticipate her daughter being home on time (probably even after having made this passive-aggressive statement). It’s instead a demand, or a statement of entitlement.

This is relevant because you don’t want your attempts at feature suggestion generation (or bug reports, for that matter) to cause you to feel entitled to someone being a different way. That kind of approach tends to be anti-motivational, for reasons related to psychological reactance (related post here). But you also don’t want to naïvely anticipate unrealistic new behaviours from someone.

Instead, what you want to do is to imagine a future version of them that has succeeded at their mission, and then simulate the kinds of behaviours that you’d anticipate that future version performing.

What do you think?

My thoughts on this exact subject are pretty fresh. I’d love to hear yours. Leave a comment below or on Facebook.

2 Comments

hamnox » 18 Mar 2015

I want to remember that manifesto. The best help comes from people with that mindset, and I think I’ll model my friends better with it in mind. Thank you as always for sharing your thoughts!

You’re right on point, by the way, when you point out the potential to conflate expectation and anticipation. I’ve suffered before for the inability to separate my friends and family anticipating that I’d do great things vs. expecting that I would.
