Common reasons for not acting on user research results and how to address them

As a user researcher, doing the research, analyzing it, and feeding back the results is not all you need to do. The hardest part can be getting your actionable recommendations turned into reality.
 

As this role and this step in the development process are still relatively new, chances are that you will be working with people who do not know much about user research and how it is done properly, or you may run into existing misunderstandings and faulty habits. (Appreciating the value of the research exercise itself isn't a given either. In his book Don't Make Me Think, Steve Krug speaks of converting people to "believe" in user research and its findings.)


There are many real reasons not to make changes: finding time for the extra work, reluctance to throw out older work, other priorities, or misinterpretation of what the results mean. What gets discussed instead, however, is usually a placeholder topic, which is really just an excuse for why it would be a good idea not to act on the results (right now). I have been confronted with these excuses countless times and in various contexts, so I want to provide some answers for other researchers in the same situation.

1) "We need a bigger number of tests in order to generate any meaningful findings."

2) "How often did this happen? Just that once? It was probably just an exception."

3) "Okay, we can change this, but please let's do that after we're done testing together with the other changes."

4) "Let's test more before we make any decisions."

1) "We need a bigger number of tests in order to generate any meaningful findings."


The most common confusion I have found about user research stems from the fact that a typical usability test is a qualitative method, not a quantitative one (click here for an explanation of the difference). Quantitative methodologies are much easier when it comes to discussing results: with a questionnaire, you can easily ask a few hundred people a set of questions, e.g. let them rate how easy your website is to use. Looking at the results can generate a great story: you can calculate means, check whether groups of users differ from one another (in a statistically significant way, which essentially shows that the difference is not caused by chance), and even look at whether certain variables influence the results, e.g. whether a different version of a website generates different ratings. In the end, you get a fairly objective truth that leaves little room for interpretation, and the need to address the results may be much clearer than with qualitative measures.

None of this is true for qualitative methods. Still, people tend to ask for numbers and percentages for qualitative tests.
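Purely as an illustration of the kind of quantitative comparison described above, here is a minimal Python sketch with made-up ratings (assuming scipy is available); the data, group sizes, and the conventional 0.05 threshold are hypothetical and only meant to show what "statistically significant" refers to:

```python
# Hypothetical ease-of-use ratings (1-7) for two versions of a website,
# e.g. collected via a questionnaire from two separate groups of respondents.
from scipy import stats

ratings_version_a = [5, 6, 4, 5, 7, 6, 5, 4, 6, 5]
ratings_version_b = [3, 4, 5, 3, 4, 2, 4, 3, 5, 4]

mean_a = sum(ratings_version_a) / len(ratings_version_a)
mean_b = sum(ratings_version_b) / len(ratings_version_b)

# Independent-samples t-test: is the difference between the two means
# unlikely to have been caused by chance alone?
result = stats.ttest_ind(ratings_version_a, ratings_version_b)

print(f"Version A mean: {mean_a:.2f}, Version B mean: {mean_b:.2f}")
print(f"p-value: {result.pvalue:.4f}")  # below 0.05 is conventionally called significant
```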


In theory, qualitative results could be quantified (e.g. by counting how often a given usability issue arises), and with the right number of cases you could make similar statements as with a quantitative methodology. However, since we are talking about a method that requires a deep level of information, it would take a very long time to reach a comparable number of cases. Let's take 120, a number considered a solid minimum size for a questionnaire sample. A questionnaire with this number of people could be run and analyzed in a matter of days, depending on how the recruitment goes. What would this mean in comparison to 120 usability tests?

Each usability test can take a full-time researcher a day or more. Recruitment may take up to 2 hours per person, as it involves scheduling, often via email, consent forms, etc. Conducting the test (let's say a 1-hour session) will take at least 2 hours, as there is preparation work, the participant may be late or there may be a technical issue, travel to the participant's location could be involved, and the session may overrun. Analyzing the test easily takes four or more hours, especially if a full transcript has to be produced and the analysis cannot happen on the same day, which requires listening to the recording again. Just running the tests and analyzing them individually would thus take around 120 working days. On top of that, an overall analysis would need to be conducted, which takes more time the more data was collected. Doing research like this would therefore require a researcher to spend around 7-9 months on a single research iteration. This is simply too long for agile development. Typically, 3-8 tests would be done in one iteration, which takes 1-3 weeks.
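To make the arithmetic explicit, here is a quick back-of-the-envelope sketch using the per-test estimates from the paragraph above (the 8-hour workday and the per-step hours are the assumptions stated there, not measured figures):

```python
# Back-of-the-envelope effort estimate for running usability tests at scale,
# using the per-test figures assumed in the text above.
HOURS_RECRUITMENT = 2   # scheduling, emails, consent forms
HOURS_SESSION = 2       # 1-hour session plus prep, travel, delays, overruns
HOURS_ANALYSIS = 4      # re-listening, transcription, per-test notes

def researcher_days(num_tests: int, hours_per_workday: int = 8) -> float:
    """Rough number of full researcher days to run and individually analyze the tests."""
    hours_per_test = HOURS_RECRUITMENT + HOURS_SESSION + HOURS_ANALYSIS
    return num_tests * hours_per_test / hours_per_workday

print(researcher_days(120))  # ~120 days, before the overall analysis even starts
print(researcher_days(5))    # ~5 days for a typical agile iteration of 5 tests
```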


The best answer: "With this type of research, a small number of cases is already considered meaningful, so we don't need bigger numbers. Qualitative tests are meant to identify the issues, not to find out how many users will have them."

2) "How often did this happen? Just that once? It was probably just an exception."


This one is difficult because it seems like a very logical argument. And indeed, it may really be the case that the observation is an exception. Most of the time, however, the things that occurred only once but were presented as major issues are grave ones: they would do real harm even if only a small percentage of users ran into them, e.g. users not understanding how to start an app. Or it is obvious, even without testing the assumption, that the same thing will happen to others in the same circumstances (same devices, backgrounds, internet connections, etc.).


Thus, it is best to answer: "We only tested with five people in total, and even in this small sample we already found one person with this problem. So it is likely that others will have this problem as well if we keep on testing."


This objection is also related to Reason 1: finding something grave even just once is already meaningful.
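For those who want to put a rough number on this, here is a small sketch of a standard problem-discovery calculation; the 20% and 10% incidence rates are made-up values for illustration, and the formula assumes each participant runs into the issue independently:

```python
def prob_seen_at_least_once(p: float, n: int) -> float:
    """Probability that an issue affecting a fraction p of users
    shows up at least once in a test with n participants."""
    return 1 - (1 - p) ** n

# If the issue affects, say, 20% of all users, the chance of catching it
# even once in only five sessions is already about 67%...
print(prob_seen_at_least_once(0.20, 5))   # ~0.67
# ...so seeing it once in five tests is hardly surprising, and it suggests
# a non-trivial share of real users would be affected too.
print(prob_seen_at_least_once(0.10, 5))   # ~0.41
```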

3) "Okay, we can change this, but please let's do that after we're done testing together with the other changes."


This is somebody playing for time because they may be too busy at the moment to do something about it. Or it may be somebody who hopes that the problem will turn out to be minor because not that many people ended up having it, so they don't have to change anything after all. However, what happens in the meantime is the following: let's say 3 out of the 5 initial participants don't understand that they can scroll on the website because of a design problem right at the fold. The week after, another 2 people have the same problem, and the week after that another 3. Meanwhile, two other issues come up that start to get mentioned over and over. Although you as the researcher have heard these issues several times, for each participant it is the first time they come across them, and they may want to spend a few minutes talking about them. Thus, your hour with them reveals fewer and fewer new issues, and you're unlikely to see any of the deeper-level usability issues.

What to say: "This issue is likely to come up all the time. A timely fix would increase the value of the remaining tests immensely, because at the moment we are only generating more of the same findings. The iterative design literature also suggests that a short cycle between research and development is highly beneficial in agile settings: it improves the product sooner rather than later, which is often easier from a development perspective as well."

4) "Let's test more before we make any decisions."


Once the value of research has been proven, people may get a little too enthusiastic about it, and many times I have seen teams stall on making changes because they wanted research on every single aspect first. While the idea is not wrong in general, research, as described above, means a lot of effort, so it may not be possible or beneficial to test every single thing that needs deciding. Delaying important choices because you are only 90% certain can stall progress and feed into the issue described under Reason 3.

There are mainly two cases in which a decision can be made without doing research first:

1) Previous research already gives strong hints about what the decision should be. While testing would of course always be better, it requires a lot of time and effort, and sometimes it is wiser not to test if it is already clear what the outcome will be.

2) Research is really helpful for certain things, such as usability issues, getting an idea of preferences, or finding out whether the content and functionality fit users' expectations. However, some things won't produce interesting findings simply because they don't matter to the user. As with a non-significant empirical finding, there may be no right or wrong choice, and other aspects could be given more weight instead (such as cost or effort).

While it may be hard to determine what does and what doesn't need testing (and, admittedly, there is a risk in not testing), I would propose saying something like this: "Our previous research suggests that users are likely to lean towards this choice, and it also has the advantage of being the cheaper option. How about we decide this among ourselves for now and see in our next round of regular testing whether it comes up as an issue at all?"