<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0"><channel><title>disambiguity</title><description>understanding humans</description><managingEditor>noemail@noemail.org (Leisa Reichelt)</managingEditor><pubDate>Wed, 6 Jan 2021 07:24:04 GMT</pubDate><generator>WordPress https://wordpress.org/</generator><link>https://disambiguity.com</link><language>en-us</language><itunes:explicit>no</itunes:explicit><itunes:keywords>design,interactiondesign,informationarchitecture,usability,userexperience,customer,experience</itunes:keywords><itunes:summary>Chatting with smart people about design, interaction design, information architecture, user experience, usability and other cool designy stuff</itunes:summary><itunes:subtitle>Chatting with smart people about design, interaction design, information architecture, user experience, usability and other cool designy stuff</itunes:subtitle><itunes:category text="Technology"/><itunes:author>Leisa Reichelt</itunes:author><itunes:owner><itunes:email>leisa.reichelt@gmail.com</itunes:email><itunes:name>Leisa Reichelt</itunes:name></itunes:owner><item><title>A template for more deliberate 1:1 meetings (v2)</title><link>https://disambiguity.com/a-template-for-more-deliberate-11-meetings-v2/</link><category>Leadership</category><category>management</category><pubDate>Wed, 6 Jan 2021 07:24:04 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=2003</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[
<p>I mentioned <a href="https://twitter.com/leisa/status/1346236242803924992?s=20">on Twitter recently</a> my intention to do better 1:1 meetings with my direct reports and stakeholders in 2021 and promised to share how I was approaching it. </p>



<p>This is not the first time I&#8217;ve made this resolution and it often tails off into adhoc-ery all too quickly but I&#8217;m hoping that being intentional about <strong><em>why</em></strong> I am doing this will help it stick more.</p>



<p>To that end there are a few things that I&#8217;m changing in how I do my 1:1s this year, including:</p>



<ol class="wp-block-list"><li><strong>Diarising time each Monday to prepare for my 1:1s</strong>. Here I need to admit that usually I&#8217;ve been collecting topics on post-it notes throughout the week and then winging it through the 1:1 meeting, basically triaging on the fly the most important topics that I or my team member bring in that day. This means that we tend to deal with urgent things but not always important things.</li><li><strong>Creating a template for each of us to complete</strong> in preparation for the meeting, to help ensure that we regularly touch on important topics that might otherwise be crowded out by urgent ones. More on this below.</li><li><strong>Taking proper notes and actions in the meeting</strong>, and making sure the actions get acted on. Basic stuff really, but the former wasn&#8217;t really happening and the latter could be a little hit and miss. I&#8217;m using the template as a way to build accountability for the actions.</li></ol>



<p>With that in mind, here is the current iteration of the template I&#8217;m planning to use. I&#8217;ve used it only twice so far and have already iterated on it a little (hence the v2 in the heading), and I expect I will continue to iterate on it over time. So far it&#8217;s been pretty positively received, but it is still far, far from perfect. Feedback welcome.</p>



<p>The idea is that each week in preparation for the 1:1 my direct reports, who are mostly Research Managers, fill out this template and take the time to reflect on each of these items. I also have a section I need to fill in. We both contribute topics in addition to the standard items. </p>



<p><strong>Template for my 1:1 Meetings</strong></p>



<p><strong><em>Date of the 1:1 Meeting</em></strong></p>



<figure class="wp-block-table"><table class="has-subtle-light-gray-background-color has-fixed-layout has-background"><tbody><tr><td><strong>Actions from our last 1:1</strong></td><td>Captured from the previous 1:1<br />&#8211; action 1<br />&#8211; action 2 etc.</td></tr><tr><td><strong>Emoji of the week:</strong></td><td>Meaning discussed in the meeting</td></tr><tr><td><strong>Win</strong> &#8211; <em>what wins did you have last week?</em></td><td>My managers fill out these sections of the template BEFORE the 1:1 meeting</td></tr><tr><td><strong>Frustration</strong> &#8211; <em>what was your biggest frustration last week?</em></td><td></td></tr><tr><td><strong>Focus</strong> &#8211; <em>what is your focus this week? Pick one thing</em></td><td></td></tr><tr><td><strong>Growth plan focus</strong> &#8211; <em>what aspect of <strong>your</strong> growth plan are you currently working on?</em></td><td></td></tr><tr><td><strong>Project Status</strong> &#8211; <em>what are your <strong>top</strong> three (or fewer) projects right now and how are they tracking?</em></td><td>1.<br />2.<br />3.</td></tr><tr><td><strong>Team Health</strong> &#8211; <em>anything remarkable to report re: people doing really well or poorly?</em></td><td></td></tr><tr><td><strong>Stakeholder Health</strong> &#8211; <em>anything remarkable to report re: relationships going well or poorly?</em></td><td></td></tr><tr><td><strong>Reflections and/or feedback from Leisa</strong></td><td>This section is for ME to fill out each week. Needs to be personal feedback, not just feedback/opinions on work, ideas, questions.</td></tr><tr><td><strong>Items for this week:</strong><br />&#8211; topic 1<br />&#8211; topic 2<br />&#8211; topic 3</td><td>Take tonnes of notes.</td></tr></tbody></table></figure>



<p><em>(My actual template is in Confluence and looks lots better, as it has colourful emojis all over it. Annoyingly, WordPress won&#8217;t play nicely with emojis here, so I&#8217;ve prioritised making this version accessible over sharing a screenshot of the pretty one.)</em></p>



<p>During the meeting I capture a tonne of notes as it progresses (my background as a qual researcher has prepared me well for this!). These notes are shared on a Confluence page so that both of us can add more, annotate and so on.</p>



<p>As soon as the 1:1 meeting is finished, I update the template with next week&#8217;s items and the agreed actions.</p>



<p>I am a little concerned that the extensive &#8216;structured&#8217; section might not leave enough time to focus on the more &#8216;urgent&#8217; topics &#8211; especially given these currently tend to take up the entire time available. But that is also somewhat deliberate, so perhaps it&#8217;s not a bad thing.</p>



<p><strong>The 4Ls &#8211; an alternative approach I might use on a monthly basis.</strong></p>



<p>When I was talking about this approach with my colleague <a href="https://domprice.me/">Dom Price</a>, he shared another 1:1 format that he likes to use. I&#8217;m not sure it would work quite so well for me on a weekly basis but I might experiment with using this every 4-6 weeks as I think it does provide a really different perspective on how people are doing in their work and how you might be able to help them be more successful.</p>



<p>Dom&#8217;s approach asks people to reflect on these four categories:</p>



<ul class="wp-block-list"><li><strong>Loved</strong> &#8211; what I loved doing this month</li><li><strong>Longed for</strong> &#8211; what I longed to be doing but couldn&#8217;t find the time for</li><li><strong>Loathed</strong> &#8211; what I really did not enjoy doing this month</li><li><strong>Learned</strong> &#8211; what I learned this month</li></ul>



<p>The idea is to try to support the person you are managing to increase the loved and learned, enable the longed for and remove the loathed. </p>



<p>I&#8217;m sure there are many more great frameworks from people who have given this a good deal more thought than I have. I&#8217;d love to hear what&#8217;s worked well for you and what you&#8217;d recommend.</p>
]]></content:encoded><description>I mentioned on Twitter recently my intention to do better 1:1 meetings with my direct reports and stakeholders in 2021 and promised to share how I was approaching it. This is not the first time I&amp;#8217;ve made this resolution and it often tails off into adhoc-ery all too quickly but I&amp;#8217;m hoping that being intentional&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/a-template-for-more-deliberate-11-meetings-v2/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;A template for more deliberate 1:1 meetings (v2)&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">1</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>Ambient Reassurance</title><link>https://disambiguity.com/ambient-reassurance/</link><category>social &amp; community</category><pubDate>Thu, 15 Oct 2020 11:00:38 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1990</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[
<p>A long time ago, in 2007, I wrote about <a href="https://www.disambiguity.com/ambient-intimacy/">ambient intimacy</a>, a name for a new kind of experience that came about as a result of the emergence of social media, in particular Twitter.</p>



<p>Over the last seven months I have been working from home, remotely from my team. It has just been in the past couple of weeks that I&#8217;ve been able to come up with a way of describing a particular kind of <em>lack</em> that I&#8217;ve been feeling. </p>



<p>There are many things we lack (and gain) in working remotely, but this is one I&#8217;ve not considered before, and I don&#8217;t hear other people talking about it either. </p>



<p>I call it <strong>ambient reassurance</strong>. (Almost certainly the organisational psychologists have another term for it but I can&#8217;t find it!)</p>



<p>Ambient reassurance is the experience of small, unplanned moments of interaction with colleagues that provide reassurance that you&#8217;re on the right track. They provide encouragement and they help us to maintain self belief in those moments where we are liable to lapse into unproductive self doubt or imposter syndrome.</p>



<p>In hindsight I realise that these moments flowed naturally in an office environment.</p>



<p>Sometimes we seek them out in an ad hoc way &#8211; a conversation in the hallway about the thing you&#8217;re working on right now, a request for someone to quickly look at something and give a tiny bit of feedback, a tiny moan about something you&#8217;re struggling with.</p>



<p>Sometimes they are completely unintended &#8211; someone looks over your shoulder at something you&#8217;re working on, or gives you a few encouraging words as you enter or leave a tough meeting, or just happens to comment positively on something they saw you do recently.</p>



<p>It is possible for these to happen when we are all remote, but it takes more effort and intentionality. As a result, I think we experience much less of this ambient reassurance when we work remotely. </p>



<p>Concerned about disrupting people&#8217;s flow with messaging, we&#8217;re much less likely to send that tiny message of encouragement or positivity. Without visibility of whether people are in focus mode or have a moment of availability for a small interaction, we keep things to ourselves. We only reach out and demand someone&#8217;s attention if it feels sufficiently important or well thought out.</p>



<p>So many of our interactions now are textual. More visible, auditable, traceable. Interactions that make us think twice. Far from a reassuring smile across the room or a secret thumbs up from the audience.</p>



<p>Getting the balance right is hard. Protecting our colleagues&#8217; work-time flexibility and their focus time helps deliver some of the advantages of working remotely.</p>



<p>And yet, in the absence of these tiny, human interactions, we&#8217;re more dependent on our own, individual self assurance. I never realised, until COVID and this long stretch of remote work, how dependent my self assurance was on ambient reassurance from others. In its absence, the natural peaks and troughs we experience &#8211;  from confidence in our abilities to despair that we will never be good enough &#8211;  feel more frequent and more extreme.</p>



<p>So knowing this, I experiment. With reaching out and sharing more than I might otherwise. Both about what I&#8217;m working on and how I feel about it, and also, with micro reassurance for others. I worry about the extra load I might be placing on others. And I ponder how our tools might take this new (to me) need into account as well. </p>



<p>Meanwhile, we muddle through. And I wonder if you&#8217;re experiencing this absence of ambient reassurance as well?<br /><br />So, here&#8217;s a little reassurance for you: whatever you&#8217;re working on right now, you&#8217;re almost certainly doing better than you think you are. Don&#8217;t be so hard on yourself. Don&#8217;t be afraid to reach out for some feedback or just plain reassurance. Keep going! And stay safe.</p>



<p><em>If you found this interesting, you might also be interested in some research my team at Atlassian recently shared on what makes a difference to how people are experiencing remote work during the pandemic. <a rel="noreferrer noopener" href="https://www.atlassian.com/blog/teamwork/new-research-covid-19-remote-work-impact" data-type="URL" data-id="https://www.atlassian.com/blog/teamwork/new-research-covid-19-remote-work-impact" target="_blank">Read more here</a>.</em></p>
]]></content:encoded><description>A long time ago, in 2007, I wrote about ambient intimacy, a name for a new kind of experience that came about as a result of the emergence of social media, in particular Twitter. Over the last seven months I have been working from home, remotely from my team. It has just been in the&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/ambient-reassurance/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;Ambient Reassurance&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>The Benefits of an Open User Research Practice</title><link>https://disambiguity.com/the-benefits-of-an-open-user-research-practice/</link><category>research</category><pubDate>Thu, 14 Nov 2019 10:41:43 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1985</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p><img fetchpriority="high" decoding="async" class="alignnone" src="https://userresearch.blog.gov.uk/wp-content/uploads/sites/102/2014/07/User-research-is-a-team-sport.jpg" alt="User Research is a team sport poster" width="1500" height="1981" /></p>
<p>I never really loved mathematics. I am much more of a big picture person than a tiny detail person. But I usually did ok in maths tests because you got marks not just for the answer but for showing all the thinking you did to get there. I didn&#8217;t always get the answer right, usually as a result of a simple mistake along the way, but you could see how I was thinking and where to intervene to correct me. We both learned.</p>
<p>I apply the same approach to research practice, especially when working with teams who may not have a particularly strong understanding of how and why we do things the way we do. An open research practice has multiple benefits, including:</p>
<ul>
<li>learning how and why we make the decisions and take the actions we do at each stage</li>
<li>understanding what tradeoffs are being made and the impact this has (there are always tradeoffs)</li>
<li>understanding how we move from data to insight and deeply understanding and trusting what we have learned.</li>
</ul>
<p>Openness in research requires the willingness to adapt, to not always be right and perfect, and to go slower than you want to (and often far slower than people expect). As a result, it requires a decent amount of bravery.</p>
<p>There are three crucial stages for openness in research practice:</p>
<ul>
<li>study design</li>
<li>fieldwork</li>
<li>analysis and synthesis</li>
</ul>
<h3>Openness in study design</h3>
<p>Being open in the study design phase means bringing your team into the process of considering who we want to talk to (and, as a result, who we choose not to include), and what we want to talk with them about. In particular, the consideration of <strong>what kinds of differences matter</strong> in our audience base is an important one, and thinking about research recruitment can help to facilitate it.</p>
<h3>Openness in fieldwork</h3>
<p>Being open during fieldwork refers to a researcher&#8217;s willingness to have team mates observe the research as it happens. There are many different ways that you can enable this, and different levels of interactivity that your team might have with the participant during the study. Being comfortable with having your team observe as you conduct research can be really challenging for researchers at first. Once it becomes standard practice, though, it quickly becomes essential and helps us to demonstrate the <a href="https://medium.com/mule-design/research-questions-are-not-interview-questions-7f90602eb533">differences between the research questions we want to answer and the questions we need to ask participants</a> in order to answer them.</p>
<p>Many researchers are concerned that observers will watch one session and run off to change the entire product based on a single data point. Although this is a commonly voiced concern, it is usually easily managed by clearly setting the expectation that everyone on the team must observe at least two sessions before participating in the analysis process (from which the findings emerge). We often use <a href="https://articles.uie.com/user_exposure_hours/">UIE&#8217;s Exposure Hours</a> requirement of at least two hours of observation every six weeks as a metric that encourages team mates to experience more than a single session in any one research study.</p>
<h3>Openness in analysis and synthesis</h3>
<p>While giving team mates the opportunity to observe their customers and users first hand has obvious benefits, allowing them to participate in the analysis process is arguably even more important. This is where we truly pull back the curtain and show the hardest work of research: making sense of all the stories we have heard and the things we have observed.</p>
<p>Robust analysis and synthesis is probably one of the most overlooked aspects of the research process &#8211; all too often we see examples of people observing a number of sessions, taking a few bullet point notes et voila &#8211; the findings immediately emerge.</p>
<p>If only it were really that simple. Analysis and synthesis is hard, time consuming work when done properly. Doing it properly is essential if you want to do the work required to rid yourselves of as many of those annoying cognitive biases as possible &#8211; in particular the <a href="https://en.wikipedia.org/wiki/Confirmation_bias">confirmation bias</a> and <a href="https://en.wikipedia.org/wiki/Serial-position_effect#Recency_effect">recency effect</a>.</p>
<p>Allowing and encouraging team mates to participate in research analysis gives them an opportunity to get much closer to more of the data, but it also helps them to understand the way that we process that data in order to make sense of it and draw conclusions. It allows them to challenge the ways we are forming narratives about what we believe that data means and demonstrates the traceability of those claims back to the original source data.</p>
<p><a href="https://userresearch.blog.gov.uk/2014/06/05/how-we-do-research-analysis-in-agile/">This blog post and video</a> describe how I&#8217;ve done collaborative analysis successfully with teams.</p>
<h3>Open research is challenging but worthwhile</h3>
<p>It is beyond dispute that working in an open way &#8211; research as a team sport &#8211; is slower and more painful for researchers than putting our heads down and getting through the work alone. It is, on the surface, less efficient and more annoying. Nonetheless, if you want to grow understanding and respect for the research craft in your organisation, it is very much worth the overhead of opening up your processes to your team and inviting them in to participate actively. Over time, the increased Research IQ of your team will pay dividends and you will be able to have impact more efficiently.</p>
<p>It is important to remember that what matters most is not the time it takes to deliver the report, but the impact our research has on our team&#8217;s ability to make good decisions for our customers and users. Let&#8217;s make sure we&#8217;re optimising for efficiency towards the right outcome.</p>
]]></content:encoded><description>I never really loved mathematics. I am much more of a big picture person than a tiny detail person. But I usually did ok in maths tests because you got marks not just for the answer but for showing all the thinking you did to get there. I may not always get the answer right,&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/the-benefits-of-an-open-user-research-practice/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;The Benefits of an Open User Research Practice&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">1</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>Five dysfunctions of ‘democratised’ research. Part 5 – Stunted capability</title><link>https://disambiguity.com/five-dysfunctions-of-democratised-research-part-5-stunted-capability/</link><category>user experience</category><pubDate>Sat, 9 Nov 2019 03:36:13 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1975</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>This is the fifth and final in a series of posts examining some of the most common and most problematic problems we need to consider when looking to scale research in organisations. You can start with the first post in this series <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/">here</a>.</p>
<p>Here are five common dysfunctions that we are contending with.</p>
<ol>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/"><span class="s1">Teams are incentivised to move quickly and ship, care less about reliable and valid research</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/"><span class="s1">Researching within our silos leads to false positives</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-3-research-as-a-weapon/"><span class="s1">Research as a weapon (validate or die)</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-4-quantitative-fallacies/"><span class="s1">Quantitative fallacies</span></a></li>
<li><span class="s1">Stunted capability</span></li>
</ol>
<p>In this post, we’re looking at what happens when the research practice in an organisation fails to mature.</p>
<h2>A great first step</h2>
<blockquote>
<p class="p1"><span class="s1">Testing one user is 100 percent better than testing none &#8211; Steve Krug, Don’t Make Me Think</span></p>
</blockquote>
<p class="p1"><span class="s1">Many organisations get started doing research with customers and users off the back of encouragement from people like <a href="http://sensible.com/">Steve Krug</a> and his classic book <a href="http://sensible.com/dmmt.html">‘Don’t Make Me Think’</a>. In this and other books Steve makes simple usability testing accessible and achievable to almost anyone. </span></p>
<p class="p1"><span class="s1">Steve and others like him are evangelists reaching out to those companies who are afraid to engage with their customers to understand opportunities for them to improve. This is important work. Their message is usually that talking to customers is not hard or scary, and that we’ll be better for doing a bit of it, even not perfectly, than not doing it at all.</span></p>
<h2 class="p1"><span class="s1">The first step can be scary</span></h2>
<p class="p1"><span class="s1">And they are right. Having anyone in the company talking to just one user (and hopefully some more) is a fabulous first step. But it is intended to be just that &#8211; a <b>first</b> step. An encouragement to realise the benefits of involving people outside our offices in the process of designing and developing products and services, a help in overcoming the fear of engaging with customers and users, and an opportunity to experience how beneficial doing so can be.</span></p>
<p class="p1"><span class="s1">For those of us who work with research participants on a regular basis, it may be hard to recall exactly how terrifying those first few research sessions felt. Even trained and experienced researchers continue to experience some background fear (or exhilaration?) at all the things that could go wrong in a research study &#8211; and there are plenty!</span></p>
<p class="p1"><span class="s1">The thing about first steps, though, is that they are usually intended to be followed by second steps. Once we break through the fear (or in some cases, just lack of awareness), the idea is that we continue to increase the maturity of our practice.</span></p>
<p class="p1"><span class="s1">And this is where many organisations seem to hit a roadblock. More and more people in the organisation might be out there eagerly involving customers in the process of shaping their products, but they often don’t invest in improving their own research skills or in hiring people who have training and experience doing research.</span></p>
<h2 class="p1"><span class="s1">Talking to users is not research</span></h2>
<p class="p1"><span class="s1">One important realisation we need to have on the path to maturity is that ‘talking to customers’ is not actually the same thing as doing research. Talking to customers, or watching customers use our products and services, has many benefits: it can increase our empathy for our customers and users, it can expose us to scenarios of use that are dramatically different to our own and to what we would expect, and it can provide clues as to where the biggest problems may lie. All of these are good outcomes.</span></p>
<p class="p1"><span class="s1">If we want to use research as evidence for decision making &#8211; either for product strategy or design decisions &#8211; then we need to be able to do more to ensure that the insights we are gleaning are <em>sufficiently</em> reliable and valid.</span></p>
<h2 class="p1"><span class="s1">Research doesn’t need to be ‘perfect’, just valid and reliable.</span></h2>
<blockquote>
<p class="p1"><span class="s1"> ‘I don’t need the research to be perfect, I just need enough to help me make a decision’.</span></p>
</blockquote>
<p class="p1"><span class="s1">Often this is said in response to the suggestion that the research we should be doing will take longer or be more difficult and expensive than our speaker would like. In this situation, there is often a pre-existing &#8216;hunch&#8217; and they are looking to users for validation. Or perhaps they are stuck between two options and are seeking a tie-breaker.</span></p>
<p class="p1"><span class="s1">Any specialist researcher has almost certainly had their recommended approach discredited as ‘too academic’, </span><span class="s1">and sometimes it is true. Sometimes the research methodology <em>is</em> overdone for the question the business is seeking to answer. But what often follows is a bit of a race to the bottom where considered <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/">sample design and appropriate methodology are quickly discarded in favour of whatever is fastest and easiest</a>.</span></p>
<p class="p1"><span class="s1">Without the right experience and training, all too often interviewers ‘cut to the chase’, getting more or less directly to the topic at hand. Somewhere in the world right now a product manager under pressure to make a decision is asking questions like these in a customer interview:</span></p>
<blockquote>
<p class="p1"><span class="s1">‘here’s what we’re thinking of making, what do you think about it?’</span></p>
</blockquote>
<p>or, perhaps worse&#8230;</p>
<blockquote><p><span class="s1">‘if we made this, would you pay for it?’</span></p></blockquote>
<p>It can be easy and tempting &#8211; so much faster and often quantitative &#8211; to <a href="https://medium.com/mule-design/research-questions-are-not-interview-questions-7f90602eb533">mistake the research question for the interview question.</a></p>
<p class="p1"><span class="s1">Even with training, it seems that the <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-4-quantitative-fallacies/">urge to be able to say that 10 out of 12 people said they would pay for it is almost irresistible</a>. ‘Beating around the bush’ to get the question answered seems like a waste of everyone’s time, at a time when a bias to action and the desire to ship at velocity are what is most valued.</span></p>
<p>(It shouldn&#8217;t really be a surprise that lack of research capability maturity exposes us to the previous four dysfunctions).</p>
<h2 class="p1"><span class="s1">Matching methodology to risk</span></h2>
<p class="p1"><span class="s1">Whilst we should have plenty of sympathy for this desire for lightweight research and simplicity, it is important to ensure that the methods employed are matched to the <strong>risk</strong> involved in the decision, rather than the most compressed timeframe.</span></p>
<p class="p1"><span class="s1">As our organisations grow, the decisions we take using evidence from our customers can become more and more substantial &#8211; the gains of getting it right are greater and the risks of getting it wrong get uglier.</span></p>
<p class="p1"><span class="s1">In the same way, our research maturity needs to continue to grow so that we can continue to match the size of the risk of getting it wrong.</span></p>
<p class="p1"><span class="s1">This is not to say that mature organisations only ever do serious, time consuming research. Rather, that we invest where the risk is highest. </span></p>
<p class="p1"><span class="s1">Investment might look like hiring trained researchers who can design and recruit the right sample and conduct the research in a way that reduces bias. Or investment might look like iterative research with an ever increasing number of increasingly diverse participants, sprint after sprint, allowing the team to continue to learn. This can work beautifully when the team is able to be responsive to that learning over time.</span></p>
<h2>Investing too much</h2>
<p class="p1"><span class="s1">Conversely, there are situations where the investment in research is far too high for the decision being made. This often happens where the organisation’s design process has broken down, or where designers have entirely lost confidence in being able to make relatively conventional design decisions. In these situations we design complex studies to ‘validate’ one micro design treatment over another. Here, the mismatch of risk to research investment can result in large quantities of what I would consider wasteful and often <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/">unreliable</a> research.</span></p>
<h2 class="p1"><span class="s1">Beware Dunning Kruger</span></h2>
<p class="p1"><img decoding="async" class="alignnone" src="https://www.businesstimes.com.sg/sites/default/files/styles/article_img/public/image/2019/05/18/BT_20190518_LLDKEP1_3784237_0.jpg?itok=ol6MYUvM" alt="Dunning Kruger graph of confidence vs expertise" width="440" height="293" /></p>
<p class="p1"><span class="s1">User Research is particularly susceptible to <a href="https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect">Dunning Kruger syndrome</a>, wherein a relatively small amount of knowledge can result in an excess of confidence. Many people claim a &#8216;background in research&#8217; when they could mean they watched someone else do a bunch of usability studies in their last job, or they did a research based degree at university. </span></p>
<p class="p1"><span class="s1">Many designers and product managers are entirely happy with the outcomes they get from research and how it enables their practice &#8211; and often loudly object to the suggestion that anyone could get a better result from the research than they do. </span></p>
<p class="p1"><span class="s1">Yet, at the same time, the harsh reality is that the work that is done is often resulting in misleading outcomes that can put their product and their organisation at risk. </span></p>
<p class="p1"><span class="s1">It also undermines the reputation of research in the organisation when people claim when a ‘researched’ product goes into the world and doesn’t succeed as expected. ‘We did research before and it didn’t work’.</span></p>
<p class="p1"><span class="s1">In the same way that often both design and product management capabilities require an engineering led organisation to move through the stages from <a href="https://en.wikipedia.org/wiki/Four_stages_of_competence">unconscious incompetence through to conscious competence</a> ,<span class="Apple-converted-space"> </span> the very same is true for the research capability. </span></p>
<h2 class="p1">Achieving research maturity</h2>
<p>And so, at the end of our five dysfunctions, what can be done to provoke an organisation not only to involve users in the process of creating products and services, but to start &#8211; and keep &#8211; growing its ability to do so, revealing important insights that are both reliable and valid?</p>
<p>Here are some things that have worked for me.</p>
<p>Perhaps through improving business fluency: <a href="https://disambiguity.com/guerrilla-empathy/">talking less about empathy</a> and more about the risk to the business of getting it wrong. Talking less about customer obsession and more about the reliability and validity of the different types of evidence we can use to make decisions. And by running an open research practice &#8211; getting out of the black box, removing any mystery about our work, showing our workings and involving others in the process.</p>
<p>Make use of existing momentum &#8211; bring new shape and substance to whatever your organisation already uses to turn its attention to its customers &#8211; whether it&#8217;s an NPS survey, a customer convention, a feedback form, or a guerrilla research practice &#8211; and shape those existing connections into something more insightful, more reliable and more valid.</p>
<p>Be brave, but be patient and we&#8217;ll get there.</p>
]]></content:encoded><description>This is the fifth and final in a series of posts examining some of the most common and most problematic problems we need to consider when looking to scale research in organisations. You can start with the first post in this series here. Here are five common dysfunctions that we are contending with. Teams are incentivised&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-5-stunted-capability/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;Five dysfunctions of ‘democratised’ research. Part 5 – Stunted capability&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">5</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>Five dysfunctions of ‘democratised’ research. Part 4 – Quantitative fallacies</title><link>https://disambiguity.com/five-dysfunctions-of-democratised-research-part-4-quantitative-fallacies/</link><category>research</category><pubDate>Tue, 1 Oct 2019 11:46:19 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1960</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>This is the fourth in a series of posts examining some of the most common and most problematic problems we need to consider when looking to scale research in organisations. You can start with the first post in this series <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/">here</a>.</p>
<p>Here are five common dysfunctions that we are contending with.</p>
<ol>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/"><span class="s1">Teams are incentivised to move quickly and ship, care less about reliable and valid research</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/"><span class="s1">Researching within our silos leads to false positives</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-3-research-as-a-weapon/"><span class="s1">Research as a weapon (validate or die)</span></a></li>
<li><span class="s1">Quantitative fallacies</span></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-5-stunted-capability/"><span class="s1">Stunted capability</span></a></li>
</ol>
<p>In this post, we’re looking at what happens when research is &#8216;weaponised&#8217; in teams.</p>
<h2 class="p1"><span class="s1"><b>Dysfunction #4 &#8211;<span class="Apple-converted-space">  </span>Quantitative fallacies</b></span></h2>
<blockquote>
<p class="p2"><span class="s1">I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be. &#8211; Lord Kelvin</span></p>
</blockquote>
<p class="p2"><span class="s1">I fear many people feel like Lord Kelvin. </span></p>
<p class="p2"><span class="s1">There seems to be an intuition that knowledge unable to be expressed numerically is less satisfactory than knowledge that fits into a graph. In order for an assertion to be worth considering as serious, it must have a number associated with it. Everything else is anecdote. </span></p>
<p class="p2"><span class="s1">Perhaps we have inherited this from finance. Finance are, after all, the masters of presenting future fictions with bold numbers and graphs. Finance whose authority appears to be rarely challenged.</span></p>
<p class="p2"><span class="s1">Organisations love quantitative research because it is fast and feels definitive.</span></p>
<p class="p2"><span class="s1">Smash out a survey, launch an experiment, categorise customer feedback by keyword, look at the product analytics. Somehow, numbers just feel more reliable. More trustworthy.</span></p>
<blockquote>
<p class="p3"><span class="s1">The <b>McNamara fallacy</b> (also known as <b>quantitative fallacy</b>), named for Robert McNamara, the US Scretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others. The reason given is often that these other observations cannot be proven.</span></p>
<p>&#8211;  Daniel Yankelovich &#8220;Corporate Priorities: A continuing study of the new demands on business.&#8221; (1972)<i><br />
</i></p></blockquote>
<p class="p2"><span class="s1">In my experience, presenting a number boldly is much less likely to be challenged than any assertion backed up by more qualitative evidence. Yet surprisingly few people seem to be inclined (or able) to ensure that the work done to establish that number has any rigour. </span></p>
<p class="p2"><span class="s1">Take surveys. How many organisations the the time to do cognitive interviewing to ensure that the data collected in the survey is valid and reliable? Very few. Most don&#8217;t know it is even something you should do, and the others don&#8217;t want to spend the time.</span></p>
<p class="p2"><span class="s1">Do we just have blind faith that our survey respondents will make sense of the questions the same way as us? Or do we actually not really care so much about the validity? We just want an answer. A definitive sounding answer. Some data to show that we are evidenced based.</span></p>
<p class="p2"><span class="s1">How many teams when A/B testing their two versions of the design using unmoderated research watch the videos to make sure that people did <i>really</i> complete the task in a way that could be considered an adequate user experience? To check that the people who undertook the research have any resemblance to who they said they were in the screener? To ensure that they things they say and the scores they give make sense when compared to the experience they actually had? </span></p>
<p class="p2"><span class="s1">All sounds a bit time consuming doesn’t it, when all you really want is data to tell you what to do. To take the decision out of your hands.</span></p>
<p class="p2"><span class="s1">We’ve managed to convince ourselves with a large enough volume of respondents, these problems go away. But the fact is, these numbers can easily be completely misleading. People don’t understand the survey question and answer anyway. To get the incentive. To find out what other questions you’re asking, because some of us are completists.</span></p>
<p class="p2"><span class="s1">Recently my team recently did some survey testing &#8211; we were testing a feature prioritisation survey (not my favourite). We observed people who told us they didn&#8217;t understand what a features as described in the survey. Regardless, it sounded cool and they then went on to prioritise it highly against other features in the survey regardless. </span></p>
<p class="p2"><span class="s1">How often does this happen? No one knows.</span></p>
<blockquote>
<p class="p5"><span class="s1">The first step is to measure whatever can be easily measured. This is OK as far as it goes.</span></p>
<p class="p5"><span class="s1">The second step is to disregard that which can&#8217;t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. </span></p>
<p class="p5"><span class="s1">The third step is to presume that what can&#8217;t be measured easily really isn&#8217;t important. This is blindness. The fourth step is to say that what can&#8217;t be easily measured really doesn&#8217;t exist. </span></p>
<p class="p5"><span class="s1">This is suicide.</span></p>
<p class="p5"><span class="s1">&#8211;  Daniel Yankelovich &#8220;Corporate Priorities: A continuing study of the new demands on business.&#8221; (1972)</span></p>
</blockquote>
<p class="p2"><span class="s1">There are multiple, related quantitative fallacies. </span></p>
<p class="p2"><span class="s1">Some like McNamara and Lord Kelvin, believe that quantitative data is superior. But others are more complex &#8211; they trust that the trade off for speed and convenience does not have a dangerous impact to validity and reliability. Other fallacies result from absence of experience and ability in defending qualitative data and critiquing quantitative methods. </span></p>
<p class="p2"><span class="s1">The fastest and most ‘definitive’ sounding methodologies (and the tools that enable them) have never been more popular. While it is encouraging that more and more people are keen to take a more human centred approach to product design,<span class="Apple-converted-space">  </span>experienced researchers need to intervene to make sure that these methods are being used, and critiqued, appropriately. </span></p>
<p class="p2"><span class="s1">We need to ensure that our organisations don’t over index to the rapid, quantitative methods because they play well with senior leadership. And when we do use these methods , we need to ensure that we maintain a high enough quality standard that we can genuinely stand behind the numbers and believe they have some reliability and validity.</span></p>
<p>You can read about the next dysfunction <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-5-stunted-capability/">here</a>.</p>
]]></content:encoded><description>This is the fourth in a series of posts examining some of the most common and most problematic problems we need to consider when looking to scale research in organisations. You can start with the first post in this series here. Here are five common dysfunctions that we are contending with. Teams are incentivised to move&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-4-quantitative-fallacies/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;Five dysfunctions of ‘democratised’ research. Part 4 – Quantitative fallacies&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">3</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>Five dysfunctions of ‘democratised’ research. Part 3 &amp;#8211; Research as a weapon</title><link>https://disambiguity.com/five-dysfunctions-of-democratised-research-part-3-research-as-a-weapon/</link><category>user experience</category><pubDate>Sat, 14 Sep 2019 10:40:30 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1952</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>This is the third in a series of posts examining some of the most common and most problematic problems we need to consider when looking to scale research in organisations. You can start with the first post in this series <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/">here</a>.</p>
<p>Here are five common dysfunctions that we are contending with.</p>
<ol>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/"><span class="s1">Teams are incentivised to move quickly and ship, care less about reliable and valid research</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/"><span class="s1">Researching within our silos leads to false positives</span></a></li>
<li><span class="s1">Research as a weapon (validate or die)</span></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-4-quantitative-fallacies/"><span class="s1">Quantitative fallacies</span></a></li>
<li><span class="s1"><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-5-stunted-capability/">Stunted capability</a></span></li>
</ol>
<p>In this post, we’re looking at what happens when research is &#8216;weaponised&#8217; in teams.</p>
<h2>Dysfunction #3 – Research as a weapon (validate or die)</h2>
<p class="p1"><span class="s1">Over reliance on research, without care to the quality level of the research, can also be a symptom of another problem in our organisations &#8211; lack of trust between disciplines in a cross functional team. </span></p>
<p class="p1"><span class="s1">In particular the relationship between design and product management can have a substantial impact on the way that research is used in product teams. If the relationship is strong, aligned and productive research is often used to support real learning in team. But where the relationship is less healthy, it is not uncommon to see research emerge as a form of weaponry. </span></p>
<p><figure id="attachment_1953" aria-describedby="caption-attachment-1953" style="width: 425px" class="wp-caption alignnone"><img decoding="async" class="wp-image-1953 size-full" title="©XKCD" src="https://disambiguity.com/wp-content/uploads/xkcd-graphs.png" alt="comic about how relationship has declined because partner graphs everything" width="425" height="241" srcset="https://disambiguity.com/wp-content/uploads/xkcd-graphs.png 425w, https://disambiguity.com/wp-content/uploads/xkcd-graphs-300x170.png 300w, https://disambiguity.com/wp-content/uploads/xkcd-graphs-400x227.png 400w" sizes="(max-width: 425px) 100vw, 425px" /><figcaption id="caption-attachment-1953" class="wp-caption-text">©XKCD</figcaption></figure></p>
<h3>Winning wars with research</h3>
<p class="p1"><span class="s1">How does research become weaponry? When it is being used primarily for the purpose of winning the argument in the team. </span></p>
<p class="p1"><span class="s1">Using research as evidence for decision making is good practice, but as we have observed in earlier dysfunctions, the framing of the research is crucial to ensuring that the evidence is reliable and valid. Research that is being done to &#8216;prove&#8217; or &#8216;validate&#8217; can often have the same risk of false positives that comes from the <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/">silo dysfunction.</a><br />
</span></p>
<p>This is because the research will often be too tightly focussed on the solution in question and there is little or no interest from the team around the broader context. This lack of realistic context can result in teams believing that solutions are more successful than they will ultimately turn out to be in the realistic context of use.</p>
<h3>Data as a crutch for design communications</h3>
<p>Another reason to see research being used as weaponry is to compensate for a lack of confidence or ability in discussing the design decisions that have been made. <a href="https://twitter.com/jeniferv">Jen Vandagriff</a>, who I&#8217;m very fortunate to work with at Atlassian, refers to this as having a &#8216;Leaky Design Gut&#8217;.</p>
<p class="p1"><span class="s1">Here we see research &#8216;data&#8217; being used instead of (not as well as) the designer being able to explain why they have made the design decisions they have made. Much as I love research, it is foolish to believe that every design decision needs to be evidenced with primary research conducted specifically for this purpose. Much is already known about design decisions that can enhance or detract from the usability of a system, for example. </span></p>
<p class="p1"><span class="s1">In a team where the designer is able to articulate the rationale and objectives for their design decisions, and there is trust and respect amongst team members, the need to &#8216;test and prove&#8217; every decision is reduced.</span></p>
<h3 class="p1"><span class="s1">Validation can stunt learning<br />
</span></h3>
<p class="p1"><span class="s1">Feeling the need to &#8216;prove&#8217; every design decision quickly leads to a  validation mindset &#8211; thinking, &#8216;I must demonstrate that what I am proposing is the right thing, the best thing. I must win arguments in my team with &#8216;data&#8221;. .</span></p>
<p>Before going straight to &#8216;research as validation&#8217;, it is worth considering whether supporting designers to grow their ability to be more deliberate in how they make and communicate their design decisions could be a more efficient way to resolve this challenge.</p>
<p>Sometimes it is entirely the right thing to run research to help understand whether a proposed approach is successful or not. The challenge is to ensure that we avoid our other dysfunctions as we do this research. And to make sure that this doesn&#8217;t become the primary role of research in the team &#8211; to validate and settle arguments. Rather, it should be part of a &#8216;balanced diet&#8217; of research in the team.</p>
<p class="p1"><span class="s1">If we focus entirely on validation and &#8216;proof&#8217;, we risk moving away from a learning, discovery mindset. We prefer the leanest and <em>apparently</em> definitive practices. A/B testing prototypes and the creation of scorecards are common outputs that result from this mindset. We&#8217;re incentivised to ignore any flaws in the validity of the method if we&#8217;re able to generate data that proves our point. </span></p>
<h3>Alignment over evidence</h3>
<p class="p1"><span class="s1">Often this behaviour comes from a good place. A place where teams are frustrated with constant wheel spinning based on everyone having an opinion. Where the team is trying to move away from opinion based decision making, where either the loudest voice always wins or the team feels frustrated by their inability to make decisions to move forward. Using research as a method to address these frustration does make sense and should be encouraged.</span></p>
<p>Validation research can provide short term results that help move teams forward, but it can reinforce a combative relationship between designers and product managers. Often this relationship stems from a lack of alignment around the real problems the team is setting out to solve. Investing in more &#8216;discovery&#8217; research, done collaboratively as a &#8216;team sport&#8217;, can be incredibly powerful in creating a shared purpose across the team and promoting a more constructive and supportive teamwork environment.</p>
<p class="p1"><span class="s1">Support from an experienced researcher with sufficient seniority can help the team avoid the common pitfalls of seeking the fastest and most definitive &#8216;result&#8217;, but to achieve a shared understanding of both the problem and the preferred solution. Here the practice of research, done collaborative as a team, can help not only to inform the situation to achieve more confident decision making, but also to heal some tensions in the team, by bringing the team together around a shared purpose &#8211; solving real problems for their customers or users.</span></p>
<p><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-4-quantitative-fallacies/">You can read about the fourth dysfunction here.</a></p>
]]></content:encoded><description>This is the third in a series of posts examining some of the most common and most problematic problems we need to consider when looking to scale research in organisations. You can start with the first post in this series here. Here are five common dysfunctions that we are contending with. Teams are incentivised to move&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-3-research-as-a-weapon/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;Five dysfunctions of ‘democratised’ research. Part 3 &amp;#8211; Research as a weapon&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">3</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>Five dysfunctions of ‘democratised’ research. Part 2 &amp;#8211; Researching in our silos leads to false positives</title><link>https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/</link><category>research</category><pubDate>Fri, 13 Sep 2019 18:10:25 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1947</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>This is the second in a series of posts examining some of the systemic problems that organisations tend to rub up against as they seek to &#8216;scale&#8217; research activity in their organisation. We are looking particularly at &#8216;dysfunctions&#8217; that can result in at best, ineffective work and at worst, misleading and risky outcomes. You can start with the first post in this series <a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/">here</a>.</p>
<p>Here are five common dysfunctions that we are contending with.</p>
<ol>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/"><span class="s1">Teams are incentivised to move quickly and ship, care less about reliable and valid research</span></a></li>
<li><span class="s1">Researching within our silos leads to false positives</span></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-3-research-as-a-weapon/"><span class="s1">Research as a weapon (validate or die)</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-4-quantitative-fallacies/"><span class="s1">Quantitative fallacies</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-5-stunted-capability/"><span class="s1">Stunted capability</span></a></li>
</ol>
<p>In this post, we&#8217;re looking at the impact of our organisation structure on research outcomes.</p>
<h2>Dysfunction #2 &#8211; Researching within our silos leads to false positives</h2>
<div id="quote-content">
<blockquote><p>Always design a thing by considering it in its next larger context – a chair in a room, a room in a house, a house in an environment, an environment in a city plan. &#8211; Eliel Saarinen</p></blockquote>
</div>
<p class="p1"><span class="s1">The larger the organisation, the more fragmentation and dependencies you tend to get across teams. Teams are organised by product or platform, and then often by the feature set they work on. Occasionally teams are organised by a user type, and very rarely you find some arranged by user journey.</span></p>
<p class="p1"><span class="s1">Even in this complex ecosystem of teams where dependencies are rife, the desire for autonomy in teams remains. Between teams, we tend to seek to avoid reliance other teams where possible. We don&#8217;t want our own team velocity or ability to ship to be decreased by anyone else. In this environment, collaboration between teams tough. It can be hard to coordinate, there&#8217;s no incentive to take this time and trouble. And this leads to greater focus, which, in theory is great, except&#8230;.</span></p>
<h3>Beware the Query Effect</h3>
<p class="p1"><span class="s1">When it comes to research, we know how critical getting the right research question is. Getting the &#8216;framing&#8217; of the research right is crucial because, as the <a href="https://www.nngroup.com/articles/interviewing-users/"><span class="s2">Query Effect</span></a> tells us (and as we know from our own personal experience) you can ask people any question you like and you&#8217;ll very likely get data in return.</span></p>
<p class="p1"><span class="s1">Whenever you do ask users for their opinions, watch out for the query effect: </span></p>
<blockquote>
<p class="p1"><span class="s1">People can <b>make up an opinion about anything</b>, and they&#8217;ll do so if asked. You can thus get users to comment at great length about something that doesn&#8217;t matter, and which they wouldn&#8217;t have given a second thought to if left to their own devices. &#8211; Jakob Nielsen</span></p>
</blockquote>
<p class="p1"><span class="s1">By focussing our research around the specific thing our team is responsible for, we increase our vulnerability to the query effect.  That little feature is everything to our product team and we want to understand <i>everything</i> our users might think or feel about it, but are we perhaps less inclined to question our team&#8217;s own existence in our research?</span></p>
<p class="p1"><span class="s1">Researchers are encouraged to keep the focus tight, to not concern themselves with questions or context that the team cannot control or influence.</span></p>
<p class="p1"><span class="s1">I like to use this visual illustration of what that is problematic. Take a quick look at the image below. What strange sea creature do we have here do you think? Looks quite scary, right?</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1937 size-large" src="https://disambiguity.com/wp-content/uploads/DuckShadow-1024x593.png" alt="Scary looking shadow in water" width="663" height="384" srcset="https://disambiguity.com/wp-content/uploads/DuckShadow-1024x593.png 1024w, https://disambiguity.com/wp-content/uploads/DuckShadow-300x174.png 300w, https://disambiguity.com/wp-content/uploads/DuckShadow-768x444.png 768w, https://disambiguity.com/wp-content/uploads/DuckShadow-982x568.png 982w, https://disambiguity.com/wp-content/uploads/DuckShadow-400x231.png 400w, https://disambiguity.com/wp-content/uploads/DuckShadow.png 1220w" sizes="auto, (max-width: 663px) 100vw, 663px" /></p>
<p class="p1"><span class="s1">Oh but wait, when you pull back just a little more you realise the story is completely different, and all we have here is a little duck, off for a swim, nothing to worry us at all.</span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-1936 aligncenter" src="https://disambiguity.com/wp-content/uploads/ShadowWithDuck-1024x679.png" alt="Duck swimming in water with shadow (no longer scary) below" width="663" height="440" srcset="https://disambiguity.com/wp-content/uploads/ShadowWithDuck-1024x679.png 1024w, https://disambiguity.com/wp-content/uploads/ShadowWithDuck-300x199.png 300w, https://disambiguity.com/wp-content/uploads/ShadowWithDuck-768x509.png 768w, https://disambiguity.com/wp-content/uploads/ShadowWithDuck-982x651.png 982w, https://disambiguity.com/wp-content/uploads/ShadowWithDuck-400x265.png 400w, https://disambiguity.com/wp-content/uploads/ShadowWithDuck.png 1552w" sizes="auto, (max-width: 663px) 100vw, 663px" /></p>
<p class="p1"><span class="s1">How often is our research so tightly framed on the feature our team is interested in that we make this mistake? </span></p>
<p class="p1"><span class="s1">We think something is important when in actually, in proper context of the real user need, it is not so important at all? Or conversely, we focus so tightly on something we think is important when what our users care about is just out of frame. Just outside the questions we are asking, that they are so busy now, helpfully answering. Even though it is not the important thing.</span></p>
<p class="p1"><span class="s1">I fear this is one of the most common dysfunctions that we see in product teams doing research in the absence of people who are sufficiently experienced and with seniority and confidence to encourage teams to reshape their thinking.</span></p>
<h3>What is the risk?</h3>
<p>Research that is focussed too tightly on a product or a feature increases the risk of a false positive result. A false positive is a research result which wrongly indicates that a particular condition or attribute is present.</p>
<p>False positives are problematic for at least two reasons. Firstly they can lead teams to believe that there is a greater success rate or demand for the product or feature they are researching than is actually the case when experienced in a more realistic context. And secondly, they can lead to a lack of trust in research &#8211; teams are frustrated because they have done all this research and it didn&#8217;t help them to succeed. This is not a good outcome for anyone.</p>
<p class="p1"><span class="s1">The role of the trained and experienced researcher is to not only have expertise in methodology but also to help guide teams to set focus at the right level, to avoid misleading ourselves with data. To ensure we not only gather data, but we are confident we are gathering data on the things that really matter. Even if that requires us to do research on things our team doesn&#8217;t own and cannot fix or to collaborate with others in our organisation. In many cases, the additional scope and effort can be essential to achieving a valid outcome from research that teams can trust to use to move forward.</span></p>
<p><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-3-research-as-a-weapon/">You can read about the third dysfunction here.</a></p>
]]></content:encoded><description>This is the second in a series of posts examining some of the systemic problems that organisations tend to rub up against as they seek to &amp;#8216;scale&amp;#8217; research activity in their organisation. We are looking particularly at &amp;#8216;dysfunctions&amp;#8217; that can result in at best, ineffective work and at worst, misleading and risky outcomes. You can&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;Five dysfunctions of ‘democratised’ research. Part 2 &amp;#8211; Researching in our silos leads to false positives&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">5</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>Five dysfunctions of ‘democratised’ research. Part 1 &amp;#8211; Speed trumps validity</title><link>https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/</link><category>research</category><pubDate>Sun, 8 Sep 2019 16:18:30 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1927</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>The good news is that more and more organisations are embracing research in product teams. Whether it is product managers doing customer interviews or designers doing usability tests, and everything in between &#8211; it is now fairly simple to come up with a compelling argument that research is a thing we should probably be doing.</p>
<p>So we move on to the second generation question. How do we scale this user centred behaviour?</p>
<p>Depending on where in the world you are &#8211; and your access to resources &#8211; the answer is usually to hire more researchers and/or to have other people in the team (often designers and product managers) do the research. The latter is often known as &#8216;democratising research&#8217;.</p>
<p>Almost certainly this is the time that an organisation starts looking to hire designers and product managers with a ‘background in research’ and to establish some research training programs, interview and report templates and common ways of working.</p>
<p>This all sounds eminently sensible, but there are some fairly structural issues in how we work that can undermine our best intentions. At best, it can render our research wasteful and inefficient, and at worst it can introduce significant risks in the decision making that our teams make.</p>
<p>Each of these is a systemic issue, and anyone doing research as part of a cross functional product team is likely to be affected.</p>
<p>So, let’s assume that people doing research have had adequate training on basic generative and evaluative research methods &#8211; here are five common dysfunctions that we will need to contend with.</p>
<ol>
<li><span class="s1">Teams are incentivised to move quickly and ship, care less about reliable and valid research</span></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/"><span class="s1">Researching within our silos leads to false positives</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-3-research-as-a-weapon/"><span class="s1">Research as a weapon (validate or die)</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-4-quantitative-fallacies/"><span class="s1">Quantitative fallacies</span></a></li>
<li><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-5-stunted-capability/"><span class="s1">Stunted capability</span></a></li>
</ol>
<p>Here we will start with the first, which is one that many will find familiar.</p>
<h2>Dysfunction #1.<br />
Teams are incentivised to move quickly and ship, and care less about reliable and valid research</h2>
<p>The most popular research tools are not the ones that promise the most reliable or valid research outcomes, but those that promise the fastest turnaround. One well known solution promises:</p>
<blockquote><p>Make high-confidence decisions based on real customer insight, without delaying the project. You don’t have to be a trained researcher, and there’s no need to watch hours of video.</p></blockquote>
<p>It sounds so appealing, and it is a promise that a lot of teams want to buy. Speed to ship, or velocity, is often a key performance indicator for teams. It&#8217;s not a coincidence that people usually start with &#8216;build&#8217; and rush to MVP when talking about the &#8216;build, measure, learn&#8217; cycle.</p>
<h3>Recruitment trade-offs made for speed</h3>
<p>The challenge is that doing research at pace requires us to trade off characteristics that are important to the reliability and validity of the research.</p>
<p>One of the most time consuming aspects of research is recruiting participants who represent the different attributes that are important for understanding the user needs the product seeks to meet. The validity of the research is constrained by the quality of the participant recruitment.</p>
<p>What do we mean by validity? In the simplest terms, it is the measure of how well our research measures what we <strong>intend</strong> for it to measure.</p>
<p>Most of the speedy research methods &#8211; whether that’s guerrilla research at the coffee shop or using an online tool &#8211; tend to compromise on participant recruitment. Either you just take whoever you can get from the coffee shop that morning, or you recruit from a panel of participants online and trust that they are who they say they are and that they won’t just tell you nice things so you don’t give them a low star rating and they get to keep this income source.</p>
<p>There are many kinds of shortcuts to be taken around recruiting &#8211; diversity of participants, ‘realistic-ness’ of participants, or number of participants, to name a few. Expect to see some or all of these shortcuts in operation in product teams where speed to ship is the primary goal.</p>
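<p>To make the recruitment trade-off concrete, here is a rough simulation (my own sketch with invented numbers, not figures from any real study) of how a convenience panel that skews enthusiastic can roughly double an estimated interest rate, compared with a representative sample:</p>

```python
import random

random.seed(7)

TRUE_INTEREST = 0.30   # invented: 30% of the real user base wants the feature
N = 1000               # participants per sample

# Representative recruit: each participant reflects the real population.
representative = sum(random.random() < TRUE_INTEREST for _ in range(N)) / N

# Convenience panel: assume panel members are twice as likely to respond
# positively (e.g. eager to please, or self-selected enthusiasts).
panel_rate = min(1.0, TRUE_INTEREST * 2)
panel = sum(random.random() < panel_rate for _ in range(N)) / N

print(f"representative sample estimate: {representative:.0%}")
print(f"convenience panel estimate:     {panel:.0%}")
# The panel estimate lands near 60% - a false positive for 'strong demand'.
```

<p>The specific bias factor is made up, but the shape of the problem is real: the shortcut changes the population being measured, not just the speed of measurement.</p>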
<p>Being fast and scrappy can be a great way to do some research work, but in many teams the <strong>only</strong> kind of research they are doing is whatever is fastest. This is like eating McDonalds for every meal because you&#8217;re optimising for speed&#8230; and we all know how that works out.</p>
<p>Teams are trading off research validity for speed every day. Everyone in the organisation understands the value of getting something shipped, and this is often measured and rewarded. Not so many people understand the risks associated with making speed-related trade-offs in research.</p>
<h3>What is the risk?</h3>
<p>Misleading insights from research can send a team in the wrong direction. That can lead a team to spend time creating and shipping work that does not improve their users&#8217; experience or meet their users&#8217; needs. Work that does not increase the desirability or necessity of their product, and thereby negatively impacts the team&#8217;s productivity and the profitability of their organisation.</p>
<p>Does this mean that speed to ship is bad? Should all research be of an &#8216;academic standard&#8217;?</p>
<p>No.</p>
<p>Testing to identify some of the larger usability issues can often be done with small participant numbers and less care to find &#8216;realistic&#8217; respondents. But if the work that results from your research findings is going to take more than one person more than a week to implement, it might be worth increasing the robustness of your research methodology to increase confidence that this effort is well spent.</p>
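<p>The claim that small participant numbers can surface the larger usability issues has a well-known probabilistic sketch behind it &#8211; the problem-discovery formula popularised by Jakob Nielsen. (The 31% per-participant discovery rate below is his oft-cited average, not a figure from this post; treat the whole thing as a back-of-envelope illustration.)</p>

```python
def chance_of_seeing_problem(p: float, n: int) -> float:
    """Probability that at least one of n participants encounters a
    problem that affects a proportion p of users: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# With an average discovery rate of 31% per participant,
# five users surface most such problems:
print(f"{chance_of_seeing_problem(0.31, 5):.0%} seen by 5 users")

# But a rarer problem (say p = 5%) usually slips past a small study:
print(f"{chance_of_seeing_problem(0.05, 5):.0%} chance with 5 users")
print(f"{chance_of_seeing_problem(0.05, 30):.0%} chance with 30 users")
```

<p>This is exactly why fast-and-scrappy works for the big, common issues but quietly misses rarer or segment-specific ones &#8211; the kind that warrant the more robust methodology described above.</p>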
<p>People doing research need to be clear with their teams about the level of confidence they have in the research findings (it is fine for some research to result in hunches rather than certainty, as long as that is clearly communicated). And teams should plan to ensure they are using a healthy diet of both fast and more robust research approaches.</p>
<p>Organisations need to ensure they have someone sufficiently senior asking questions (and understanding how to critique the answers) not just about the existence of data from user research, but also looking under the hood to evaluate the trade-offs being made and, as a result, the level of confidence and trust we should place in the insights and claims made.</p>
<p><a href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-2-researching-in-our-silos-leads-to-false-positives/">You can read about the second dysfunction here.</a></p>
]]></content:encoded><description>The good news is that more and more organisations are embracing research in product teams. Whether it is product managers doing customer interviews or designers doing usability tests, and everything in between &amp;#8211; it is now fairly simple to come up with a compelling argument that research is a thing we should probably be doing.&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/five-dysfunctions-of-democratised-research-part-1-speed-trumps-validity/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;Five dysfunctions of ‘democratised’ research. Part 1 &amp;#8211; Speed trumps validity&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">9</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>What walls are for</title><link>https://disambiguity.com/what-walls-are-for/</link><category>agile ux</category><category>research</category><pubDate>Fri, 20 Jul 2018 05:38:31 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1895</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p>Earlier this week I did a talk at the <a href="https://www.mindtheproduct.com/">Mind the Product</a> conference in San Francisco. I was talking about research, but now that I work at Atlassian, the examples I gave included some from the Jira team&#8217;s work.</p>
<p>I also showed a slide that was a photo of a team, gathered in a meeting around a wall covered with index cards. No one in the meeting had their laptops out.</p>
<p>This is what people seemed to want to talk about over coffee later in the day. Could it be true that I, a spokesperson (one of many) for Jira, would possibly want to see stuff on the wall? Surely it should all just be in Jira, right?</p>
<p>People told me they messaged photos of the slide back to their teams to show them &#8211; &#8216;look, the Atlassian person says it is ok to put things on the walls!&#8217;</p>
<p>omg yes. I love walls with post its and index cards stuck on them, and sketches on whiteboards. I like walls for planning, for thinking, for communicating and for analysing. And then you capture it all in a tool, like Jira.</p>
<p>This is why.</p>
<h3>Walls make it easier to iterate</h3>
<p>Digital things look &#8216;finished&#8217; too soon. When something is a work in progress on a wall, it looks unfinished, so you keep working on it. Moving things around, reshaping things, connecting things, erasing things, and making them again. Walls make it easier to iterate. Iteration, in my opinion, is massively correlated with quality.</p>
<p>This is why walls are good for sketching out design ideas and processes. This is why they are amazing for research analysis (I don&#8217;t care what anyone says, post it notes are still the best tool for research analysis for exactly this reason &#8211; no one ever does three (or more) rounds of synthesis using a digital tool).</p>
<h3>Walls make it easier to collaborate (in a single location)</h3>
<p>There is something about a group of people standing in front of a wall full of sketches, or index cards, or post it notes. It&#8217;s a different kind of collaboration than you get around a table, or in a digital tool. You&#8217;re usually standing up, so you&#8217;re paying attention, you&#8217;re focussed. People physically pick up the card that they are talking about, and something about that seems to pull focus even more. Doing a stand up at a physical wall and moving the cards across to done has always felt a lot like the physical act of crossing something off the to do list &#8211; so much more satisfying than updating a status on an issue. The messiness of a room full of post it notes when you&#8217;re doing analysis almost compels you to finish making that sweep through the data&#8230; finding the best place, for now, for every sticky note of data.</p>
<p>There is something about the physicality and the embodiment of the work that I have always felt binds teams together more, drives us to do better and more complete thinking about the work we&#8217;re doing. There is no science to this, just many years of experience. Walls just work better for me, when I&#8217;m lucky enough to work in the same location as my team. Walls do suck for remote and distributed teams, though.</p>
<h3>Walls make it easier to communicate</h3>
<p>Sometimes the walls are not for you but for other people. Sometimes walls are to send a message. They can say &#8216;look how many things people want us to do, this is insane and someone needs to prioritise this&#8217;, they can say &#8216;look how much we&#8217;ve done this sprint, yay!&#8217; or &#8216;look how much we have left to do, uh oh!&#8217;, they can say &#8216;these things are really important to our team, this is what we believe in&#8217;, or they can say &#8216;here is what we&#8217;re working on at the moment&#8217;.</p>
<p>I&#8217;ve been in, and observed, many teams who use walls to communicate the most important messages to the team in a kind of omnipresent way &#8211; this is what we believe in, or this is what we are focussing on right now, or these are our values, or here is our goal.</p>
<p>Or sometimes they are designed to communicate to bosses and stakeholders &#8211; those walls might say &#8216;we&#8217;d get the things we&#8217;ve promised done if you didn&#8217;t always sneak in all this unplanned work&#8217; (I&#8217;ve seen a few of those).</p>
<p>Some people I&#8217;ve known have had jobs that include keeping the digital tool up to date with the wall. Or the other way around. It&#8217;s not inefficiency. The wall is doing different things for the team.</p>
<p>And that&#8217;s another great thing about walls, it doesn&#8217;t need to be a zero sum game.</p>
<p>If you&#8217;re using Jira, using a wall makes perfect sense to me. I don&#8217;t know why you&#8217;d do without one.</p>
<p>related reading: Alan Cooper &#8211; <a href="https://medium.com/@MrAlanCooper/know-whiteboards-know-design-eb3a362df684">Know whiteboards, know design</a></p>
<p><img loading="lazy" decoding="async" class="aligncenter" src="https://lh3.googleusercontent.com/UDr2UEaupqZiV9tELhUzVHn6zr4a2gwMemmfbDtnPKI-4TMa8lBX8-JxumyC9jjkDsfKvbzBzMx8jQuIa3Zaob1euLb5hMvfatcV-0DRuehzz4nk8EwlvXKyHOsfciw2ebbl6zI08jvSySiwTXnHX8E5cMPR2dsi4vyJZQZpUZotEwKm0kouhnNNf3-8rp_0olHXjJ_I1AG9Qny3tq_0YGYmbGA5-KVc0rtBpYQIUyEpuFJx4KyxaDo2Cc_po0byFlcoOIn82xfEQYr6Qp3WUPYdmibcV3Xsjqk-ZR7lmE5UVMMRaaaTYN8skVoeMVmb37HAxSlBrYmCj3zs4e4WQH0jPlIfRmblUrmhFGNYGS9Hj40dvsbQ657XFOYa6yOUgCmkBPK-t3dGM_ICDgcg8UlpBcrBNUUaiRTMz7EcLSRRsi_h1KnpNGxsxPrwLizAX-PzMtVKn2rKXMkmjr-SlrPOgzd8XIXYqDviNw882GqKgosEvx3fkyWAgRgWVsho7Z0nApFpsqpwJoDEqOlnlR7OL2rxQUrRCJcVBWmngcxUe5Hc5rjMxDv1ASxTPtZtrofvbc3eDo6Mz8YvDkliDoU5hEJNZohcHtWxn8dx=w1872-h1404-no" alt="" width="1872" height="1404" /></p>
]]></content:encoded><description>Earlier this week I did a talk at the Mind the Product conference in San Francisco. I was talking about research, but now that i work at Atlassian, the examples I gave included some from the Jira team&amp;#8217;s work. I also showed a slide that was a photo of a team, gathered in a meeting&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/what-walls-are-for/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;What walls are for&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">2</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item><item><title>From insights to actions. Or, what should we do with this research?</title><link>https://disambiguity.com/from-insights-to-actions-or-what-should-we-do-with-this-research/</link><category>research</category><pubDate>Wed, 14 Feb 2018 06:09:50 GMT</pubDate><guid isPermaLink="false">https://disambiguity.com/?p=1889</guid><content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<blockquote><p>So what should we do with this research?</p></blockquote>
<p>This is a question that researchers often hear at the end of a playback session. Especially one where we&#8217;re sharing findings or insights and not detailed recommendations of what to do next.</p>
<p>Most of the time there are two questions that teams should ask themselves:</p>
<ol>
<li>Which of these problems/opportunities do we care about now? If you were going to prioritise, which are the most pressing? Which might contribute most to the team meeting goals?</li>
<li>What do <span class="qualifier">we think</span> we could do that might make things better for our users? What different things could we do that might address this opportunity?</li>
</ol>
<p><span class="hardreadability">A good researcher can help a team understand what opportunities are available to pursue</span>. They will help you to see a problem in a different way &#8211; to frame the problem from the users point of view.</p>
<p>But you shouldn&#8217;t expect the researcher to come back and &#8216;tell you what to do&#8217;.</p>
<h2 id="Frominsightstoactions.Or,whatshouldwedowiththisresearch?-Frominsightstoactions">From insights to actions</h2>
<p>Getting to actions from insights is a team sport that requires a range of inputs. The researcher&#8217;s role is to make the &#8216;user&#8217; input as rich and insightful as possible. They should then work with the team to explore and evaluate the possibilities that emerge.</p>
<h2 id="Frominsightstoactions.Or,whatshouldwedowiththisresearch?-Whatmakesaninsightactionable?">What makes an insight actionable?</h2>
<p>To make a research insight actionable it must answer two key questions:</p>
<ul>
<li>what is happening?</li>
<li>why is it happening?</li>
</ul>
<p>Research that is <strong>not</strong> actionable answers only the first of these questions. If we don&#8217;t know <strong>why</strong> something is happening, we are not well placed to contemplate what action we should take.</p>
<p><span class="hardreadability">The better the &#8216;why&#8217; explanation, the better equipped a team will be to come up with clear and confident actions in response</span>.</p>
<h2 id="Frominsightstoactions.Or,whatshouldwedowiththisresearch?-Researchalonewon'ttellyouwhattodo">Research alone won&#8217;t tell you what to do</h2>
<p><span class="hardreadability">Sometimes when people say they want the research to be actionable, what they </span><span class="adverb">really</span><span class="hardreadability"> mean is that they want the research to tell them what to do</span>. They want research to answer a third question:</p>
<ul>
<li>what should we do?</li>
</ul>
<p>Sometimes the right course of action is 100% obvious, but often that is not the case. It would be a foolish or naive researcher who thinks they have the full set of knowledge required to provide this answer.</p>
<p><span class="hardreadability">User research is </span><span class="qualifier">just</span><span class="hardreadability"> one of the pieces of information that product managers or designers need to decide what they should do</span>.</p>
<h2 id="Frominsightstoactions.Or,whatshouldwedowiththisresearch?-Lensesfordecisionmaking">Lenses for decision making</h2>
<p>To make a good decision about what to do next, the team really needs to look through at least four lenses:</p>
<ul>
<li>what is the user perspective?</li>
<li>how does this align to our product strategy?</li>
<li>what are the technical (feasibility) issues?</li>
<li>what are the financial/business implications? (cost / revenue)</li>
</ul>
<p>Or, to use a more familiar framework: is the solution desirable, feasible and viable?</p>
<p><figure style="width: 550px" class="wp-caption alignnone"><img loading="lazy" decoding="async" class="confluence-embedded-image" src="https://extranet.atlassian.com/download/attachments/3744772170/viabledesirablefeasible2.jpg?version=1&amp;modificationDate=1518584352827&amp;api=v2" alt="" width="550" height="1141" data-image-src="/download/attachments/3744772170/viabledesirablefeasible2.jpg?version=1&amp;modificationDate=1518584352827&amp;api=v2" data-unresolved-comment-count="0" data-linked-resource-id="3744772321" data-linked-resource-version="1" data-linked-resource-type="attachment" data-linked-resource-default-alias="viabledesirablefeasible2.jpg" data-base-url="https://extranet.atlassian.com" data-linked-resource-content-type="image/jpeg" data-linked-resource-container-id="3744772170" data-linked-resource-container-version="2" /><figcaption class="wp-caption-text">Image: Niti Bhan</figcaption></figure></p>
<p>Most of the time, user researchers aren&#8217;t in possession of this full set of information. They will likely have strong and informed views. But don&#8217;t be disappointed if they can&#8217;t point you straight to the perfect solution.</p>
<p><span class="hardreadability">Designers and product managers are usually much more expert in coming up with and evaluating solutions</span>.</p>
<p><span class="hardreadability">Designers </span><span class="passivevoice">are trained</span><span class="hardreadability"> to take a problem and think about how you might be able to take many different approaches to solving it</span>. <span class="hardreadability">Teams should use the designer to make sure they&#8217;re generating and evaluating lots of possible solutions</span>.</p>
<p><span class="hardreadability">Product Managers tend to be the experts in balancing all the different needs and helping the team to choose the best of the solutions on offer</span>.</p>
<p>Researchers can help represent the end user perspective throughout this process. They can play a role in helping design a way of evaluating proposed solutions from the users&#8217; point of view.</p>
<p>Other team members are also vital in this process.</p>
<p><span class="veryhardreadability">Engineers and technical representatives giving the feasibility perspective (and quite often some pretty amazing possible solutions that the designers might have missed)</span>.</p>
<p><span class="veryhardreadability">Analysts and data scientists providing a different useful data sets to contribute to evaluating solutions</span>. <span class="hardreadability">Sometimes a colleague from legal, or marketing, or other parts of the organisation can be very useful in this process too</span>.</p>
<h2 id="Frominsightstoactions.Or,whatshouldwedowiththisresearch?-Gettingfrominsightstoactionsisateamsport">Getting from insights to actions is a team sport</h2>
<p><span class="hardreadability">Its the responsibility of the researcher to make sure that the insights they bring to the team are useful</span>. They need to explain the <strong>why</strong> and not <span class="qualifier">just</span> the <strong>what</strong>. But moving from insights to actions is a team sport and needs all the players to <span class="complexword">participate</span>.</p>
]]></content:encoded><description>So what should we do with this research? This is a question that researchers often hear at the end of a playback session. Especially one where we&amp;#8217;re sharing findings or insights and not detailed recommendations of what to do next. Most of the time there are two questions that teams should ask themselves: Which of these&amp;#8230; &lt;a class="more-link" href="https://disambiguity.com/from-insights-to-actions-or-what-should-we-do-with-this-research/"&gt;Continue reading &lt;span class="screen-reader-text"&gt;From insights to actions. Or, what should we do with this research?&lt;/span&gt; &lt;span class="meta-nav" aria-hidden="true"&gt;&amp;#8594;&lt;/span&gt;&lt;/a&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">2</thr:total><author>leisa.reichelt@gmail.com (Leisa Reichelt)</author></item></channel></rss>