<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Tester&#039;s Notebook</title>
	<atom:link href="https://testersnotebook.jeremywenisch.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://testersnotebook.jeremywenisch.com</link>
	<description>Writing my way toward clearer thoughts on testing.</description>
	<lastBuildDate>Sat, 12 Aug 2017 18:10:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">85885372</site>	<item>
		<title>A Very Edgy Sequel: Testing on the Edge II</title>
		<link>https://testersnotebook.jeremywenisch.com/2017/08/13/a-very-edgy-sequel-testing-on-the-edge-ii/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2017/08/13/a-very-edgy-sequel-testing-on-the-edge-ii/#comments</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Sun, 13 Aug 2017 22:00:32 +0000</pubDate>
				<category><![CDATA[Testing]]></category>
		<category><![CDATA[Thinking]]></category>
		<guid isPermaLink="false">https://testersnotebook.jeremywenisch.com/?p=388</guid>

					<description><![CDATA[Fourteen months and two posts ago, I described several ways that I am a tester on the edge; that is, I had noticed several &#8220;tensions within myself while I test software: I tend to teeter on the edge between sets of [&#8230;]]]></description>
										<content:encoded><![CDATA[<p class="p1">Fourteen months and two posts ago, I described several ways that <a href="https://testersnotebook.jeremywenisch.com/2016/06/06/testing-on-the-edge/" target="_blank" rel="noopener">I am a tester on the edge</a>; that is, I had noticed several &#8220;tensions within myself while I test software: I tend to teeter on the edge between sets of two things – tactics, concepts, mindsets, emotions.&#8221; The life of a tester (for me, at least) seems to be a life of balances.</p>
<p class="p1">Since putting a name to the phenomenon, I have noticed even more examples, and I&#8217;ve expanded on a handful below.</p>
<hr />
<p class="p1"><strong>Overreporting vs. Underreporting</strong></p>
<p>Here&#8217;s a feeling I hate: A release goes out, a week later a bug report from a user comes in, and I recognize it immediately. I caught that bug during testing, but I successfully convinced myself that it wasn&#8217;t worth reporting, at least not yet. Maybe we were in a crunch for the release and either I didn&#8217;t think it was critical or I didn&#8217;t think it was new to this release, or I thought it would take too much time to investigate and I didn&#8217;t want to report it without pinning it down, or maybe we weren&#8217;t in a crunch but I didn&#8217;t think it was likely a user would run into it or I didn&#8217;t see a crucial risk in the bug or I didn&#8217;t think it would get fixed. I caught the bug, I didn&#8217;t report it, and it bugged a user.</p>
<p>Here&#8217;s another feeling I hate: I report a bug, and it gets closed as won&#8217;t-fix, or deferred to Someday. Maybe I overvalued how badly it would bug a user or how likely it would be to occur, or maybe I didn&#8217;t uncover or include enough evidence to make the risk claim credible, or maybe the fix would be too invasive or destabilizing, or maybe it came down to aesthetic nit-picking not worth addressing. I caught the bug, I reported it, and it wasn&#8217;t fixed. When this happens too often, credibility with developers takes a dip.</p>
<p class="p1"><span class="s1">So I find myself teetering on the edge between actions that avoid those feelings I hate; between overreporting to avoid missing important bugs and underreporting to avoid losing credibility.</span></p>
<p><strong>Analysis vs. Evidence</strong></p>
<p>Here&#8217;s another way that a tester&#8217;s credibility with developers can suffer: too often taking a guess at the root cause of a bug. This can take several forms, among them:</p>
<ol>
<li>Statement of apparent fact: &#8220;The widget is breaking when I enter a value of zero because the underlying function isn&#8217;t handling divide-by-zero properly.&#8221;</li>
<li>Accusation: &#8220;The widget is breaking when I enter date values in the past because you didn&#8217;t initialize the field correctly.&#8221;</li>
<li>Wild guess hedged with a question mark: &#8220;The widget is breaking when I enter text. I think because the field type is wrong?&#8221;</li>
</ol>
<p>I&#8217;m wary of trying to identify a bug&#8217;s root cause too often, no matter how tactfully I present it, because I am not the developer and I do not know the code like the developer does; I could too easily be wrong in my analysis, and every time I&#8217;m wrong my credibility slips just a bit further.</p>
<p>But it also seems there are valid reasons to try to suggest the cause of a bug in the first place. Maybe the evidence I&#8217;ve provided isn&#8217;t quite enough, but I have a good hunch based on experience; maybe I&#8217;ve looked at the code for the most recent fix, and I actually <em>do</em> see the problem, or at least have a good idea of what sort of issue is causing the symptom or symptoms I found. I&#8217;ve hesitantly offered my idea of the root cause of a bug before only to be pleasantly surprised by a &#8220;Thanks, that saved me a lot of time!&#8221; note from the developer.</p>
<p>So, I teeter on the edge between wanting to venture an analysis of a bug&#8217;s root cause to help the developer and wanting to stick to the evidence to avoid looking silly.</p>
<p><strong>Rejecting vs. Accepting &#8220;No user would ever do that&#8221;</strong></p>
<p>Here&#8217;s a scenario: A developer submits a fix for a complex bug and says, &#8220;I fixed it so that when a user does A, the system no longer does Z, but I didn&#8217;t prevent Y from happening when a user does B, because no user would ever do B.&#8221; If you&#8217;re a tester, that&#8217;s a smell, right? The little red critical-thinking light above your head starts spinning and flashing and you start asking questions like, &#8220;How do we know that no user would ever do that? Would a user do something <em>similar</em> to that? Could something similar still lead to unwelcome behavior? Can we find evidence of whether users do things like this? Is there a more severe version of the unwelcome behavior possible? Could a user accidentally do this? Might a mischievous user do it?&#8221;</p>
<p>But here&#8217;s another question: <em>What is this action called B</em>? Is it something like clicking outside of a form where you wouldn&#8217;t expect a user to click? Or entering a value that you wouldn&#8217;t expect a user to enter? <em>Or</em>, is it something like a billion unique users submitting a form at the same precise moment? Or a user opening the dev tools and modifying the HTML in a form?</p>
<p>And another question: <em>What is this outcome called Y</em>? Is it something catastrophic, like a server crash or irreparable data loss? Or is it something mild, like a goofy-looking form or a slight delay in loading time?</p>
<p>&#8220;No user would ever do that&#8221; is a smell that there might be more wrong than a developer realizes, but it can also be a smell that a tester might not be prioritizing their time well, perhaps spending too much of it hunting down low-frequency, low-impact issues.</p>
<p>So I stay on the edge between rejecting and accepting &#8220;No user would ever do that.&#8221;</p>
<p><strong>Multi-tasking vs. Flow</strong></p>
<p>In my first <a href="https://testersnotebook.jeremywenisch.com/2016/06/06/testing-on-the-edge/">Testing on the Edge post</a>, I wrote about getting into a flow state in the context of staying on the edge between taking notes and staying in flow:</p>
<blockquote><p>When I test uninterrupted for a while, I can get into a flow state, where I keep most new information in my brain’s working memory, interacting with the software, asking and answering questions on the fly.</p></blockquote>
<p>Note-taking deals with managing how I spend my time while working on a particular testing task &#8212; testing a feature, testing a bug fix, touring a new app, sense-making, reproducing a mystery bug. But what about managing how I spend my time overall, among many tasks? I find myself teetering on the edge between two approaches.</p>
<p>The first approach is to select a small handful of tasks to work on in my mental &#8220;now&#8221; bucket. Why do this? Say I&#8217;m working on Task A, testing a new feature. I&#8217;ve tested every test idea I can think of. But I&#8217;m not convinced I&#8217;ve thought of everything &#8212; I have that nagging feeling that I&#8217;ve forgotten or overlooked something. If I&#8217;m only working on one task at a time &#8212; finish one, then move on to the next &#8212; then I&#8217;d have two options: (1) keep pushing myself through the mental block until I&#8217;m satisfied I&#8217;m done or (2) declare &#8220;Done!&#8221; and move on. But if I have Task B and Task C sitting in my bucket as well, I can just set Task A back in the bucket, pull out Task B, and be productive. Task B is exploring a redesigned part of the software to find regression bugs. At some point, I find myself dragging through Task B, staring at the screen without really doing anything. But, hey! I think I have another idea for Task A. I&#8217;ll drop Task B back in the bucket and pull out Task A again.</p>
<p>It&#8217;s starting to sound like I&#8217;m presenting a full endorsement of this sort of task management, and not one side of a teeter, but this multi-tasking does come with a price: that &#8220;now&#8221; bucket of tasks constantly occupies mental space. This can be both draining and disruptive. Maybe things are going along great with Task B, and <em>that&#8217;s</em> when my new idea for Task A (which I didn&#8217;t declare as &#8220;Done!&#8221;) decides to pop up. Now I have to spend mental energy making a decision: do I drop Task B to tend to Task A, or do I keep my flow and risk losing that Task A idea?</p>
<p>The second approach is to limit the &#8220;now&#8221; bucket to one task at a time. Keep flow as much as possible. When things drag or I think I&#8217;m done but have the nagging feeling of not-done, then I pick up something unproductive, like a puzzle, or go for a walk &#8212; but I don&#8217;t clutter the mental space with additional tasks.</p>
<p>Which side I teeter on seems to depend on the nature of the tasks at hand, and on my mood.</p>
<p><strong>Goal vs. Deadline</strong></p>
<p>In my current role, I&#8217;m asked to test the same product most of the time. That product has different customers, though, with different service agreements and contracts, and as a result not every release that I test has the same level of urgency. Some releases have an informal goal date we&#8217;re shooting for, but we can cut the release whenever we feel ready, and some have a hard deadline date when the release will be installed, ready or not.</p>
<p>I&#8217;ve observed that I approach testing differently depending on the level of urgency. When the urgency is low, I allow myself to be more reflective, to think through risks more thoroughly, to chase suspicions and curiosities. When the urgency is high, I more actively prioritize tasks and ideas, I keep myself more focused and dawdle less, I avoid rabbit holes by taking note of potential issues to investigate later rather than immediately.</p>
<p>Neither mode of being is perfect, and each can be beneficial in different ways. When I&#8217;m goal-oriented, I tend to gain a deeper understanding of the product and more often identify patterns over time; when I&#8217;m deadline-oriented, I tend to be more efficient and more often trust my intuition about potential issues rather than fall into a trap of over-thinking.</p>
<p>And so, even when there is no external goal or deadline in place, I&#8217;ve found that I still teeter on the edge between being goal-oriented and deadline-oriented.</p>
<hr />
<p>Once again, I’d love to hear from you. Do you teeter too? In what ways are you a tester on the edge? (Or even a non-tester on the edge! I was excited when my wife said a few of the examples above resonated with the non-testing work she&#8217;s doing right now.)</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2017/08/13/a-very-edgy-sequel-testing-on-the-edge-ii/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">388</post-id>	</item>
		<item>
		<title>Anthropomorphic Intelligence</title>
		<link>https://testersnotebook.jeremywenisch.com/2016/08/17/anthropomorphic-intelligence/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2016/08/17/anthropomorphic-intelligence/#comments</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Wed, 17 Aug 2016 11:15:02 +0000</pubDate>
				<category><![CDATA[Books]]></category>
		<category><![CDATA[Real World]]></category>
		<category><![CDATA[Testing]]></category>
		<guid isPermaLink="false">https://testersnotebook.jeremywenisch.com/?p=343</guid>

					<description><![CDATA[Recently, I opened Timehop for my daily dose of wibbly-wobbly, timey-wimey memories, and the app told me that I need to sign into Facebook because hey, the Facebook connection isn&#8217;t working anymore. Below the message, there was a huge, brightly [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><img fetchpriority="high" decoding="async" class="alignright wp-image-375 size-medium" src="https://testersnotebook.jeremywenisch.com/wp-content/uploads/2016/08/TimehopFacebookconnect-169x300.png" alt="Timehop" width="169" height="300" srcset="https://testersnotebook.jeremywenisch.com/wp-content/uploads/2016/08/TimehopFacebookconnect-169x300.png 169w, https://testersnotebook.jeremywenisch.com/wp-content/uploads/2016/08/TimehopFacebookconnect.png 577w" sizes="(max-width: 169px) 100vw, 169px" />Recently, I opened <a href="https://timehop.com/about" target="_blank">Timehop</a> for my daily dose of wibbly-wobbly, timey-wimey memories, and the app told me that I need to sign into Facebook because hey, the Facebook connection isn&#8217;t working anymore. Below the message, there was a huge, brightly colored &#8220;Reconnect&#8221; button and below that, a less obvious, smaller &#8220;No, don&#8217;t fix&#8221; link. As I tapped &#8220;No, don&#8217;t fix,&#8221; I said to the app, &#8220;Knock it off, Timehop, I don&#8217;t want to sign into Facebook – I deactivated that account four months ago. Leave me alone!&#8221;</p>
<p>During the five seconds while I had this thought, I was treating the software application like a human person. I reacted to it with emotions: annoyance, anger, indignation. I felt it should know better. I felt it wasn&#8217;t communicating well or showing any signs that it understood my needs.</p>
<p>After those five seconds passed, the rational part of my brain kicked in, and I thought, &#8220;Oh, yeah. This is software. This is code written by a human person. It&#8217;s been asking me to sign into Facebook regularly for a few weeks now. I can imagine that maybe a scheduled cron job checks for conditions (Facebook connection exists, credentials don&#8217;t work) and runs an alert method. Okay, sure.&#8221;</p>
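<p>That imagined mechanism could be sketched roughly like this. This is a pure guess at hypothetical logic, not Timehop&#8217;s actual code; every name here is invented for illustration.</p>

```python
# Hypothetical sketch of the scheduled check imagined above.
# All names are invented; this is not Timehop's actual code.

class User:
    def __init__(self, has_facebook_connection, credentials_valid):
        self.has_facebook_connection = has_facebook_connection
        self.credentials_valid = credentials_valid
        self.alerts = []

def check_facebook_connection(user):
    """Run on a schedule (e.g., by a daily cron job) for each user."""
    # Alert only when a connection exists but its credentials no longer work.
    if user.has_facebook_connection and not user.credentials_valid:
        user.alerts.append("Reconnect to Facebook?")
```

<p>The point of the sketch is only that the check is a mechanical if-then loop with no memory of how the user has responded to past prompts.</p>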
<p>I suspect this experience is common for those of us close to the technology industry: we have an emotional, human-relational reaction to something a software app does, as though it were a person, but then we remember what the app really is and reason out a theory of how it was programmed to work. But I wonder about the folks who don&#8217;t think about, study, and work with software every day. It seems more likely that their initial five-second reaction remains unchallenged in their minds. Close your eyes and imagine a relative or friend outside the tech industry interacting with a software application and tell me if you hear anything like the following (you can open your eyes to keep reading):</p>
<blockquote><p>&#8220;Why&#8217;d it decide to do that?&#8221;</p>
<p>&#8220;It&#8217;s just being cranky again.&#8221;</p>
<p>&#8220;Tell it to stop acting like that. Why is it so stupid?&#8221;</p>
<p>&#8220;This thing hates me.&#8221;</p>
<p>&#8220;That&#8217;s not what I wanted to do. Why won&#8217;t you listen to me!&#8221;</p></blockquote>
<p>I&#8217;ve been reading <a href="https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow" target="_blank"><em>Thinking, Fast and Slow</em></a> by Daniel Kahneman since writing the original draft of this post, and I now see that the immediate response that personifies software is an act of System 1, while it is System 2 that challenges that reaction by considering the underlying code. In this case, if a person lacks knowledge of and experience with code, their System 2 is going to accept System 1&#8217;s initial understanding as good enough and move on.</p>
<p><strong>Testing for Anthropomorphic Interaction</strong></p>
<p>So why should I care about this as a software tester? Testers should care about this because <em>we can&#8217;t stop users from reacting to software like it is human</em>. It happens. It&#8217;s going to keep happening. Kahneman writes:</p>
<blockquote><p>“Your mind is ready and even eager to identify agents, assign them personality traits and specific intentions, and view their actions as expressing individual propensities.&#8221;</p></blockquote>
<p>If we keep this in mind, I believe it can help us spot threats to quality in the software we test. If I were testing the Timehop app, for example, I might say, &#8220;The system successfully prompts the user to fix a service connection every X number of days that the system cannot connect. Requirement satisfied. Test passed!&#8221; But if I remember that the user will be interacting with the system as though it were human, I allow myself to think, &#8220;Hey, what happens if the user taps &#8216;No, don&#8217;t fix&#8217; after five consecutive prompts? Are they going to get annoyed or angry or confused? Are they going to think the system is an inconsiderate pest?&#8221;</p>
<p>Depending on the context, I would likely report this as a threat to quality. The machine is interacting with the user as though they are another machine (&#8220;if this, then this, loop&#8221;), while the user is interacting with the machine as though it is another person (&#8220;I told you no!&#8221;). I think that we should strive to meet the user&#8217;s expectations whenever reasonable. In this case, I would report that the system doesn&#8217;t change its behavior after repeated &#8220;No, don&#8217;t fix&#8221; clicks, which violates the user&#8217;s expectations of a person-like interaction. If pressed, I might suggest that after three consecutive &#8220;No, don&#8217;t fix&#8221; clicks the application offer to change the user&#8217;s connection settings and turn off the Facebook connection – or at least disable the alerts.</p>
<p>I said that we should strive to meet the user&#8217;s expectations &#8220;whenever reasonable&#8221; – there are of course limits to how far software developers should go to make their applications act more like a person. For example, extreme attempts are likely to fall flat and disappoint users even more so – hello, Siri. For another, there can emerge a feedback loop wherein as software acts more like a person, users expect all software to act more like a person, and become even more disappointed with shortcomings. I&#8217;m concerned that misguided journalists are already feeding this with articles that provide only a shallow understanding of AI, its promise, and its shortcomings.</p>
<p>In the end, we can&#8217;t prevent users from anthropomorphizing software. But as testers we can perhaps anticipate and identify the ways it might threaten quality.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2016/08/17/anthropomorphic-intelligence/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">343</post-id>	</item>
		<item>
		<title>Testing on the Edge</title>
		<link>https://testersnotebook.jeremywenisch.com/2016/06/06/testing-on-the-edge/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2016/06/06/testing-on-the-edge/#comments</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Tue, 07 Jun 2016 02:23:53 +0000</pubDate>
				<category><![CDATA[Testing]]></category>
		<category><![CDATA[Thinking]]></category>
		<guid isPermaLink="false">https://testersnotebook.jeremywenisch.com/?p=332</guid>

					<description><![CDATA[I am a tester on the edge. For several years, I&#8217;ve noticed tensions within myself while I test software: I tend to teeter on the edge between sets of two things – tactics, concepts, mindsets, emotions. When I&#8217;m aware that [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I am a tester on the edge.</p>
<p>For several years, I&#8217;ve noticed tensions within myself while I test software: I tend to teeter on the edge between sets of two things – tactics, concepts, mindsets, emotions. When I&#8217;m aware that I&#8217;m testing on the edge in some way, I make note of it. It&#8217;s happened enough now that I&#8217;m convinced it&#8217;s a <em>thing</em>, and worth sharing.</p>
<p>During the years when I collected these examples, I struggled with what word or phrase I could use to describe them in a way that would be memorable. I thought for a time that an appropriate image was balance – maybe walking on a balance beam or a tightrope. But balance isn&#8217;t quite right; as you&#8217;ll see below, the idea isn&#8217;t to seek some perfect mix of each side. More recently I thought, hey, maybe it&#8217;s yin and yang! Borrow from the ancients, right? But yin and yang <span class="s1">represent complementary forces that form a dynamic system; a whole greater than the parts. Again, not quite there.</span></p>
<p>Most recently, I heard James Bach use the word <em>tension</em> during the Rapid Software Testing course; he was describing things like diversification in tension with cost vs. value. I immediately saw a connection to my testing on the edge concept. Nice! (There is also a tangential concept covered on the <a href="http://www.satisfice.com/rst-appendices.pdf">RST Appendices (p. 14)</a> called &#8220;Exploratory Testing Polarities.&#8221;) But something else I learned during the RST course is that if I name things myself I am more likely to remember them. So, awkward as it may seem, I&#8217;m sticking with the refrain that&#8217;s been in my head throughout the years: <em>testing on the edge</em>.</p>
<p>On to the examples.</p>
<hr />
<p><strong>Confidence vs. Self-doubt</strong></p>
<p>As a tester, I find it important to keep on the edge between confidence in my abilities and healthy self-doubt.</p>
<p>I think having self-doubt is the more obviously desirable trait for testers. We are natural-born and well-practiced questioners, and we question not just the product and the project but ourselves as well. Is this a good test? What am I trying to learn by doing this? Am I being efficient? What assumptions am I making? What are my blind spots? My biases? Skilled testing flows from healthy self-doubt.</p>
<p>But too much self-doubt can be crippling. I teeter back to confidence to get things done. I question myself to refine my decisions, but I trust myself to actually make decisions. This is a good enough test. This is an efficient use of time. I&#8217;m making this assumption because it is reasonable. I have practiced, I can do this. Skilled testing flows from confidence.</p>
<p>But too much confidence leads to rashness, conceit, blind spots&#8230; so I teeter. I stay on the edge.</p>
<p><strong>Clean vs. Dirty Test Environments</strong></p>
<p>When I test on my current project, I use the same databases for a long time, often carrying over from release to release. This has the benefit of allowing the test data to become &#8220;dirty&#8221; over time, improving the chance of revealing bugs that only occur in complex scenarios that resemble the real world. For the same reason, I usually avoid deleting test data after testing a specific scenario; by letting data from various tests accumulate over time, I serendipitously stumble into interesting bugs later on (more on serendipity in a bit). Some bugs love the dirt and grime.</p>
<p>Then again, it&#8217;s difficult to see some other bugs through opaque glass. Maybe if things weren&#8217;t so dirty, I could see more. I also try to keep a clean database, with little data, where I clean up after myself after tests. This helps me when I need to see how things work under very specific conditions; when a bug shows up in the clean environment, it&#8217;s much easier to see how it got there and find its critical conditions.</p>
<p>Of course, in practice, I don&#8217;t say, &#8220;Now it&#8217;s time to test in Dirty Database A. Okay, switching to Clean Database B for this.&#8221; I stay on the edge, teetering between dirty- and clean-environment mindsets and habits as I navigate my exploration.</p>
<p><strong>MFAT vs. OFAT</strong></p>
<p>I stay on the edge when it comes to variance of factors while testing: I teeter between varying multiple factors at a time (MFAT) in order to shake out bugs as quickly as possible and varying one factor at a time (OFAT) to make it easier to pin down the critical condition that exposed a found bug. This is a common source of tension in my exploratory testing. Varying conditions one factor at a time, noting each condition as I go, makes it much more likely that, when I encounter a bug, I can say &#8220;Aha, this, this, and this led to that bug.&#8221; But I also know that testing strictly in this manner is time-consuming and can be very boring, even soul-draining. By shaking things up with an MFAT strategy, I increase my chances of brushing against a bug in less time, while keeping my senses alert and interested.</p>
<p><strong>Regression Checking vs. Testing</strong></p>
<p>When it&#8217;s time for me to test for regression bugs in a new release of the software I test, I have a couple of objectives, constrained by limited resources (namely, my time, as I am my team&#8217;s only tester): (1) to cover as much of the same ground as possible, from release to release, to have some confidence that things that were once working are still working; and (2) to test the once-working things with fresh eyes, looking for new issues by investigating in new ways. This means I end up testing on the edge: teetering between regression checking and regression testing.</p>
<p>Regression checking emerges from the part of me that wants to follow a checklist, to feel like I&#8217;m not forgetting anything, to do things in the same way as the past, running checks that I&#8217;ve developed through years of testing. Regression testing emerges from the part of me that wants to explore the &#8220;same old&#8221; software with new eyes, purposely avoiding the temptation to run the same checks. I don&#8217;t want to forget anything important, but sometimes it&#8217;s worth the risk of forgetting one minor thing if it means getting out of a check-focused rut and letting my mind wander familiar territory in unfamiliar ways. Hence, I tend to the edge.</p>
<p><strong>Meta-thinking vs. Subconscious thinking</strong></p>
<p>I need to be aware of my own thinking: how I am thinking, what my biases are, my emotions, my thought processes; but I can&#8217;t be constantly aware. Too much meta-level thinking can be a hindrance to good testing – I believe I do my best when I also lean on my subconsciousness, that stuff that we usually call instinct. And the more I try to be hyperaware of how that subconsciousness is working, the more (I fear) it will cease to work at all.</p>
<p>For the most part, I believe that self-awareness of how my thinking works should be relegated to when I am not actually testing: to quiet times of reflection. That way, if I decide something needs correction of some kind (a bias becoming too blinding, maybe), I can hopefully let that happen to my subconscious mind, and not try to be aware of it consciously the next time I am testing.</p>
<p>There&#8217;s no easy answer to this, but there is a lot of literature on the subject. I read up, and I keep on the edge.</p>
<p><strong>Notes vs. Flow</strong></p>
<p>I keep lightweight testing notes that serve a few purposes: keep track of what I&#8217;ve tested; new test ideas (expanding the checklist); possible bugs; troubleshooting notes while following up on a bug. This last purpose can be very potent, helping me keep track of conditions I&#8217;ve tried and the results I observed as I uncover a better view of the bug.</p>
<p>But here&#8217;s something else that&#8217;s potent while testing: flow. When I test uninterrupted for a while, I can get into a flow state, where I keep most new information in my brain&#8217;s working memory, interacting with the software, asking and answering questions on the fly. Stopping to take a note as each new piece of information pops up breaks this flow. Taking a note because I think it&#8217;ll help me keep track of something can actually disrupt my brain&#8217;s natural ability to keep track of things on its own. Have you ever stopped to take a note while testing, returned to the software, and thought, &#8220;Now, wait&#8230; what was I doing?&#8221;</p>
<p>So I teeter. I stay on the edge. My default preference is to keep in a flow as much as possible. What pushes me to take notes most often is an abundance of potential bugs that aren&#8217;t quite relevant to what I&#8217;m trying to learn about at the moment: I can hold things relevant to the current thread of testing in working memory, but I will forget unexpected behavior that bubbles up on the periphery.</p>
<p><strong>Chaos vs. Order</strong></p>
<p>Effective testing is enhanced by the chaos of randomness and chance. I&#8217;m just skimming the surface here, but a great deep dive on the concept of serendipity in testing is Rikard Edgren&#8217;s webinar, &#8220;<a href="https://testhuddle.com/resource/good-testers-are-often-lucky-using-serendipity-in-software-testing/">Testers Are Often Lucky</a>.&#8221;</p>
<p>This idea of chaos also ties into what I said about &#8220;dirty&#8221; test environments above. While I test, I often indulge my brain&#8217;s subconscious impulses: What if I click there? What if I fill these fields with values like this? What if I navigate these screens in this order instead of that? When I ride these impulses without concern for what I&#8217;m actually doing – when I don&#8217;t let chaos be hemmed in by order – I find wholly unexpected bugs in the software.</p>
<p>Yet completely unbounded chaos can be unproductive. Order has its own value in testing. What happens when I find a bug: how do I figure out how to reproduce it after my chaotic flourish? Or what happens after an hour of chaos: how do I know what I&#8217;ve accomplished and keep a sense of coverage?</p>
<p>I like to think of this one as keeping on the edge between Batman and the Joker. I teeter between the order that helps keep track of what&#8217;s been tested, including conditions and variables that may help with reproducibility, and the chaos that stirs up productive serendipity.</p>
<hr />
<p>I&#8217;d love to hear from you. Does this concept resonate with you? I&#8217;ve been the only tester on my team (and company) for the last three years, so I am particularly curious how much of this has to do with wearing all of the tester hats. Do you teeter too? In what ways are you a tester on the edge?</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2017/08/13/a-very-edgy-sequel-testing-on-the-edge-ii/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">332</post-id>	</item>
		<item>
		<title>Jim Halpert on Satisficing and Assumptions</title>
		<link>https://testersnotebook.jeremywenisch.com/2015/07/29/jim-halpert-on-satisficing-and-assumptions/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2015/07/29/jim-halpert-on-satisficing-and-assumptions/#respond</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Thu, 30 Jul 2015 00:05:14 +0000</pubDate>
				<category><![CDATA[Testing]]></category>
		<guid isPermaLink="false">https://testersnotebook.jeremywenisch.com/?p=291</guid>

					<description><![CDATA[Lately I&#8217;ve been saying something so much that it&#8217;s become a bit of a mantra: &#8220;Just killing Germans any way I can.&#8221; I&#8217;m being the furthest thing from literal, of course. Please don&#8217;t report me to the authorities. I have [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Lately I&#8217;ve been saying something so much that it&#8217;s become a bit of a <a href="https://www.youtube.com/watch?v=oPh59jOoiEs">mantra</a>: &#8220;Just killing Germans any way I can.&#8221;</p>
<p>I&#8217;m being the furthest thing from literal, of course. Please don&#8217;t report me to the authorities. I have German ancestors, and nary a killing bone in my body. But if you are a fan of the American version of <em>The Office</em>, you may recognize this as a line said by Jim Halpert in Season 3, when the Stamford office is playing the video game <em>Call of Duty</em>. Jim, who is brand new to the game, playing with an experienced group, and obviously overwhelmed, reports this line to a coworker when his in-game actions are questioned (more on all of this in a bit).</p>
<p>In the way that other people might have the radio or television just &#8220;on,&#8221; my wife and I very often have <em>The Office</em> just &#8220;on&#8221; via Netflix. There are other shows in our regular rotation, but <em>The Office</em> <a href="http://www.ilisteniwatch.com/mashed-potatoes-and-the-office-tv-comfort-food/">is my wife&#8217;s TV comfort food</a>, and I have zero complaints. And so, in a similar way that TV or radio ad jingles get stuck in people&#8217;s heads, I frequently get lines from shows like <em>The Office</em> stuck in my head when I&#8217;ve heard them enough times. It&#8217;s also not unusual for such a line to get funnier to me the more I use it, in less and less relevant contexts. For example, as I finish the last few bites of a big dinner: &#8220;Just killing Germans any way I can.&#8221; As I take a different route home from the grocery store: &#8220;Just killing Germans any way I can.&#8221; Usually this habit fades away over time and my increasingly mindless repetition delivers the humor in the line a slow, awkward death. But something more interesting happened in this case: as the humor faded, an unexpected usefulness arose, and the line refused to go away. As I strategize how to haul off an endless pile of wood from a felled oak: &#8220;Just killing Germans any way I can.&#8221; As I test a bug fix scenario without the data I may have preferred: &#8220;Just killing Germans any way I can.&#8221;</p>
<p>I realized eventually that &#8220;Just killing Germans any way I can&#8221; had come to mean &#8220;satisfice&#8221; for me (thank you, <a href="http://www.satisfice.com/">James Bach</a>, for first making that term known to me): seeking a solution that gets the job done satisfactorily &#8212; not necessarily optimal, not perfect, but <em>good enough</em> for the current context. As testers, we can&#8217;t always get what we want. We are in service of a stakeholder who seeks information about the product under test: maybe developers, maybe a product manager, maybe customer service, maybe customers themselves. The point is that we don&#8217;t always get to dictate the terms of our service: maybe we don&#8217;t get the perfect QA environment; maybe we don&#8217;t get data quite like production; maybe we don&#8217;t get requirement or specification documents; maybe we don&#8217;t get the time that we&#8217;d like. There are project factors at work that limit every team member&#8217;s resources in some way, not just the testers&#8217;; that&#8217;s life in software development. But we testers still have an obligation to provide timely, useful information to the stakeholder who is making decisions about the product. So we make the most of what we have, which is to say, we just kill Germans any way we can.</p>
<hr />
<p>Beyond being reminded of the power of satisficing, something else happened the more I repeated that line, as I started to play out the rest of that scene from <em>The Office </em>in my mind each time:</p>
<p><iframe title="The Office - Call of duty 02" width="1170" height="878" src="https://www.youtube.com/embed/uAVvSnE9J9o?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<blockquote><p>Andy: &#8220;Why did you do that??&#8221;</p>
<p>Jim: &#8220;Just killing Germans any way I can.&#8221;</p>
<p>Andy: &#8220;We&#8217;re on the German team. Shoot the British.&#8221;</p>
<p>Jim: &#8220;Wait, are we playing teams?&#8221;</p></blockquote>
<p>Jim&#8217;s bewilderment over the basic terms of the game is hilarious, but it&#8217;s also instructive. We&#8217;ve all been there: triumphantly marching along, testing our merry way, just killing Germans any way we can, working under a particular set of assumptions that perhaps we&#8217;re not even aware we&#8217;ve made, and then suddenly &#8212; perhaps we kill a teammate in <em>Call of Duty</em> without realizing they&#8217;re our teammate, perhaps we submit an invalid bug report, perhaps we share some results with a developer that make no sense &#8212; suddenly our unconscious assumptions are tossed in our face, and we&#8217;re forced to finally ask the questions we maybe should have asked before embarking on our adventure. &#8220;Wait, are we playing teams?&#8221; &#8220;Wait, are we not supporting IE8 any longer?&#8221; &#8220;Wait, are the customers who will be using this feature primarily Spanish-speaking? <em>Only</em> Spanish-speaking?&#8221; &#8220;Wait, is this supposed to work with Android? We&#8217;ve <em>never</em> supported iOS?&#8221;</p>
<p>As testers, we&#8217;re well-trained to question the assumptions made by others; it&#8217;s why we love being involved early in projects, so we can ask our many questions and question our team members&#8217; many assumptions, before they become too solidified. But do we remember to question our own assumptions? I think that is much harder to do, especially when we are new to a context, as in Jim Halpert&#8217;s n00b <em>Call of Duty</em> experience. &#8220;Just killing Germans any way I can&#8221; is my reminder to do just that. (Don&#8217;t worry, I&#8217;ve never uttered this mantra aloud to anyone but my wife.)</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2015/07/29/jim-halpert-on-satisficing-and-assumptions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">291</post-id>	</item>
		<item>
		<title>Insights from The Black Swan, Part 3 &#8211; The Ludic Fallacy</title>
		<link>https://testersnotebook.jeremywenisch.com/2015/03/05/insights-from-the-black-swan-part-3-the-ludic-fallacy/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2015/03/05/insights-from-the-black-swan-part-3-the-ludic-fallacy/#respond</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Thu, 05 Mar 2015 12:05:22 +0000</pubDate>
				<category><![CDATA[Books]]></category>
		<category><![CDATA[Testing]]></category>
		<category><![CDATA[Thinking]]></category>
		<category><![CDATA[nassim nicholas taleb]]></category>
		<category><![CDATA[software testing]]></category>
		<category><![CDATA[the black swan]]></category>
		<guid isPermaLink="false">http://testersnotebook.wordpress.com/?p=223</guid>

					<description><![CDATA[Long ago, while reading The Black Swan by Nassim Nicholas Taleb, I began a series of blog posts (here and here) in which I promised to continue &#8220;reflecting here as I encounter insights that excite me as a tester.&#8221; Did you think that [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Long ago, while reading <em><a href="http://www.amazon.com/The-Black-Swan-Improbable-Robustness/dp/081297381X/ref=sr_1_2?ie=UTF8&amp;qid=1364572187&amp;sr=8-2&amp;keywords=black+swan">The Black Swan</a> </em>by Nassim Nicholas Taleb, I began a series of blog posts (<a title="Insights from The Black Swan, Part 1" href="https://testersnotebook.jeremywenisch.com/2013/04/01/insights-from-the-black-swan-part-1/" target="_blank">here</a> and <a title="Insights from The Black Swan, Part 2" href="https://testersnotebook.jeremywenisch.com/2013/04/05/insights-from-the-black-swan-part-2/" target="_blank">here</a>) in which I promised to continue &#8220;reflecting here as I encounter insights that excite me as a tester.&#8221; Did you think that because I&#8217;ve published three non-related posts since then and nearly two years have passed (let&#8217;s try not to think about that rate of posting), I was done reflecting on <em>The Black Swan</em>? Me too. It turns out that I had started a draft of a third post nearly two years ago, but never returned. I&#8217;m on a mission to make good on my many accumulated drafts and notes and thoughts and get into a solid writing habit, so here goes with that third post.</p>
<p><a href="https://testersnotebook.jeremywenisch.com/wp-content/uploads/2015/03/blackswan.png"><img decoding="async" class="alignnone size-full wp-image-263" src="https://testersnotebook.jeremywenisch.com/wp-content/uploads/2015/03/blackswan.png" alt="blackswan" width="595" height="451" srcset="https://testersnotebook.jeremywenisch.com/wp-content/uploads/2015/03/blackswan.png 595w, https://testersnotebook.jeremywenisch.com/wp-content/uploads/2015/03/blackswan-300x227.png 300w" sizes="(max-width: 595px) 100vw, 595px" /></a></p>
<p>In Chapter Nine of <em>The Black Swan</em>, Taleb presents two characters to help illustrate what he calls the <a title="Ludic fallacy - Wikipedia" href="http://en.wikipedia.org/wiki/Ludic_fallacy" target="_blank">ludic fallacy</a>: Fat Tony, a street-smart, slick-talking student of human behavior who &#8220;has this remarkable habit of trying to make a buck effortlessly&#8221;; and Dr. John, an efficient, reasoned, &#8220;former engineer currently working as an actuary for an insurance company.&#8221;</p>
<p>The full picture that Taleb paints of Dr. John is in many ways a spot-on caricature of me. Dr. John is &#8220;thin, wiry, and wears glasses,&#8221; all of which fits my bill. Like Dr. John, I&#8217;m meticulous and habitual. I too know a bit about computers and statistics, although while Dr. John is an engineer-turned-actuary, I am an actuary-turned-software-tester. It all hit a little too close to home, as you&#8217;ll see.</p>
<p>After proper introductions, Taleb proposes a thought exercise in which he poses a question to both Fat Tony and Dr. John:</p>
<blockquote><p>Assume that a coin is fair, i.e., has an equal probability of coming up heads or tails when flipped. I flip it ninety-nine times and get heads each time. What are the odds of my getting tails on my next throw?</p></blockquote>
<p>My eyes light up and my heart races at this. I&#8217;m flashing back to grade school: &#8220;I know! I know! Let me answer!&#8221; My Talebian doppelgänger, Dr. John, expresses my immediate thought:</p>
<blockquote><p>One half, of course, since you are assuming 50 percent odds for each and independence between draws.</p></blockquote>
<p>Of course, of course! I&#8217;m excited because this understanding was a bit of a revelation in my college probability class. Yes, 99 straight heads is eye-popping, but we studious mathematicians must look past our emotions and see that the previous flips have no bearing on the next flip. The odds of 100 straight heads are one number (a very low one), but the odds of a head on the 100th flip are one in two, just like the 99th flip and just like the 1st flip.</p>
<p>Taleb turns the question to Fat Tony, who also says &#8220;of course,&#8221; but gives a different answer: 1%. His reasoning (with Taleb&#8217;s translation)?</p>
<blockquote><p>You are either full of crap or a pure sucker to buy that &#8220;50 pehcent&#8221; business. The coin gotta be loaded. It can&#8217;t be a fair game. (Translation: It is far more likely that your assumptions about the fairness are wrong than the coin delivering ninety-nine heads in ninety-nine throws.)</p></blockquote>
<p>&#8220;Of course,&#8221; indeed. Dr. John and I have fallen for &#8212; and Fat Tony has seen through &#8212; Taleb&#8217;s ludic fallacy: &#8220;the attributes of the uncertainty we face in real life have little connection to the sterilized ones we encounter in exams and games.&#8221;</p>
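<p>Fat Tony&#8217;s verdict can be sketched with a quick Bayesian calculation. This is my own illustration, not Taleb&#8217;s, and the one-in-a-million prior for a rigged coin is an arbitrary assumption:</p>

```python
# Bayesian sketch of Fat Tony's reasoning (illustrative numbers only).
# Prior assumption: even a one-in-a-million chance the coin always lands heads.
p_rigged = 1e-6
p_fair = 1 - p_rigged

# Likelihood of observing 99 straight heads under each hypothesis.
like_fair = 0.5 ** 99    # about 1.6e-30
like_rigged = 1.0        # a rigged coin delivers heads every time

# Bayes' rule: how plausible is "the coin is fair" after 99 heads?
posterior_fair = (like_fair * p_fair) / (
    like_fair * p_fair + like_rigged * p_rigged
)

print(posterior_fair)  # effectively zero
```

<p>Even a sliver of prior doubt about the coin swamps the fairness assumption after 99 heads; Fat Tony&#8217;s &#8220;1 percent&#8221; is the gut-level version of this arithmetic.</p>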
<p><img loading="lazy" decoding="async" class="alignnone" src="http://upload.wikimedia.org/wikipedia/en/4/4c/No_gambling.PNG" alt="" width="1284" height="969" /></p>
<p>This is an important lesson for anybody, and especially for anybody in several risk-intensive fields, but I take it to heart as a software tester. If I stick to my background in strict mathematical thinking, I put myself in a box, limiting my view of possible reality. Anybody can perform mathematical calculations &#8212; more significantly, any <em>machine</em> can &#8212; but it takes a learned mindset to observe and question assumptions. Assumptions like the fairness of a coin, the stability of a codebase, the behavior of a user class. It also takes a human to assign <em>meaning</em> to observed outcomes. Like the meaning of a coin that hasn&#8217;t turned up tails in 99 tries (even if based on a probabilistic model of coin flipping it&#8217;s just one of many random outcomes), or the meaning of a web form taking three times longer than average to load during the 12 o&#8217;clock hour four days in a row (even if based on a probabilistic model of network behavior it&#8217;s just one of many random outcomes).</p>
<p>As a tester, I&#8217;ve had to learn (and am still learning) to be more like Fat Tony, the human observer, and less like Dr. John, my actuarial spirit father.</p>
<p>Because here&#8217;s the thing: humans write the software. We can&#8217;t say &#8220;these are the odds of a bug happening here, because there are X variables and Y ways to interact and Z paths.&#8221; Humans write the software&#8217;s code, each with their own human tendencies for specific types of bugs and their own human understanding of how the software should work.</p>
<p>And here&#8217;s the other thing: humans use the software. If a machine were to use the software, instead of a human, one that mechanically executed every single possible combination of variables and buttons and paths and configurations and network connections and on and on &#8212; sure, there&#8217;s probably a nice probabilistic model for how common certain types of bugs will be. But humans use the software in specific, meaningful ways. It&#8217;s rigged, like the coin. It&#8217;s the tester&#8217;s job to (1) observe the 99 heads, (2) understand the meaning of the 99 heads, and (3) use that observation and meaning to uproot any unsound assumptions.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2015/03/05/insights-from-the-black-swan-part-3-the-ludic-fallacy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">223</post-id>	</item>
		<item>
		<title>Bugs Find A Way: A Tester’s Appreciation of Jurassic Park</title>
		<link>https://testersnotebook.jeremywenisch.com/2015/02/26/bugs-find-a-way-a-testers-appreciation-of-jurassic-park/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2015/02/26/bugs-find-a-way-a-testers-appreciation-of-jurassic-park/#comments</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Thu, 26 Feb 2015 14:40:23 +0000</pubDate>
				<category><![CDATA[Books]]></category>
		<category><![CDATA[Testing]]></category>
		<category><![CDATA[jurassic park]]></category>
		<category><![CDATA[software testing]]></category>
		<guid isPermaLink="false">https://testersnotebook.jeremywenisch.com/?p=245</guid>

					<description><![CDATA[Like most human adults, I have many selves in me. Two of these selves love Jurassic Park. One, the ten-year-old self who loves dinosaurs, you can read about in my essay, “Hold On To Your Bookmarks: A Nerd’s Love For [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Like most human adults, I have many selves in me. Two of these selves love <i>Jurassic Park</i>. One, the ten-year-old self who loves dinosaurs, you can read about in my essay, <a href="http://www.ilisteniwatch.com/hold-on-to-your-bookmarks-a-nerds-love-for-jurassic-park/" target="_blank">“Hold On To Your Bookmarks: A Nerd’s Love For Jurassic Park”</a>. The other, the professional software tester self who loves a good tale of technological disaster, you’re going to read about right here, right now.</p>
<p>As I explain in greater detail in my <a title="Hold On To Your Bookmarks: A Nerd's Love for Jurassic Park" href="http://www.ilisteniwatch.com/hold-on-to-your-bookmarks-a-nerds-love-for-jurassic-park/" target="_blank"><i>i listen. i watch.</i></a> companion piece, I first read the novel <i>Jurassic Park </i>in 1993 right before seeing the movie in the theater. It was interesting to re-read the novel today, because while I’d grown in recent years to appreciate how well the movie expresses software project concepts, I found that the book is actually jam-packed with fantastic, relevant quotes from the characters on the subject. I’ll share some here, but I encourage you to read the book if you enjoy the movie for the same tester-nerd reasons I do.</p>
<p>The biggest, most basic lesson from <i>Jurassic Park</i> for software testers (and society, I suppose) is that complex systems will fail. Not just that they <b>might</b> fail, which implies that we can comfortably weigh the possibility of failure against the size of the profit. That they <b>will</b> fail, and the bigger and more complex the system, the more unpredictable in nature the failure, and the bigger its consequences.</p>
<p>Another important lesson dovetails with that one: don&#8217;t assume that the failure itself will be huge, and therefore look only for big failures to avoid. In <i>Jurassic Park</i>, the failure was as simple as management mistreating a software programmer-consultant (Dennis Nedry), who coded a backdoor to the security system for himself. But the <b>consequences</b> of that small-seeming failure were unpredictable and snowballed spectacularly. Who would have known the T-Rex would test the fence at that time, which was also the time when the other consultants and the owners&#8217; grandchildren were touring the park? (Who, I mean, besides Michael Crichton.)</p>
<figure style="width: 1280px" class="wp-caption alignnone"><img loading="lazy" decoding="async" src="http://1.bp.blogspot.com/-h7TF92nJgBY/UADpNNDAmuI/AAAAAAAAA70/yXwdfDWsH0s/s1600/Jurassic+Park+(1993)4.png" alt="" width="1280" height="688" /><figcaption class="wp-caption-text">A butterfly flaps its wings&#8230;</figcaption></figure>
<p>&nbsp;</p>
<p>The problem for the park wasn’t that there was a failure in the system, it was that the creators had convinced themselves that there couldn’t even be a failure. John Hammond, the eccentric money man who conceived the vision behind the park and pushed it to the very end, says very early in the novel: “Everything on that island is state-of-the-art. You’ll see for yourself, Donald. It’s perfectly wonderful. That’s why this… <b>concern</b>&#8230; is so misplaced. There’s absolutely no problem with the island.” Once you’ve convinced yourself that your system has no problem, you become blind to potential problems; once you’ve built up a scaffolding of assumptions to help create your system, it’s very easy for one false assumption to bring everything crashing down.</p>
<p>So the masterminds who built Jurassic Park made a lot of assumptions and had, as the novel notes, the “deepest perception that the park was fundamentally sound.” And if only they had testers, they would have discovered the problems lurking in the brush ahead. Right?</p>
<figure style="width: 500px" class="wp-caption alignnone"><img loading="lazy" decoding="async" src="http://www.blastr.com/sites/blastr/files/styles/media_gallery_image/public/images/CompJurassicPark.jpg?itok=vRlYPzCI" alt="" width="500" height="281" /><figcaption class="wp-caption-text">Ah ah ah!</figcaption></figure>
<p>&nbsp;</p>
<p>Not exactly, because they <b>did</b> have testers. In fact, the testers were the stars of the movie: Ian Malcolm, Alan Grant, and Ellie Sattler (played by Jeff Goldblum, Sam Neill, and Laura Dern, respectively). These three experts in their fields (mathematics, paleontology, and botany, respectively) were brought in as consultants to act as testers for the park.</p>
<p>Before I get too deep into <i>Jurassic Park </i>lore, I should establish my terms. Or term, at least. What do testers do? They have many different roles and specialties, depending on the project, but in very basic terms, testers help creators find and resolve problems by asking questions and discovering evidence. And, of course, it isn’t always exclusively “testers” who do testing on a project &#8212; the point is to have somebody other than the person responsible for creating the end product examine the product critically, ask questions, and run experiments in order to reveal potential problems.</p>
<figure style="width: 637px" class="wp-caption alignnone"><img loading="lazy" decoding="async" src="http://2.bp.blogspot.com/-Xjj7tCp0w70/TyZJD_DajzI/AAAAAAAAB88/V9vV9r3cG7g/s1600/Jurassic%2BPark-%2BLaura%2BDern%2B%2526%2BJeff%2BGoldblum%2B%2526%2BSam%2BNeill.jpg" alt="" width="637" height="329" /><figcaption class="wp-caption-text">Testers exploring.</figcaption></figure>
<p>&nbsp;</p>
<p>So, yes, Jurassic Park&#8217;s testers were Malcolm, Grant, and Sattler. But there were two main problems with these testers. First, they weren&#8217;t brought in until the project was essentially done. The scary scaffolding of assumptions had already been erected by the time they showed up to question it; at that point, the creators weren&#8217;t very willing to deconstruct any part of their creation and start over. As Malcolm points out in one of the best lines in the movie (there is a similar one in the novel): &#8220;Your scientists were so preoccupied with whether they could, they didn&#8217;t stop to ask whether they should.&#8221;</p>
<p>Which brings up the second problem: These testers weren&#8217;t brought in to test, really; they were brought in to rubber-stamp the project. Officially, they were requested by the park&#8217;s investors to make sure the park was safe before opening to the public, but the response of the lead creators on the project &#8212; people like Hammond, Ray Arnold (the chief engineer), and Henry Wu (the chief geneticist) &#8212; made it clear: we don&#8217;t want real questions, we just want your approval. And maybe (read: definitely) fawn a bit over how amazing our creation is.</p>
<p>For example, when questioned in the novel about a fundamental problem like an animal escaping, “Wu found it offensive to think that anyone would believe him capable of contributing to a system where such a thing could happen.” When Grant does his job as a tester by presenting evidence of a problem to Gerald Harding (the chief veterinarian), even that is met only with denial:</p>
<blockquote><p>Harding: “These dinosaurs can’t breed.”</p>
<p>Grant: “All I know is that this is a dinosaur egg.”</p></blockquote>
<figure style="width: 400px" class="wp-caption alignnone"><img loading="lazy" decoding="async" src="http://derekwinnert.com/wp-content/uploads/2013/08/Jurassic-Park1.jpg" alt="" width="400" height="291" /><figcaption class="wp-caption-text">Classic bug report.</figcaption></figure>
<p>&nbsp;</p>
<p>There are enough software-testing-related tidbits between the movie and the novel to write an entire series of blog posts (if only I were so ambitious) &#8212; on the risks of relying too heavily on automation; on project management; on chaos theory, general systems thinking, and the black swan effect; on expectations and probability; and even more on testing assumptions. <i>Jurassic Park</i> is a gold mine for thinking about and discussing these topics. Enough so to make me love the movie even if it weren&#8217;t also an exciting story about dinosaurs brought to life and terrorizing a small group of people trapped on an island? Thankfully, I don&#8217;t have to answer <i>that </i>question.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2015/02/26/bugs-find-a-way-a-testers-appreciation-of-jurassic-park/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">245</post-id>	</item>
		<item>
		<title>Testing and Editing: An Analogy Analysis</title>
		<link>https://testersnotebook.jeremywenisch.com/2014/04/14/testing-and-editing-an-analogy-analysis/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2014/04/14/testing-and-editing-an-analogy-analysis/#comments</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Mon, 14 Apr 2014 14:00:42 +0000</pubDate>
				<category><![CDATA[Testing]]></category>
		<category><![CDATA[Writing]]></category>
		<guid isPermaLink="false">http://testersnotebook.wordpress.com/?p=241</guid>

					<description><![CDATA[There&#8217;s a well-known saying: &#8220;Those who can, do. Those who can&#8217;t, teach.&#8221; (George Bernard Shaw) I tend to live by another saying, one that I just made up now: &#8220;Those who can, create. Those who can&#8217;t, analyze.&#8221; I&#8217;ve always been drawn [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>There&#8217;s a well-known saying: &#8220;Those who can, do. Those who can&#8217;t, teach.&#8221; (George Bernard Shaw)</p>
<p>I tend to live by another saying, one that I just made up now: &#8220;Those who can, create. Those who can&#8217;t, analyze.&#8221;</p>
<p>I&#8217;ve always been drawn to writing (never the other way around), and I think I&#8217;m pretty good at it. I&#8217;ve also always been drawn to programming, and I&#8217;m not terrible at it. But where I&#8217;ve always seemed to excel is in analysis of other people&#8217;s creations: editing works of writing and testing software. Perhaps it&#8217;s my detail-obsessiveness and general anal-retentiveness; perhaps it&#8217;s my aversion to decision-making and attraction to questioning. I&#8217;m not sure. What I do know is that the sibling activities of editing and proofreading can be a useful analogy for testing and checking.</p>
<p>The standard definition of editing makes it out to be very similar to (and inclusive of) proofreading, but in my life it has been more helpful to define them as distinct activities. <em>Proofreading</em> is a low-level task &#8212; a hunt for misspellings, grammatical mistakes, punctuation problems, usage issues, and so on. This is the last thing you do to a piece of writing before it is sent off to its final audience. Before proofreading comes one or several rounds of what I personally define as editing. <em>Editing</em> is concerned primarily with high-level issues: structure, point of view, theme, audience, and so on. I usually think of proofreading as a &#8220;corrective&#8221; activity, wherein I make changes directly; but editing is more of a discussion with the author, during which I might make suggestions (&#8220;this paragraph might work better at the beginning&#8221;) and ask questions (&#8220;was it your intent to convey this message here?&#8221;). The goal in proofreading is to fix mistakes; the goal in editing is to help the author work their way to a better version of the piece.</p>
<p>Perhaps you can already see how being familiar with this distinction might help me organize my thoughts about testing and checking. Here are the definitions of testing and checking suggested by <a href="http://www.satisfice.com/blog/archives/856" target="_blank">James Bach and Michael Bolton</a>:</p>
<blockquote><p><strong>Testing</strong> is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.</p>
<p><strong>Checking</strong> is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.</p></blockquote>
<p>The end result of testing and editing is very similar, and may include a report of issues of concern to stakeholders, as well as questions and observations that may lead to the author or developer re-working part of the artifact in a significant way. Likewise, the basis for both checking and proofreading is algorithmic decision rules: Does clicking this button result in that outcome? Is this clause punctuated with that mark?</p>
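<p>Those algorithmic decision rules can be made concrete. Here is a minimal sketch of a &#8220;check&#8221; in the Bach/Bolton sense alongside a proofreading rule of the same shape; the function names and the expected page title are hypothetical, for illustration only:</p>

```python
# A "check": an algorithmic decision rule applied to a specific observation.
def check_login_landing(page_title: str) -> bool:
    # Rule: after logging in with valid credentials, the page title
    # should read "Dashboard" (a hypothetical expectation).
    return page_title == "Dashboard"

# A proofreading rule has the same shape: observation in, verdict out.
def check_ends_with_period(sentence: str) -> bool:
    return sentence.rstrip().endswith(".")
```

<p>Neither rule can tell you whether the dashboard is the right destination or the sentence worth keeping; that evaluation is the testing (or editing) that surrounds the check.</p>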
<p>There are a lot of differences between these pairs of activities, <a href="http://www.developsense.com/blog/2011/04/flawed-analogies/" target="_blank">of course</a>, but one of the more glaring, and useful, is that editing and proofreading are typically separate activities, with proofreading intentionally coming later &#8212; there&#8217;s not much point in proofreading a piece of writing if the process of editing may yet result in significant rewriting. This is different from a software project, in which (1) the reverse is often true, in that a lack of early low level checking can result in broken software that isn&#8217;t worth testing yet, and (2) &#8220;testing&#8221; under the Bach/Bolton definition is a general term that <em>includes</em> checking.</p>
<p>The benefit of this analogy is some extra structure to my thinking whenever I try to distinguish amongst these activities as I do them. For example, when I first started testing several years ago, my experience giving feedback to other writers helped in finding an appropriate tone for bug reports. And now that I have more experience testing, advice regularly flows in both directions between the two worlds.</p>
<p>What about you? What analogies do you use to make sense of your own testing world?</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2014/04/14/testing-and-editing-an-analogy-analysis/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">241</post-id>	</item>
		<item>
		<title>State of Testing Survey</title>
		<link>https://testersnotebook.jeremywenisch.com/2013/12/03/state-of-testing-survey/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2013/12/03/state-of-testing-survey/#respond</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Tue, 03 Dec 2013 12:59:05 +0000</pubDate>
				<category><![CDATA[Testing]]></category>
		<guid isPermaLink="false">http://testersnotebook.wordpress.com/?p=231</guid>

					<description><![CDATA[You know something that has stood out to me at the two testing conferences I&#8217;ve attended, in BBST online classes, and in conversations in the Twitter testing community? (Oh, hello, blog reader. It&#8217;s been awhile, right? Good to see you, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>You know something that has stood out to me at the two testing conferences I&#8217;ve attended, in BBST online classes, and in conversations in the Twitter testing community?</p>
<p>(Oh, hello, blog reader. It&#8217;s been awhile, right? Good to see you, too.)</p>
<p>It&#8217;s how diverse the experiences and mini-worlds of testers are. For every broad idea from a tester to which I can relate, there are a dozen minor, but significant, details that seem entirely foreign to me. &#8220;Testing software&#8221; doesn&#8217;t paint a picture quite as clear or specific as &#8220;repairing automobiles.&#8221; Some testers work in giant QA departments, some work on small teams, and many work alone; every tester deals with a different palette of platforms, source code, and users; I could go on, but I don&#8217;t need to, because chances are good that you already know exactly what I&#8217;m talking about. Understanding the software tester&#8217;s world tends to be more about understanding a specific tester&#8217;s specific mini-world than it is about understanding &#8220;the software tester&#8221; in a general sense.</p>
<p>But, you know what?</p>
<p><a href="http://qablog.practitest.com/" target="_blank">QA Intelligence</a> and <a href="http://www.teatimewithtesters.com/" target="_blank">Tea Time with Testers</a> are trying to help paint a clearer picture of that general tester: Who we are, what we do, where we&#8217;re heading. The tool they&#8217;ve created is the <a href="http://qablog.practitest.com/state-of-testing/" target="_blank">State of Testing Survey</a>, which they are launching this year and will continue each year going forward.</p>
<p>I encourage you to <a href="http://qablog.practitest.com/state-of-testing/" target="_blank">check out the link</a>, <a href="http://qablog.practitest.com/state-of-testing/help-the-survey/" target="_blank">share it with other testers</a>, and participate in the survey when it&#8217;s released. This is a pretty great opportunity to learn more about the testing industry and about each other. I say we all take advantage of it.</p>
<p>(h/t <a href="http://www.mkltesthead.com/2013/12/a-survey-on-state-of-testing.html" target="_blank">TESTHEAD</a>)</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2013/12/03/state-of-testing-survey/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">231</post-id>	</item>
		<item>
		<title>Insights from The Black Swan, Part 2</title>
		<link>https://testersnotebook.jeremywenisch.com/2013/04/05/insights-from-the-black-swan-part-2/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2013/04/05/insights-from-the-black-swan-part-2/#respond</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Fri, 05 Apr 2013 11:15:04 +0000</pubDate>
				<category><![CDATA[Books]]></category>
		<category><![CDATA[Testing]]></category>
		<category><![CDATA[Thinking]]></category>
		<guid isPermaLink="false">http://testersnotebook.wordpress.com/?p=169</guid>

					<description><![CDATA[I am in the process of reading Nassim Nicholas Taleb&#8217;s The Black Swan and reflecting here as I encounter insights that excite me as a tester. If this news comes as a shock to you, please read this immediately. Now, this: &#8220;&#8230;that [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I am in the process of reading Nassim Nicholas Taleb&#8217;s <em><a href="http://www.amazon.com/The-Black-Swan-Improbable-Robustness/dp/081297381X/ref=sr_1_2?ie=UTF8&amp;qid=1364572187&amp;sr=8-2&amp;keywords=black+swan">The Black Swan</a> </em>and reflecting here as I encounter insights that excite me as a tester. If this news comes as a shock to you, please <a href="http://testersnotebook.wordpress.com/2013/04/01/insights-from-the-black-swan-part-1/">read this immediately</a>.</p>
<p>Now, this:</p>
<blockquote><p>&#8220;&#8230;that not theorizing is an act&#8211;that theorizing can correspond to the absence of willed activity, the &#8216;default&#8217; option. It takes considerable effort to see facts (and remember them) while withholding judgment and resisting explanations. And this theorizing disease is rarely under our control: it is largely anatomical, part of our biology, so fighting it requires fighting one&#8217;s own self.&#8221;</p></blockquote>
<p>First, it&#8217;s heartening to learn that my constant theorizing (about what is causing a bug, about how to reproduce a bug, about how a feature is or should be working, about how a user might respond to something) might be natural and largely out of my control.</p>
<p>Second, it&#8217;s disconcerting that my efforts to hold back judgment and explanation while collecting observations and information may be largely futile &#8212; and that I may in fact be fooling myself when I think I am succeeding.</p>
<p>Third&#8230; wait, <em>do I</em> actually try to hold back judgment and explanation while testing? Sometimes, yes &#8212; which may explain, according to Taleb, why a bout of intense testing and exploration can be so taxing. But perhaps more often, no, I think that I let my instincts run the show and theorize away. And it gets to be dangerous. When my brain wants to theorize, it&#8217;s like being trapped with a car salesman:</p>
<blockquote><p>Me: &#8220;I want to investigate some factors before I start making any decisions. For example, what&#8217;s the price difference between the LS models and&#8230;&#8221;</p>
<p>Brain: &#8220;Yeah, yeah, sounds good. But wait! I think you&#8217;ll like last year&#8217;s sedans. C&#8217;mon, let&#8217;s take a look together.&#8221;</p>
<p>Me: &#8220;Fine, but then I want to get back to this.&#8221;</p>
<p>Brain: &#8220;No problem.&#8221;</p>
<p>Me: &#8220;Yeah, you know, these sedans look pretty good. I could be persuaded, let me just check the mileage&#8230;&#8221;</p>
<p>Brain: &#8220;OH! You know what you&#8217;d LOVE. This new SUV. C&#8217;mon, let&#8217;s go.&#8221;</p>
<p>Me: &#8220;Shoot, ok, but then I want to revisit these sedans, and also go back to my original questions&#8230;&#8221;</p>
<p>Brain: &#8220;This will only take a second, I SWEAR.&#8221;</p>
<p>Me: &#8220;Oh, you know what, this SUV <em>is</em> nice. Geez, I&#8217;m losing track of&#8230;&#8221;</p>
<p>Brain: &#8220;HEY! Let&#8217;s check your credit score. Super quick.&#8221;</p>
<p>Me: &#8220;Um, ok&#8230; wait, why did I come here again?&#8221;</p></blockquote>
<p>And so it goes while I&#8217;m fact-collecting and trying to hold multiple theories in my mind, hoping that I don&#8217;t start to drop the threads, or worse, end up with a tangled ball of nonsense to show for my efforts.</p>
<p>Taleb suggests that fighting this natural tendency to theorize may not always be worth the effort. But what I&#8217;ve come to understand through this reflection is that I can at least train myself to simply <em>be aware</em> of it more. And, better yet, <em>take note of</em> the theories and possible explanations as they come to me.</p>
<p>My head is good at doing some things, but terrible at doing at least two things: storing information and understanding my own thoughts. Paper and computers are far superior at accomplishing the former and enabling the latter. I know that when I jot down my theories as I test and move on, rather than holding them in my head (or fighting them off), the results are much more useful and productive. I get to the exploration I intended, and not only do I avoid losing track of the ideas I had earlier, I can consider them clearly later. I can act on them, expand on them, test them, even destroy them.</p>
<p>So, no, we can&#8217;t avoid spinning theories and explanations for the things we see while testing. But I think something as basic as effective note-taking can get them working for us instead of against us.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2013/04/05/insights-from-the-black-swan-part-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">169</post-id>	</item>
		<item>
		<title>Insights from The Black Swan, Part 1</title>
		<link>https://testersnotebook.jeremywenisch.com/2013/04/01/insights-from-the-black-swan-part-1/</link>
					<comments>https://testersnotebook.jeremywenisch.com/2013/04/01/insights-from-the-black-swan-part-1/#comments</comments>
		
		<dc:creator><![CDATA[Jeremy Wenisch]]></dc:creator>
		<pubDate>Tue, 02 Apr 2013 01:36:32 +0000</pubDate>
				<category><![CDATA[Books]]></category>
		<category><![CDATA[Testing]]></category>
		<category><![CDATA[Thinking]]></category>
		<guid isPermaLink="false">http://testersnotebook.wordpress.com/?p=159</guid>

					<description><![CDATA[I am reading a book. (I&#8217;ll wait for your applause.) (Thank you.) I am reading Nassim Nicholas Taleb&#8217;s The Black Swan right now. I&#8217;m less than a hundred pages in, but I&#8217;m already convinced all human beings should read it. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I am reading a book. (I&#8217;ll wait for your applause.)</p>
<p>(Thank you.)</p>
<p>I am reading Nassim Nicholas Taleb&#8217;s <a href="http://www.amazon.com/The-Black-Swan-Improbable-Robustness/dp/081297381X/ref=sr_1_2?ie=UTF8&amp;qid=1364572187&amp;sr=8-2&amp;keywords=black+swan"><em>The Black Swan</em></a> right now. I&#8217;m less than a hundred pages in, but I&#8217;m already convinced all human beings should read it. I could wait to finish the whole thing and write a tidy little recap here, but I decided it would be more fun to witness how long it actually takes me to read a book by regularly posting &#8220;insights&#8221; &#8212; nuggets that, as I read them, make my tester brain cells wriggle.</p>
<p>So here is the first bit that I found worthy of reflection. In this quote, Taleb describes what he calls the &#8220;round-trip error&#8221; by referencing his earlier example of a turkey being fed every day for a thousand days, until one day (the Wednesday before Thanksgiving) he is not.</p>
<blockquote><p>&#8220;Someone who observed the turkey&#8217;s first thousand days (but not the shock of the thousand and first) would tell you, and rightly so, that there is <strong><em>no evidence</em></strong> of the possibility of large events, i.e., Black Swans. You are likely to confuse that statement, however, particularly if you do not pay close attention, with the statement that there is <strong><em>evidence of no possible</em></strong> Black Swans. Even though it is in fact vast, the logical distance between the two assertions will seem very narrow in your mind, so that one can be easily substituted for the other.&#8221;</p></blockquote>
<p>Notice the emphasis, which is the author&#8217;s own: The difference between <em>no evidence</em> of a Black Swan (an improbable event with extreme consequences &#8212; in this case, the turkey&#8217;s unexpected demise) and <em>evidence of no possible</em> Black Swan. Is this not one of the critical thought and communication challenges of a software tester? When the testing of a product reveals no evidence of critical bugs, it is easy &#8212; and biologically natural, according to Taleb &#8212; to mistake that for evidence that there are no critical bugs present.</p>
<p>The former assertion &#8212; no evidence of possible bugs &#8212; has meaning and impact that is mostly dependent on context. The mission of my testing and the particular sampling of tests I&#8217;ve chosen and executed, among other factors, will have a lot to say about what &#8220;no evidence of possible bugs&#8221; actually means, including whether more testing, and what tests in particular, could be valuable.</p>
<p>But the latter assertion &#8212; evidence that no possible bugs exist &#8212; has no meaning. It only has truth in the isolated island nation of Simplestan, where there is but one computer and one user, and where the software is so simple that, not only are all possible risks known, but it is possible to develop a finite number of tests to cover all possible bugs. (You may know Simplestan by one of its other names: Paradise, or Boringville.) In the rest of the world, we have to train ourselves to remember that &#8220;evidence that no possible bugs exist&#8221; is a falsehood &#8212; a seductive one (it feels so similar to the other!), but one that can negatively impact the quality of the product when testers and stakeholders are led to believe in it.</p>
<p>Now, it <em>feels</em> like I&#8217;ve always been very aware of all this. But I think that may just be evidence of how good this book is.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://testersnotebook.jeremywenisch.com/2013/04/01/insights-from-the-black-swan-part-1/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">159</post-id>	</item>
	</channel>
</rss>
