<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Teague Hopkins</title>
	<atom:link href="https://www.teaguehopkins.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.teaguehopkins.com/</link>
	<description>Mindful Product Management</description>
	<lastBuildDate>Fri, 13 Mar 2026 14:44:18 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2016/07/cropped-THG-big-inverse-square.png?fit=32%2C32&#038;ssl=1</url>
	<title>Teague Hopkins</title>
	<link>https://www.teaguehopkins.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">24631679</site>	<item>
		<title>The Familiar Crisis</title>
		<link>https://www.teaguehopkins.com/2026/03/the-familiar-crisis/</link>
					<comments>https://www.teaguehopkins.com/2026/03/the-familiar-crisis/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 14:44:18 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=20507</guid>

					<description><![CDATA[<p>On displacement, class, and the discovery that it was never unprecedented My previous two essays in this series argued that the AI debate is trapped in a false binary and that the trap is partly psychological: identity threats activating the nervous system before the prefrontal cortex can engage. Both generated thoughtful responses, but one pattern [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2026/03/the-familiar-crisis/">The Familiar Crisis</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><b>On displacement, class, and the discovery that it was never unprecedented</b></p>
<p><img data-recalc-dims="1" fetchpriority="high" decoding="async" data-attachment-id="20509" data-permalink="https://www.teaguehopkins.com/2026/03/the-familiar-crisis/teague_empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/03/Teague_Empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2.png?fit=1456%2C816&amp;ssl=1" data-orig-size="1456,816" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="empty-loading-dock" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/03/Teague_Empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2.png?fit=300%2C168&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/03/Teague_Empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2.png?fit=1024%2C574&amp;ssl=1" class="aligncenter size-large wp-image-20509" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/03/Teague_Empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2.png?resize=1024%2C574&#038;ssl=1" alt="" width="1024" height="574" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/03/Teague_Empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2.png?resize=1024%2C574&amp;ssl=1 
1024w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/03/Teague_Empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2.png?resize=300%2C168&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/03/Teague_Empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2.png?resize=768%2C430&amp;ssl=1 768w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/03/Teague_Empty_loading_dock_behind_a_brick_industrial_building__d5a58558-df8e-40fb-b9cb-9b0b90ec18ed_2.png?w=1456&amp;ssl=1 1456w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p><span style="font-weight: 400;">My previous two essays in this series argued that the AI debate is trapped in a false binary and that the trap is partly psychological: identity threats activating the nervous system before the prefrontal cortex can engage. Both generated thoughtful responses, but one pattern in the conversation kept nagging at me.</span></p>
<p><span style="font-weight: 400;">The loudest voices in the AI debate are overwhelmingly educated, credentialed, and professionally comfortable. Writers, academics, consultants, journalists. People with platforms, with op-ed access, with the vocabulary and institutional connections to make their anxiety visible at scale. The conversation about AI displacement is dominated by the people for whom displacement is novel.</span></p>
<h2><b>The Volume Is the Story</b></h2>
<p><span style="font-weight: 400;">The fear of AI-driven job loss is widespread; roughly three-quarters of Americans report concern about permanent displacement. But when Gallup tracked who became more anxious after ChatGPT launched, the spike occurred almost exclusively among college graduates. Workers without degrees, many of whom have lived through multiple waves of automation, barely moved.</span></p>
<p><span style="font-weight: 400;">That asymmetry matters less for what it says about the technology than for what it says about the conversation. Educated workers don&#8217;t just worry differently; they worry louder. They write the op-eds, deliver the keynotes, testify before Congress, and publish the Substack posts. When autoworkers in Flint lost their livelihoods in the 1980s, the displacement was structurally devastating, but the affected workers didn&#8217;t have media platforms. The cultural volume of that crisis never matched its scale. When journalists and professors feel threatened, the volume is immediate and immense, because the threatened class </span><i><span style="font-weight: 400;">is</span></i><span style="font-weight: 400;"> the class that controls the microphone.</span></p>
<p><span style="font-weight: 400;">This doesn&#8217;t mean the anxiety is illegitimate. Entry-level hiring for AI-exposed white-collar jobs has already dropped measurably, and job postings for corporate roles are declining while applications surge. The harm is real. But the ratio of attention to harm is historically anomalous: democracies allocate policy attention in proportion to the narrative power of the affected group, not the magnitude of the disruption.</span></p>
<h2><b>First Time for Everything</b></h2>
<p><span style="font-weight: 400;">The deeper issue is not volume; it&#8217;s novelty. Knowledge workers are encountering a specific experience for the first time: the discovery that structural economic forces don&#8217;t care about your credentials. That years of training and carefully built expertise can be devalued not because you did anything wrong, but because the economics shifted beneath you.</span></p>
<p><span style="font-weight: 400;">This experience is genuinely new to this class, but it is not remotely new.</span></p>
<p><span style="font-weight: 400;">Textile workers knew it in the 1810s. Switchboard operators knew it in the 1970s. Manufacturing workers knew it through the entire back half of the twentieth century. Coal miners, retail workers, truck drivers watching autonomous vehicles in development: millions of people have lived inside this experience, and most of them navigated it without media access, professional networks, or cultural sympathy.</span></p>
<p><span style="font-weight: 400;">The knowledge worker&#8217;s AI panic is parochial. It treats as unprecedented a pattern that has repeated for two centuries. What&#8217;s unprecedented is not the displacement; it&#8217;s who is being displaced.</span></p>
<h2><b>The Irony No One Wants to Name</b></h2>
<p><span style="font-weight: 400;">The professional class now threatened by AI is, in many cases, the same class that built and benefited from the intellectual framework being used against them. &#8220;Disruption&#8221; was a term of art in business schools and consulting firms long before it showed up in conversations about ChatGPT. Knowledge workers wrote the McKinsey reports on automation, taught the MBA courses on creative destruction, and consulted on the restructurings that eliminated manufacturing jobs. The prevailing theory, stated or implied, was that technological displacement was inevitable, that the market would reallocate labor efficiently, and that the affected workers needed to &#8220;reskill&#8221; and adapt.</span></p>
<p><span style="font-weight: 400;">That theory felt coherent when it described someone else&#8217;s problem. &#8220;Reskill&#8221; is easy advice to give when you&#8217;re not the one whose skills just lost their market value.</span></p>
<p><span style="font-weight: 400;">I am not arguing that knowledge workers deserve what&#8217;s happening to them; that framing misses the structural point entirely. I am arguing that the experience of being on the receiving end is producing, in real time, a recognition that the framework was always incomplete.</span></p>
<p><span style="font-weight: 400;">I suspect there are people in previously displaced communities watching this unfold with a weary, unsurprised acknowledgment: </span><i><span style="font-weight: 400;">now you know what it feels like</span></i><span style="font-weight: 400;">.</span></p>
<h2><b>What the Displaced Already Knew</b></h2>
<p><span style="font-weight: 400;">The people who&#8217;ve been through this before learned things the knowledge class is only now beginning to discover. They learned that individual merit does not protect you from structural forces. They learned that the market does not share its gains voluntarily. And they learned that the only reliable protection against displacement is collective: unions, mutual aid networks, transition funds, political coalitions that force redistribution rather than hoping for it.</span></p>
<p><span style="font-weight: 400;">Knowledge workers have almost none of this infrastructure. We have professional associations, but those are networking organizations, not bargaining units. We have LinkedIn, but that is a marketplace, not a coalition. We have cultural prestige, but prestige doesn&#8217;t negotiate severance packages or mandate transition support.</span></p>
<p><span style="font-weight: 400;">The scaffolding that would protect knowledge workers from displacement does not exist, largely because knowledge workers never believed they would need it. But the scaffolding that protected </span><i><span style="font-weight: 400;">other</span></i><span style="font-weight: 400;"> workers did work for a time. It has now been eroding for decades, since at least the 1980s: union membership has fallen from a third of the workforce to roughly ten percent, transition programs are chronically underfunded, and the regulatory frameworks that once mandated the distribution of gains, including taxes on the wealthy, have been systematically weakened. This was not an accident. Deregulation, tax reform, right-to-work legislation, the hollowing out of labor protections: these were policy choices, often supported or at least tolerated by the professional class that benefited from cheaper goods and services and frictionless markets. The tools that worked are still the tools that work. We have just spent forty years dismantling them because they weren&#8217;t working for us.</span></p>
<p><span style="font-weight: 400;">The communities that survived previous waves of automation could have told us what we&#8217;d need. Some of them did. We weren&#8217;t listening, because we couldn&#8217;t imagine that the lesson applied to us.</span></p>
<h2><b>The Opportunity in the Recognition</b></h2>
<p><span style="font-weight: 400;">The three-quarters of Americans who fear permanent job loss from AI are not making a technical prediction about artificial intelligence. They are making a social prediction about power. The fear is not &#8220;a machine will take my job.&#8221; It is &#8220;the people who benefit from this technology will not share the gains with me, and nothing in our current institutional landscape will make them.&#8221;</span></p>
<p><span style="font-weight: 400;">That fear is well-calibrated to history. It is also, for the knowledge class, a new sensation: the discovery that you are not exempt from the forces you&#8217;ve been theorizing about from a safe distance.</span></p>
<p><span style="font-weight: 400;">But novelty carries an advantage. The knowledge class has something previously displaced workers did not: cultural influence, institutional access, and the ability to shape narratives at scale. The question is what they do with those advantages now that displacement is personal rather than theoretical.</span></p>
<p><span style="font-weight: 400;">One option is to treat this as an unprecedented crisis requiring novel solutions, build protections for the professional class specifically, and continue ignoring the structural pattern. This is the path of least resistance, and it is the one the discourse is currently on.</span></p>
<p><span style="font-weight: 400;">The other option is harder. Recognize that the experience is not new, that the people who went through it before you are not cautionary tales but teachers, and use the cultural power this class possesses to build protective infrastructure that serves everyone: collective bargaining, transition support, governance frameworks that mandate the distribution of technological gains. Not because it&#8217;s noble, but because it&#8217;s what actually works, and because the people who learned that lesson the hard way have been trying to tell us for decades.</span></p>
<p><span style="font-weight: 400;">The factory workers could have told us this was coming. Some of them did. The question is whether we&#8217;re finally ready to listen.</span></p>
<p>The post <a href="https://www.teaguehopkins.com/2026/03/the-familiar-crisis/">The Familiar Crisis</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2026/03/the-familiar-crisis/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20507</post-id>	</item>
		<item>
		<title>Come Back Online</title>
		<link>https://www.teaguehopkins.com/2026/02/come-back-online/</link>
					<comments>https://www.teaguehopkins.com/2026/02/come-back-online/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Tue, 17 Feb 2026 00:40:49 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=20268</guid>

					<description><![CDATA[<p>Someone challenged a point I&#8217;d made about AI recently, and I noticed something unsettling: I was composing my rebuttal before I&#8217;d finished reading their argument. Not weighing it; defending against it. My chest was tight, the sympathetic nervous system fully online before my prefrontal cortex had a chance to weigh in. I caught it that [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2026/02/come-back-online/">Come Back Online</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" decoding="async" data-attachment-id="20271" data-permalink="https://www.teaguehopkins.com/2026/02/come-back-online/teague_documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/02/Teague_Documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1.png?fit=1456%2C816&amp;ssl=1" data-orig-size="1456,816" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/02/Teague_Documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1.png?fit=300%2C168&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/02/Teague_Documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1.png?fit=1024%2C574&amp;ssl=1" class="aligncenter size-large wp-image-20271" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/02/Teague_Documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1.png?resize=1024%2C574&#038;ssl=1" alt="" width="1024" height="574" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/02/Teague_Documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1.png?resize=1024%2C574&amp;ssl=1 
1024w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/02/Teague_Documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1.png?resize=300%2C168&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/02/Teague_Documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1.png?resize=768%2C430&amp;ssl=1 768w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2026/02/Teague_Documentary_photograph_of_an_empty_modern_conference_r_c972e635-cf8d-43a6-b722-c11458149502_1.png?w=1456&amp;ssl=1 1456w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p><span style="font-weight: 400;">Someone challenged a point I&#8217;d made about AI recently, and I noticed something unsettling: I was composing my rebuttal before I&#8217;d finished reading their argument. Not weighing it; defending against it. My chest was tight, the sympathetic nervous system fully online before my prefrontal cortex had a chance to weigh in.</span></p>
<p><span style="font-weight: 400;">I caught it that time, though I don&#8217;t always.</span></p>
<p><span style="font-weight: 400;">That reaction is what this essay is about, not because it was unusual, but because I keep watching the same thing happen to people I respect, and the pattern has become impossible to ignore.</span></p>
<p><span style="font-weight: 400;">Smart, thoughtful people who reason carefully about hard problems in every other area of their lives encounter the topic of AI and go rigid. Their thinking narrows. They reach for analogies instead of evidence, collapse complex questions into binary positions, and defend those positions with an intensity that has nothing to do with the strength of the underlying argument.</span></p>
<p><span style="font-weight: 400;">A few weeks ago, I wrote about </span><a href="https://teaguehopkins.substack.com/p/the-false-choice"><span style="font-weight: 400;">the false choice</span></a><span style="font-weight: 400;"> in the AI debate: the idea that we&#8217;re forced to pick between uncritical acceleration and categorical opposition, when neither position survives contact with reality. The response was telling. People who agreed mostly engaged with the argument; people who disagreed mostly didn&#8217;t. Someone raises a specific point, and the response leapfrogs past it to a broader grievance: factory labor analogies, environmental impact, copyright, corporate exploitation narratives, sweeping claims about the death of critical thinking. Not wrong, necessarily, but not engaged with the thing I actually said. Pre-loaded, as if the conclusion had been reached before the conversation started and the reasoning existed only to justify it.</span></p>
<p><span style="font-weight: 400;">I keep catching myself tempted to do the same thing, and the pattern is so consistent, so unlike how these same people engage with other difficult topics, that I&#8217;ve stopped believing this is a thinking problem. Something else is going on, and I think naming it matters, because the thing driving this reaction isn&#8217;t just making the conversation unproductive; it&#8217;s actively dangerous.</span></p>
<h2><span style="font-weight: 400;">What Fear Looks Like When It&#8217;s Wearing a Suit</span></h2>
<p><span style="font-weight: 400;">Here&#8217;s what I&#8217;ve come to believe is happening, and I include myself in this.</span></p>
<p><span style="font-weight: 400;">For knowledge workers, educators, and writers, AI doesn&#8217;t just represent a new technology to evaluate. It represents a direct challenge to the thing that earns us our seat at the table.</span></p>
<p><span style="font-weight: 400;">If you spent years learning to write clearly, and a tool now helps anyone write clearly, the tool hasn&#8217;t just changed the landscape; it&#8217;s devalued your specific investment. If your professional identity is built on credentialed expertise and the ability to articulate complex ideas in polished prose, a technology that commoditizes articulation feels less like disruption and more like erasure. I&#8217;ve built a career on being the person who sees systems clearly and helps organizations navigate complexity, and if AI gets good enough at that, I&#8217;m not sure what I still have to offer. I&#8217;ve sat with that question, and it&#8217;s not comfortable.</span></p>
<p><span style="font-weight: 400;">This isn&#8217;t a rational calculation any of us are making consciously. It&#8217;s an identity threat, and identity threats activate the nervous system before they reach the prefrontal cortex. When your brain perceives a threat to who you are, it doesn’t give you your best thinking. It gives you pattern-matching, where this looks like past exploitation and so it must be exploitation. It gives you position-defending, interpreting new information through a filter you&#8217;ve already locked in. It gives you analogies that reach for historical parallels not because they&#8217;re analytically apt but because they </span><i><span style="font-weight: 400;">feel</span></i><span style="font-weight: 400;"> right, and feeling right is enough when your nervous system is running the show.</span></p>
<p><span style="font-weight: 400;">The result looks like a principled argument and sounds like one, but it has a tell: it doesn&#8217;t update. You can present new evidence, draw finer distinctions, point to specific use cases where the concerns don&#8217;t apply, and the response doesn&#8217;t shift; it just reasserts, often with more emotional intensity. That&#8217;s not reasoning; that&#8217;s defense.</span></p>
<p><span style="font-weight: 400;">I know what it feels like from the inside, because I&#8217;ve done it. When you&#8217;re in it, it feels like clarity, like you&#8217;re the one seeing the situation for what it really is, which is exactly what makes it so hard to catch.</span></p>
<h2><span style="font-weight: 400;">Ancient Hardware, Modern Problem</span></h2>
<p><span style="font-weight: 400;">The fear is legitimate; if you&#8217;ve built your career on skills that are genuinely being commoditized, you should feel unsettled, because that&#8217;s an appropriate response to a real situation.</span></p>
<p><span style="font-weight: 400;">The problem is that the response is running on ancient hardware. Pattern-matching, rapid threat assessment, in-group loyalty, the tendency to act first and analyze later: these heuristics kept our ancestors alive, and they&#8217;re brilliantly adapted for a world where threats are physical, immediate, and binary. A rustle in the grass really is either a predator or nothing; the cost of a false positive is a wasted sprint, while the cost of a false negative is death. Of course we evolved to run first and ask questions later.</span></p>
<p><span style="font-weight: 400;">But AI isn&#8217;t a predator in the grass. It&#8217;s a complex, evolving system with unevenly distributed risks and benefits that will play out over decades, and the heuristics that protected us from lions are worse than useless here; they actively prevent us from seeing the situation clearly. When the sympathetic nervous system takes over, we lose access to exactly the cognitive tools the moment demands: nuance, uncertainty tolerance, the ability to hold competing truths simultaneously.</span></p>
<p><span style="font-weight: 400;">What separates productive engagement from defensive reaction is deceptively simple: the ability to notice when the alarm is firing and choose not to let it drive. This is, at its core, a mindfulness problem; not in the scented-candle sense, but in the most practical sense imaginable. Can you observe your own nervous system responding, recognize that the response is not the same thing as the reality, and create enough space between stimulus and reaction to engage with what&#8217;s actually in front of you?</span></p>
<p><span style="font-weight: 400;">The people I know who are engaging most productively with AI, whether as enthusiasts or as critics, share this: they&#8217;ve learned to notice when they&#8217;re reacting and pause long enough to start thinking instead. What they see when they do is worth paying attention to.</span></p>
<h2><span style="font-weight: 400;">What Clear Eyes See</span></h2>
<p><span style="font-weight: 400;">When you get past the reflexive fear and look at the actual landscape, the stakes are larger and more urgent than anything we&#8217;ve been arguing about.</span></p>
<p><span style="font-weight: 400;">The concentration of AI capabilities in the hands of a few companies and nations is accelerating. Biological and cybersecurity risks scale with model capability. The potential for economic disruption severe enough to destabilize democracies is growing, and we have no precedent for managing it. The window for building governance frameworks is finite, and it&#8217;s narrowing.</span></p>
<p><span style="font-weight: 400;">Dario Amodei, the CEO of Anthropic, recently described this moment as </span><a href="https://www.darioamodei.com/essay/the-adolescence-of-technology"><span style="font-weight: 400;">the adolescence of technology</span></a><span style="font-weight: 400;">: a period where humanity will have access to almost unimaginable power before we&#8217;ve developed the maturity to wield it. Yes, he runs an AI company, and that&#8217;s a conflict of interest worth noting, but the risks he catalogs don&#8217;t become less real because of who&#8217;s describing them. If anything, the fact that someone inside the industry is sounding these alarms should sharpen our attention.</span></p>
<p><span style="font-weight: 400;">The &#8220;adolescence&#8221; metaphor works in a second direction, one I don&#8217;t think Amodei intended. The discourse about AI is adolescent too, in a specific psychological sense; not because it lacks intelligence, but because it lacks the ability to regulate emotional responses long enough to engage with complexity. It personalizes everything, collapses nuance into loyalty tests, and confuses the intensity of feeling with the validity of position.</span></p>
<p><span style="font-weight: 400;">Whether someone used AI to edit a social media post is not among the real dangers. The real dangers are playing out at a scale and speed that demands the best thinking we have, and that thinking is largely absent from the conversation.</span></p>
<p><span style="font-weight: 400;">Somewhere right now, decisions are being made about compute infrastructure, data rights, economic transition policy, and the acceptable boundaries of autonomous AI systems, decisions that will shape the next several decades. The table where they&#8217;re being made has empty chairs. The people who should be filling those chairs—educators who understand how humans actually learn, writers who understand the relationship between language and thought, labor advocates who understand economic displacement, ethicists who&#8217;ve spent careers on exactly these questions—are standing outside the room, arguing about whether the room should exist.</span></p>
<p><span style="font-weight: 400;">I get it: the room shouldn&#8217;t have been built this way, the power dynamics are wrong, and the incentives are misaligned. All true. But the room exists, the decisions are being made, and refusing to engage isn&#8217;t the principled stand it feels like. The people already at the table are happy to proceed without us.</span></p>
<p><span style="font-weight: 400;">Here&#8217;s the part that nobody on the sidelines wants to hear, the part I think matters more than anything else in this essay: if you aren&#8217;t engaging with the technology directly, learning how it actually works, what it can and can&#8217;t do, where it breaks, where it surprises you, then you are forfeiting the ability to have an informed opinion about it, and an uninformed opinion, no matter how principled its origins, is just noise.</span></p>
<p><span style="font-weight: 400;">I understand the impulse to boycott; it feels like integrity, like refusing to be complicit. But the practical effect is this: the only people developing deep, hands-on knowledge of what these systems actually do are the people building and selling them. If humanists, educators, ethicists, and labor advocates cede that ground, if the only people who truly understand the technology are the ones motivated by market share, then those are the people who will write the policies, set the defaults, and define what &#8220;responsible AI&#8221; means. We will get a future shaped entirely by the people with the least incentive to protect what we care about most.</span></p>
<p><span style="font-weight: 400;">This is the central paradox of principled disengagement: by refusing to touch the technology, you surrender the exact knowledge you would need to hold it accountable. You can&#8217;t effectively critique what you don&#8217;t understand, propose alternatives to systems you&#8217;ve never examined, or spot the gaps in a governance framework if you don&#8217;t know what the technology is actually capable of today—not what someone told you six months ago, not what you extrapolated from a headline, but what it does right now when you sit with it and push.</span></p>
<p><span style="font-weight: 400;">The critics I find most useful aren&#8217;t the ones who&#8217;ve decided AI is irredeemable; they&#8217;re the ones who&#8217;ve gotten specific: </span><i><span style="font-weight: 400;">this</span></i><span style="font-weight: 400;"> model was trained on </span><i><span style="font-weight: 400;">this</span></i><span style="font-weight: 400;"> data without </span><i><span style="font-weight: 400;">this</span></i><span style="font-weight: 400;"> consent, </span><i><span style="font-weight: 400;">that</span></i><span style="font-weight: 400;"> deployment in </span><i><span style="font-weight: 400;">that</span></i><span style="font-weight: 400;"> context produced </span><i><span style="font-weight: 400;">those</span></i><span style="font-weight: 400;"> measurable harms, </span><i><span style="font-weight: 400;">this</span></i><span style="font-weight: 400;"> governance framework has </span><i><span style="font-weight: 400;">these</span></i><span style="font-weight: 400;"> gaps. That kind of opposition actually changes outcomes, and every single one of those critics got there by engaging with the technology closely enough to know where the real leverage points are.</span></p>
<h2><span style="font-weight: 400;">Come Back Online</span></h2>
<p><span style="font-weight: 400;">The tables where AI&#8217;s future is being shaped, from congressional halls to conference calls, need more than technologists and investors. They need people who understand learning, language, labor, equity, and power: people who&#8217;ve spent their careers on those problems and whose expertise is not diminished by AI but made more essential by it. That expertise only counts, though, if it&#8217;s grounded in firsthand knowledge of the thing you&#8217;re trying to shape. The world doesn&#8217;t need more people with strong opinions about AI and no experience of it. It needs people who&#8217;ve done the work to understand it and have the values to insist it serve everyone, not just the people who built it.</span></p>
<p><span style="font-weight: 400;">This is an invitation, but it&#8217;s also an invocation. Our nervous systems are telling us to fight or flee, and the moment calls for neither. It calls for the hard, unglamorous work of showing up, getting our hands dirty with the actual technology, paying close attention to what it&#8217;s doing and what it isn&#8217;t, and insisting that the people building the future account for its consequences. We need you in this conversation—not your fear, not your reflexes, </span><i><span style="font-weight: 400;">you</span></i><span style="font-weight: 400;">: your expertise, your values, your willingness to grapple with something genuinely hard.</span></p>
<p><span style="font-weight: 400;">The fear is real, and the threat is real, but the threat isn&#8217;t the tool. It&#8217;s that we&#8217;ll spend the critical window for shaping it arguing about the wrong things, or worse, refusing to engage with it at all, while the people with the fewest scruples shape it without opposition.</span></p>
<p><span style="font-weight: 400;">You are too important to sit this out. The stakes are too high for anything less than our best thinking.</span></p>
<p>The post <a href="https://www.teaguehopkins.com/2026/02/come-back-online/">Come Back Online</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2026/02/come-back-online/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20268</post-id>	</item>
		<item>
		<title>The False Choice</title>
		<link>https://www.teaguehopkins.com/2025/11/the-false-choice/</link>
					<comments>https://www.teaguehopkins.com/2025/11/the-false-choice/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Tue, 18 Nov 2025 04:41:16 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=19890</guid>

					<description><![CDATA[<p>Why History Teaches Us to Reject Faith-Based Arguments on Both Sides of the AI Debate In the era of AI, there are extremists on both sides of the issue. One side, which I have taken to calling the “accelerationists,” thinks AI will solve major societal problems and that we shouldn’t make rules, because that will [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/11/the-false-choice/">The False Choice</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2 style="text-align: center;">Why History Teaches Us to Reject Faith-Based Arguments on Both Sides of the AI Debate</h2>
<p><span style="font-weight: 400;">In the era of AI, there are extremists on both sides of the issue. One side, which I have taken to calling the “accelerationists,” thinks AI will solve major societal problems and that we shouldn’t make rules, because that will just slow things down. The other side, &#8220;critics,&#8221; issues doomsday warnings about environmental harm, stolen content, lost jobs, and even the end of humanity, often arguing that AI development should be paused indefinitely.</span></p>
<p><span style="font-weight: 400;">Both sides are deploying a sort of faith-based reasoning: treating unknowns as certainties and prediction as destiny. But lessons from history show us that technologies’ effects are driven by how we choose to use them, not necessarily by the inherent qualities of the technology itself. I advocate for rejecting both unchecked optimism and doom in favor of a practical approach: measuring what actually happens and figuring out how to put in place parameters that encourage the benefits and mitigate the harms.</span></p>
<p><img data-recalc-dims="1" decoding="async" data-attachment-id="19892" data-permalink="https://www.teaguehopkins.com/2025/11/the-false-choice/gemini_generated_image_wu28d8wu28d8wu28/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/Gemini_Generated_Image_wu28d8wu28d8wu28-e1763440840626.png?fit=1024%2C374&amp;ssl=1" data-orig-size="1024,374" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Gemini_Generated_Image_wu28d8wu28d8wu28" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/Gemini_Generated_Image_wu28d8wu28d8wu28-e1763440840626.png?fit=300%2C110&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/Gemini_Generated_Image_wu28d8wu28d8wu28-e1763440840626.png?fit=1024%2C374&amp;ssl=1" class="aligncenter wp-image-19892 size-full" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/Gemini_Generated_Image_wu28d8wu28d8wu28-e1763440840626.png?resize=1024%2C374&#038;ssl=1" alt="The composition is built from simple geometric shapes rather than realistic people: a horizontal balance beam or tightrope stretches across the image, with one end tinted in cool blues and teals and the other in warm reds and oranges, representing two opposing extremes." 
width="1024" height="374" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/Gemini_Generated_Image_wu28d8wu28d8wu28-e1763440840626.png?w=1024&amp;ssl=1 1024w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/Gemini_Generated_Image_wu28d8wu28d8wu28-e1763440840626.png?resize=300%2C110&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/Gemini_Generated_Image_wu28d8wu28d8wu28-e1763440840626.png?resize=768%2C281&amp;ssl=1 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<h2><b>Critics’ Concerns</b></h2>
<p><span style="font-weight: 400;">When accelerationists dismiss AI critics as &#8220;doomers,&#8221; they&#8217;re missing what&#8217;s actually happening. The opposition includes multiple groups with distinct, valid concerns:​</span></p>
<p><span style="font-weight: 400;">Environmental critics note that training one large language model emits as much CO<sub>2</sub> as five cars over their lifetimes, and AI data centers are projected to consume 3-4% of global electricity by 2030.</span></p>
<p><span style="font-weight: 400;">Artists and creators watched companies scrape their copyrighted work without permission or payment.</span></p>
<p><span style="font-weight: 400;">Practical skeptics tried the tools, found them underwhelming, and see another Silicon Valley bubble. Sometimes this comes from not knowing how to use AI well, but often it reflects genuine disappointment when capabilities don&#8217;t match the marketing.​</span></p>
<p><span style="font-weight: 400;">Workers see a real risk of displacement as automation eliminates the need for specific jobs.</span></p>
<p><span style="font-weight: 400;">Finally, those worried about existential risk have concerns ranging from AI-enabled authoritarianism to extinction scenarios.​</span></p>
<p><span style="font-weight: 400;">What brings these groups together is shared anxiety about rapid deployment without governance. But many arguments rest on projections rather than measured impacts.​</span></p>
<h2><b>The Pattern: Resistance Has Never Stopped Valuable Technology</b></h2>
<p><span style="font-weight: 400;">Every major automation wave has faced opposition.</span></p>
<p><span style="font-weight: 400;">When the printing press arrived in Venice in the 1470s, the monk Filippo de Strata wrote to the Doge comparing the pen to a &#8220;virgin&#8221; and the printing press to a &#8220;whore.&#8221; Some scribes destroyed presses. </span></p>
<p><span style="font-weight: 400;">In the 1810s, the Luddites opposed mechanized looms because automation genuinely threatened their economic position. </span></p>
<p><span style="font-weight: 400;">When electric lighting threatened the gas industry in the 1880s, lamplighters in Belgium smashed electric lamps, and the gas industry mounted aggressive campaigns to discredit electricity. Even within the electric camp, Edison arranged public electrocutions of animals to portray his rivals&#8217; AC current as lethally dangerous.</span></p>
<p><span style="font-weight: 400;">In 1970, telecommunications employed 421,000 switchboard operators, compared to about 78,000 today with automation. Operators in cities that transitioned to mechanical switching were substantially less likely to have any job 10 years later. Older workers were 7% less likely to be working at all.​</span></p>
<p><span style="font-weight: 400;">Critics of past technologies often correctly identified harms that society then struggled to address. Automobiles did contribute to climate change. The Internet did enable surveillance capitalism. Early warnings proved prescient, even when they didn&#8217;t stop deployment. Uncritical acceleration created locked-in harms because we chose speed over governance.​</span></p>
<p><span style="font-weight: 400;">But in every case, the technology became ubiquitous and now shapes our modern lives.</span></p>
<p><span style="font-weight: 400;">Here&#8217;s the crucial lesson: resistance alone has never stopped technologies offering substantial economic advantages. If AI provides genuine value—and evidence increasingly suggests it does in specific domains—categorical opposition won&#8217;t prevent deployment. But the other side of that lesson is that policies shape how technology affects people. The impact on phone operators was less severe in cities that offered transition programs than in cities that did not—even though the technology was exactly the same everywhere.​</span></p>
<h2><b>Accelerationists’ Hopes &#8211; and the Reality</b></h2>
<p><span style="font-weight: 400;">Accelerationists make equally unsupported claims. They promise AI will cure cancer, solve climate change, and create widespread wealth—all based on speculation about future capabilities.​</span></p>
<p><span style="font-weight: 400;">Accelerationists imagine that everyone will have an AI therapist that exactly matches their needs &#8211; but AI sycophancy has proven a persistent problem, and some AI interactions have even encouraged suicidal ideation.</span></p>
<p><span style="font-weight: 400;">They envision a future where AI can manage large-scale farming and production, delivering us everything we need with no oversight and dramatically reducing costs &#8211; but AI has only been able to take over very specific, concrete tasks within production processes.</span></p>
<p><span style="font-weight: 400;">They see AI transforming education and health care by putting experts in everyone’s pockets &#8211; but neither realm has yet seen large-scale successes.</span></p>
<p><span style="font-weight: 400;">They hope that the AI revolution will bring us to the technological singularity, dramatically changing every aspect of human life and allowing us to transcend our current conditions &#8211; but we don’t know if we’re days or centuries away from something that could be called artificial general intelligence, or if we’ll ever get there.</span></p>
<p><span style="font-weight: 400;">Despite massive investments, we don’t see clear evidence of economy-wide productivity improvements. And we certainly haven’t cured cancer, solved climate change, or erased income inequality yet. Individual case studies show promise, but broad transformation requires multiple leaps that we haven’t yet made.​</span></p>
<p><span style="font-weight: 400;">The accelerationist position treats beneficial outcomes as automatic and dismisses concerns as obstacles.​</span></p>
<h2><b>What We Should Be Asking: The Pragmatic Path</b></h2>
<p><span style="font-weight: 400;">While critics extrapolate disaster and accelerationists promise transformation, the answerable questions aren&#8217;t getting attention:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">What are the actual, measured productivity gains from AI adoption in different contexts?</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Which specific jobs face displacement, on what timeline, and with what support systems?</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">What are realistic energy consumption trajectories, accounting for both scaling and efficiency improvements?</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Which governance frameworks balance innovation with harm mitigation?</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">How do we measure value creation versus value extraction?</span></li>
</ul>
<p><span style="font-weight: 400;">These require data, experimentation, and measurement—not speculation.​</span></p>
<p><span style="font-weight: 400;">Societies that navigated previous technological transitions best measured actual displacement, funded concrete transition support, and adjusted based on outcomes. The post-WWII GI Bill and 1950s automation programs at companies like General Electric showed this pragmatic path.​</span></p>
<p><span style="font-weight: 400;">We can do the same with AI:​</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>Measure real effects</b><span style="font-weight: 400;">. Focus on what AI actually does to jobs, energy consumption, and productivity. Use that data to inform policy.​</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Support those who get hurt</b><span style="font-weight: 400;">. History shows poorer and older workers suffer most. Experiment with different support mechanisms—retraining programs, financial assistance, even UBI pilots—and measure what works.​</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Build adaptive governance</b><span style="font-weight: 400;">. Create regulatory frameworks that evolve as AI capabilities change. Require transparency, mandate fairness testing, and ensure workers have a voice in deployment decisions.​</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Recognize limits</b><span style="font-weight: 400;">. Deploy AI for specific tasks where it genuinely excels. Keep humans in the loop for high-stakes decisions.​</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Address energy consumption</b><span style="font-weight: 400;">. Invest aggressively in efficiency improvements and clean power, and track actual results.​</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Compensate creators fairly</b><span style="font-weight: 400;">. Experiment with models for compensating artists and writers when AI trains on their work while still enabling useful applications.​</span></li>
</ul>
<h2><b>Critical Thinking Over Faith</b></h2>
<p><span style="font-weight: 400;">The choice between acceleration and opposition is false. The real choice is between making decisions based on imagined futures and building the governance infrastructure that lets us learn and adapt as we go.</span></p>
<p><span style="font-weight: 400;">This isn&#8217;t the exciting narrative of revolutionary transformation or existential threat. It&#8217;s the boring work of measuring impacts, building adaptive systems, and making policy choices based on what we actually observe. Technologies don&#8217;t determine outcomes—we can, and we should.​</span></p>
<p>The post <a href="https://www.teaguehopkins.com/2025/11/the-false-choice/">The False Choice</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2025/11/the-false-choice/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19890</post-id>	</item>
		<item>
		<title>The Three Stages of AI Adoption</title>
		<link>https://www.teaguehopkins.com/2025/11/the-three-stages-of-ai-adoption/</link>
					<comments>https://www.teaguehopkins.com/2025/11/the-three-stages-of-ai-adoption/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Tue, 04 Nov 2025 13:30:37 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=19839</guid>

					<description><![CDATA[<p>How to Actually Deliver Value Most leadership teams I talk to are wrestling with the same question: How fast can we leverage AI? There&#8217;s pressure, sometimes self-imposed, sometimes from the board, to deliver AI transformation yesterday. That pressure creates a temptation to build something ambitious and fully automated right now. I&#8217;ve watched this play out, [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/11/the-three-stages-of-ai-adoption/">The Three Stages of AI Adoption</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><strong>How to Actually Deliver Value</strong></h2>
<p><span style="font-weight: 400;">Most leadership teams I talk to are wrestling with the same question: How fast can we leverage AI? There&#8217;s pressure, sometimes self-imposed, sometimes from the board, to deliver AI transformation yesterday. That pressure creates a temptation to build something ambitious and fully automated right now.</span></p>
<p><span style="font-weight: 400;">I&#8217;ve watched this play out, and it almost never works the way people hope.</span></p>
<p><span style="font-weight: 400;">There&#8217;s a better path, borrowing from how product teams that actually move the needle operate. It has three stages. Each stage teaches you what you need to know before the next one. Start with giving your teams tools and watch what they do with them. Next, automate the workflows that prove repetitive. Finally, move to full autonomy &#8211; if and only if the math justifies it.<br />
<img data-recalc-dims="1" loading="lazy" decoding="async" data-attachment-id="19842" data-permalink="https://www.teaguehopkins.com/2025/11/the-three-stages-of-ai-adoption/control-agency/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?fit=2048%2C2048&amp;ssl=1" data-orig-size="2048,2048" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Control &lt;-&gt; Agency" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?fit=300%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?fit=1024%2C1024&amp;ssl=1" class="aligncenter size-large wp-image-19842" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?resize=1024%2C1024&#038;ssl=1" alt="Three white panels on a dark background show a progression from ‘Tool Usage’ with a wrench icon, to ‘Automate with HITL’ with a human figure, to ‘Automate without HITL’ with a simple robot icon; beneath, a left‑to‑right gradient arrow labeled ‘Control’ on the left and ‘Agency’ on the right illustrates increasing autonomy." 
width="1024" height="1024" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?resize=1024%2C1024&amp;ssl=1 1024w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?resize=768%2C768&amp;ssl=1 768w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?resize=1536%2C1536&amp;ssl=1 1536w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/11/control-agency.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></span><span style="font-weight: 400;">I’ve seen this work for product development teams, marketing teams, and customer care teams. The pattern is the same.</span></p>
<h2><b>Stage 1: Give Them Tools, Then Watch</b></h2>
<p><span style="font-weight: 400;">This first stage is so simple it feels like you&#8217;re not doing enough. You&#8217;re not building anything. You&#8217;re equipping and enabling.</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A marketing team copy-pastes product info into ChatGPT to generate social posts. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A customer care team uses Claude to draft responses. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Sales researches prospects.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Finance extracts invoice data.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Engineering uses Cursor to help with code comments and documentation.</span></li>
</ul>
<p><span style="font-weight: 400;">Humans decide everything: When to use the tool; how to prompt it; whether the output is usable; what to do with the result. The AI is an assistant inside a manual workflow.</span></p>
<p><span style="font-weight: 400;">It can feel slow, but it&#8217;s valuable learning time: you discover which tasks are genuinely repetitive, which workflows jam people up, and which problems your teams actually want solved. You get real data to work from instead of guessing.</span></p>
<p><span style="font-weight: 400;">There&#8217;s a confidence piece too. Your teams develop intuition about what these systems can and can&#8217;t do. They learn how to prompt effectively. They hit the edge cases and see the failure modes in real time, and that ground truth matters because it shapes everything that follows.</span></p>
<h2><b>Stage 2: Automate, But Keep a Human in the Loop</b></h2>
<p><span style="font-weight: 400;">The next thing you look for is repetition. When someone&#8217;s running the same prompt over and over, or they&#8217;re sharing a working prompt with teammates who have the same problem, it’s time to start building.</span></p>
<p><span style="font-weight: 400;">This stage is where you create automation that runs the AI process and surfaces the result for human review and approval. </span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A support system auto-drafts responses for agents to edit. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A marketing platform generates social variants for the team to choose from. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A finance tool categorizes invoices and flags unusual spend for review.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A sales system scores leads and suggests follow-up timing for reps to execute.</span></li>
</ul>
<p><span style="font-weight: 400;">You&#8217;re shifting from one person making decisions to many people making decisions faster. AI handles the repetitive thinking. Humans handle judgment, context, and protection. You&#8217;re also building a feedback loop. Each approval, and each rejection, teaches the system what good looks like inside your organization.</span></p>
<p><span style="font-weight: 400;">The jump from Stage 1 to Stage 2 takes engineering work, but now it&#8217;s justified because you already validated that the problem is real.</span></p>
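<p><span style="font-weight: 400;">As an illustrative sketch only (the function names and queue shape here are hypothetical, not a prescribed stack), a Stage 2 workflow can be as small as a draft step, a human approval gate, and a feedback log:</span></p>

```python
# Minimal Stage 2 sketch: AI drafts, a human approves, every decision is logged.
# `generate_draft` is a stand-in for whatever model call your stack actually uses.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ReviewQueue:
    # Approvals and rejections accumulate here, teaching the system
    # what "good" looks like inside your organization.
    feedback: list = field(default_factory=list)

    def generate_draft(self, ticket: str) -> str:
        # Placeholder for a real model call (e.g. an LLM API).
        return f"Draft reply for: {ticket}"

    def review(self, ticket: str, approve) -> Optional[str]:
        draft = self.generate_draft(ticket)
        approved = approve(draft)  # a human decides; AI output never ships alone
        self.feedback.append((ticket, draft, approved))
        return draft if approved else None  # only approved drafts reach the customer


queue = ReviewQueue()
sent = queue.review("Where is my order?", approve=lambda draft: True)
```

<p><span style="font-weight: 400;">The point of the design is the feedback log: it is the data you will need later to decide whether Stage 3 is ever justified.</span></p>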
<h2><b>Stage 3: Fully Autonomous</b></h2>
<p><span style="font-weight: 400;">At Stage 3, we finally try out full automation. This is the first time we consider having no human in the loop: the system runs end-to-end, with output flowing straight into the business process.</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A marketing platform auto-generates, schedules, and publishes social posts.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A finance tool auto-categorizes transactions and settles approved invoices.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A sales system auto-scores leads, assigns them to reps, and schedules outreach.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A support system auto-routes tickets to specialists and responds directly to customers.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">An inventory system monitors stock levels and automatically places supplier orders when thresholds are breached.</span></li>
</ul>
<p><span style="font-weight: 400;">But &#8220;Ready for Stage 3&#8221; depends on your function and how much risk you can tolerate.</span></p>
<p><span style="font-weight: 400;">For low-stakes work, 80% accuracy might be fine. An auto-generated social post that occasionally needs a tweak? That might be acceptable. Higher-stakes decisions might require 95% or 99% accuracy. If you’re giving medical diagnoses, detecting fraud, or making decisions that could damage customers, you might never remove the human, or only when AI itself advances far beyond where we are now.</span></p>
<p><span style="font-weight: 400;">Stage 3 isn&#8217;t a universal destination. It&#8217;s an option, available when the risk-reward equation justifies it.</span></p>
<h2><b>The Counterintuitive Speed Argument</b></h2>
<p><span style="font-weight: 400;">Here&#8217;s what challenges most people’s instincts: when you&#8217;re pressured to show AI value fast, the reflex is to skip Stages 1 and 2 entirely. You build ambitious systems immediately with heavy infrastructure investment to reach your vision.</span></p>
<p><span style="font-weight: 400;">This approach delays real value.</span></p>
<p><span style="font-weight: 400;">Without the learning that comes from Stage 1, you&#8217;re automating problems you&#8217;ve guessed at, not problems people have. You burn engineering cycles on workflows that might not create the value you expected. You ship systems where you haven&#8217;t actually seen their failure modes in your business.</span></p>
<p><span style="font-weight: 400;">The paradox: starting with Stage 1, just giving teams tools and watching, is faster in the end. It cuts straight to problems worth solving. It creates organizational conviction about AI through observation, not assertion. It identifies which automation bets are high-leverage before you spend resources on them.</span></p>
<p><span style="font-weight: 400;">Stage 2 then becomes a focused, high-confidence engineering effort. Stage 3 happens naturally when the data supports it, not when the calendar demands it.</span></p>
<h2><b>Moving Forward</b></h2>
<p><span style="font-weight: 400;">The three-stage approach isn&#8217;t flashy. You don’t get to write a press release about &#8220;fully autonomous AI agents.&#8221; What you get is value creation that is anchored in real problems, scales because your teams understand it, and compounds because you understood what actually matters before you built it.</span></p>
<p><span style="font-weight: 400;">Start with tools, observe, and automate what works. Move to autonomy only when it makes sense.</span></p>
<p><span style="font-weight: 400;">This path to value is faster than it appears, and vastly faster than sprinting to Stage 3 without first understanding what&#8217;s worth automating.</span></p>
<p>The post <a href="https://www.teaguehopkins.com/2025/11/the-three-stages-of-ai-adoption/">The Three Stages of AI Adoption</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2025/11/the-three-stages-of-ai-adoption/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19839</post-id>	</item>
		<item>
		<title>Bits, Atoms, and Neurons</title>
		<link>https://www.teaguehopkins.com/2025/10/bits-atoms-and-neurons/</link>
					<comments>https://www.teaguehopkins.com/2025/10/bits-atoms-and-neurons/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Sun, 26 Oct 2025 23:39:54 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=19806</guid>

					<description><![CDATA[<p>For years, we talked about the friction between the digital and physical worlds; between bits and atoms. Bits moved at the speed of light through fiber optic cables, while atoms plodded along in trucks and on conveyor belts. The promise of technology kept hitting the wall of physical logistics. But then something changed: we largely [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/10/bits-atoms-and-neurons/">Bits, Atoms, and Neurons</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>For years, we talked about the friction between the digital and physical worlds; between bits and atoms. Bits moved at the speed of light through fiber optic cables, while atoms plodded along in trucks and on conveyor belts. The promise of technology kept hitting the wall of physical logistics. But then something changed: we largely solved the atoms problem. Warehouse automation has doubled processing speeds and reduced errors by 99%.¹ AI-powered route optimization has reduced delivery times by 20-30% and improved on-time delivery rates by up to 40%.² Same-day delivery is now routine, not miraculous.</p>
<p>The new constraint isn&#8217;t physical anymore. <strong>Wetware has become the limiting factor.</strong></p>
<h2>The Hierarchy of Change</h2>
<p>Consider the speed at which different substrates can change. Computer processors running at 3.2 GHz execute 3.2 billion cycles per second with each cycle taking roughly 0.3 nanoseconds. Data transmission across networks operates on millisecond timescales, with good internet latency ranging from 30-100ms. Sending data takes seconds, maybe less.³</p>
<p>Physical packages take days. Standard domestic shipping requires 2-5 business days. Express services can manage overnight delivery, but we&#8217;re still measuring in days, not seconds. The atoms are slower than the bits.⁴</p>
<p>But neural rewiring? That takes weeks to months. Research shows habit formation averages 66 days, with individual experiences ranging from 18 to 254 days depending on complexity and consistency. Creating new neural pathways requires repeated activation, gradual strengthening of synaptic connections, and shifting from deliberate prefrontal cortex control to more automatic processing. This biological rewiring cannot be rushed. It requires time for structural changes in brain tissue.⁵</p>
<p><strong>The more physical and biological the substrate, the slower the change.</strong> Digital bits rearrange nearly instantaneously. Physical atoms must be transported through space. Living neurons require metabolic processes, protein synthesis, and structural remodeling that takes time.</p>
<h2>The Recursive Acceleration Problem</h2>
<p>Here&#8217;s where it gets really challenging: AI systems (virtual neurons, in effect) are improving faster than biological neurons can adapt to those improvements. This creates a recursive acceleration problem that previous technological revolutions didn&#8217;t face.</p>
<p>The evidence is stark. Fewer than 10% of U.S. companies actively use AI in production, and 68% of organizations move 30% or fewer AI experiments into full deployment. Yet 75% of knowledge workers globally use AI at work, with employees at over 90% of companies using personal AI tools, often without official approval.⁶ The technology exists. People want to use it. But institutions cannot adapt fast enough.</p>
<p>Even if AI development stopped today, it would take us 5-10 years to learn the habits and new ways of working needed to take advantage of the capabilities we already have. Organizations require 18-24 months just to develop new upskilling programs, and by the time they&#8217;re deployed, organizational needs have often changed. The World Economic Forum projects that 59% of the global workforce will require reskilling by 2030. That&#8217;s more than half the workforce within five years.⁷</p>
<p>Recent research from the St. Louis Fed reveals a troubling correlation: occupations with higher AI exposure experienced larger unemployment increases between 2022 and 2025, with a 0.47 correlation coefficient. This isn&#8217;t hypothetical anymore. The wetware bottleneck is producing structural unemployment right now.⁸</p>
<h2>Cyborgs Walk Among Us</h2>
<p>I think we need to recognize that we&#8217;re already post-human cyborgs (or at least humans partnered with computer sidekicks). I use my computer to calculate things, remember things, and now even summarize and organize unstructured data with LLMs. I&#8217;ve long been a proponent of &#8220;Computers should do what computers are good at so humans can do what humans are good at.&#8221;</p>
<p>The problem is that we have relatively low-bandwidth interfaces with our digital extensions. We type. We click. We read screens. These are slow, sequential processes compared to the speed at which our silicon partners operate. The mismatch between silicon speed (nanoseconds) and synaptic speed (weeks to months) represents a fundamental constraint on 21st-century progress.</p>
<h2>A Qualitative Difference</h2>
<p>The difference between automating the movement of atoms and accelerating the rewiring of neurons is qualitative rather than quantitative. We don&#8217;t yet have a clear concept of what the solution looks like.</p>
<p>Is it educational innovation? Brain-computer interfaces like Neuralink? Becoming post-human cyborgs with higher bandwidth? Nootropic drugs or genetic engineering for higher IQ? Is it a societal change that shifts the balance of learning and work? Modern work already requires more schooling than in the past on average. Do we move to a world with universal basic income supporting an ever-learning workforce?</p>
<p>Previous technological revolutions had clearer adaptation paths. The Industrial Revolution was about moving atoms in new ways: we built factories, trained workers for specific tasks, and created new economic structures around manufacturing. The Internet was about moving bits in new ways (as was the printing press before it): we learned to browse websites, send emails, and eventually work remotely.</p>
<p>But this? This is about virtual neurons adapting faster than biological neurons. That&#8217;s a different category of challenge entirely.</p>
<h2>The Cultural Obstacle</h2>
<p>The social and biological consequences feel bigger than previous innovations, particularly in American culture. We have such a deep concept of tying worth to work, thanks to the Protestant work ethic. Work provides not just income but identity, status, purpose, and self-worth. Research shows unemployment causes severe mental health decline.⁹</p>
<p>We&#8217;ll have to figure out how to overcome this if we want to survive the mental health hit of a post-work society. Productivity will increasingly be about building and adapting autonomous systems instead of doing repetitive tasks. That could lead to massive unemployment. We don&#8217;t know what to do with that, but we&#8217;ll need ways for people to create, not just be consumers. Pure consumption doesn&#8217;t lead to lasting happiness and could be a major pitfall for our collective mental health.</p>
<h2>A Hopeful Vision</h2>
<p>Yet there is cause for hope. In UBI experiments, most recipients continue to work; the main exceptions are three groups: students, parents of young children, and the retired or chronically ill. The possibility of lives made of more learning, caretaking, and recovery seems incredibly human and humane. That&#8217;s a world I want to live in.¹⁰</p>
<p>Meaningful work provides psychological benefits that extend far beyond income. But &#8220;work&#8221; doesn&#8217;t have to mean what it meant in the 20th century. It can mean learning. Caregiving. Creating. Recovering. Building community. All the things that make us human but that we&#8217;ve been too busy earning a living to fully embrace.¹¹</p>
<h2>The Path Forward</h2>
<p>We&#8217;re not going to get there without a combination of forces. We need policy (UBI in particular, or something like it) to provide the economic foundation. It would be nice to avoid massive unemployment, but I don&#8217;t think we can adapt fast enough for that. Market forces will likely force the issue through displacement. Grassroots movements will be crucial in changing attitudes toward work that have persisted for centuries.</p>
<p>The transition will be chaotic. Neural rewiring takes weeks to months. Organizational adaptation takes years. Cultural shifts around work identity could take generations. There will be a debate about whether there&#8217;s value in remaining purely human or whether we should embrace better brain-computer interfaces and biohacking. Does genetic engineering for higher IQ help us as a species? I don&#8217;t know.</p>
<p>I prefer to think that we have a solution through reconceptualizing work. But that doesn&#8217;t mean it will be easy.</p>
<h2>Individual Agency in an Era of Structural Change</h2>
<p>The good news for individuals is that focusing on your own adaptation will help you in either circumstance. Either you are part of moving us toward that beautiful vision of the future, or you are positioning yourself to be part of a small elite who still have marketable skills in a dystopian future.</p>
<p>Skills that remain valuable include creative problem-solving, complex communication, ethical reasoning, emotional intelligence, and adaptability itself. But more importantly, the ability to reconfigure your neural pathways, even if it takes weeks, is the meta-skill that enables everything else.¹²</p>
<p>I hope for the former scenario. And because everyone preparing and adapting would lead us toward that more humane future, I&#8217;m fundamentally optimistic. I&#8217;m committing to helping as many motivated people as I can successfully learn and transition.</p>
<h2>Conclusion</h2>
<p>We&#8217;ve moved from &#8220;bits vs. atoms&#8221; to &#8220;bits vs. brains.&#8221; The technology adoption curve now reveals a stark gap: the distance between digital capabilities and human ability to use those capabilities is growing and accelerating. Unlike logistics, we cannot simply automate human learning and organizational adaptation at scale.¹³</p>
<p>But recognizing the constraint is the first step toward addressing it. If wetware is the bottleneck, then investing in human adaptation, through education, through cultural change, and through new economic structures that support lifelong learning, becomes the most important work of our time.</p>
<p>The question isn&#8217;t whether we&#8217;ll face this transition. We&#8217;re already in it. The question is whether we&#8217;ll navigate it thoughtfully, building systems and cultures that support human flourishing, or whether we&#8217;ll let market forces and technological momentum carry us into a future we didn&#8217;t choose.</p>
<p>I believe we can choose. But we need to start now, with clear eyes about the challenge we face and the biological constraints we&#8217;re working with. The bits will keep accelerating. The atoms are largely solved. The neurons—our neurons—will adapt at their own pace.</p>
<p>Our job is to create the conditions where that pace is enough.</p>
<hr />
<p>¹ <a href="https://www.linkedin.com/pulse/logistics-automation-breakthrough-intelligent-supply-chain-akabot-koa9c">https://www.linkedin.com/pulse/logistics-automation-breakthrough-intelligent-supply-chain-akabot-koa9c</a>; <a href="https://transloads.co/warehouse-management-statistics/">Karadex data on 99.9% pick accuracy</a>; <a href="https://www.hairobotics.com/blog/top-3-trending-efficiency-and-productivity-benefits-warehouse-automation-2024">MHI data showing up to 85% productivity increases from warehouse automation</a></p>
<p>² <a href="https://www.artech-digital.com/blog/real-time-route-optimization-with-ai">Artech Digital: 20% delivery time reduction, 40% on-time rate improvement</a>; <a href="https://www.artech-digital.com/blog/real-time-route-optimization-with-ai">DHL India case study</a>; <a href="https://www.dispatchtrack.com/blog/route-optimization-guide">Various studies showing 20-30% delivery time improvements</a></p>
<p>³ <a href="https://www.intel.com/content/www/us/en/gaming/resources/cpu-clock-speed.html">https://www.intel.com/content/www/us/en/gaming/resources/cpu-clock-speed.html</a>; <a href="https://www.100ms.live/blog/network-latency">Network latency data</a></p>
<p>⁴ <a href="https://redstagfulfillment.com/fedex-ups-usps-delivery-times/">https://redstagfulfillment.com/fedex-ups-usps-delivery-times/</a></p>
<p>⁵ <a href="https://www.mendi.io/blogs/brain-health/how-long-does-it-take-to-rewire-your-brain-for-better-mental-health">https://www.mendi.io/blogs/brain-health/how-long-does-it-take-to-rewire-your-brain-for-better-mental-health</a>; <a href="https://www.ucl.ac.uk/news/2009/aug/how-long-does-it-take-form-habit">UCL study on habit formation</a>; <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11641623/">Systematic review and meta-analysis on habit formation timing</a></p>
<p>⁶ <a href="https://www.glean.com/perspectives/benefits-and-challenges-ai-adoption">https://www.glean.com/perspectives/benefits-and-challenges-ai-adoption</a>; <a href="https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part">Microsoft Work Trend Index 2024</a>; <a href="https://fortune.com/2025/08/19/shadow-ai-economy-mit-study-genai-divide-llm-chatbots/">MIT study on shadow AI economy</a></p>
<p>⁷ <a href="https://www.ere.net/articles/rapid-reskilling-at-scale-why-the-future-of-work-depends-on-it">https://www.ere.net/articles/rapid-reskilling-at-scale-why-the-future-of-work-depends-on-it</a>; <a href="https://www.weforum.org/press/2025/01/future-of-jobs-report-2025-78-million-new-job-opportunities-by-2030-but-urgent-upskilling-needed-to-prepare-workforces/">WEF Future of Jobs Report 2025</a></p>
<p>⁸ <a href="https://www.stlouisfed.org/on-the-economy/2025/aug/is-ai-contributing-unemployment-evidence-occupational-variation">https://www.stlouisfed.org/on-the-economy/2025/aug/is-ai-contributing-unemployment-evidence-occupational-variation</a></p>
<p>⁹ <a href="https://www.insidehighered.com/opinion/columns/higher-ed-gamma/2024/05/28/how-work-and-career-became-central-americans-identity">https://www.insidehighered.com/opinion/columns/higher-ed-gamma/2024/05/28/how-work-and-career-became-central-americans-identity</a>; <a href="https://www.apa.org/monitor/2020/10/toll-job-loss">APA research on unemployment and mental health</a>; <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11672120/">PMC study on unemployment and mental health</a></p>
<p>¹⁰ <a href="https://globalaffairs.org/commentary-and-analysis/blogs/multiple-countries-have-tested-universal-basic-income-and-it-works">https://globalaffairs.org/commentary-and-analysis/blogs/multiple-countries-have-tested-universal-basic-income-and-it-works</a>; <a href="https://www.givedirectly.org/2023-ubi-results/">GiveDirectly UBI study results</a>; <a href="https://www.ubie.org/debunking-the-social-hammock-myth-german-basic-income-experiment-shows-remarkable-results/">German Basic Income experiment</a></p>
<p>¹¹ <a href="https://peakpsych.com.au/resources-for-individuals/the-health-benefits-of-meaningful-work/">https://peakpsych.com.au/resources-for-individuals/the-health-benefits-of-meaningful-work/</a>; <a href="https://worldatwork.org/publications/evolve-magazine/meaningful-work-translates-to-better-well-being-and-performance">Research on meaningful work and well-being</a></p>
<p>¹² <a href="https://www.paybump.com/resources/6-future-proof-job-skills-in-the-age-of-ai">https://www.paybump.com/resources/6-future-proof-job-skills-in-the-age-of-ai</a></p>
<p>¹³ <a href="https://whatfix.com/blog/technology-adoption-curve/">https://whatfix.com/blog/technology-adoption-curve/</a></p>
<p>The post <a href="https://www.teaguehopkins.com/2025/10/bits-atoms-and-neurons/">Bits, Atoms, and Neurons</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2025/10/bits-atoms-and-neurons/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19806</post-id>	</item>
		<item>
		<title>Lessons for AI Adoption: What SaaS Taught Us About Enablement</title>
		<link>https://www.teaguehopkins.com/2025/06/lessons-for-ai-adoption-what-saas-taught-us-about-enablement/</link>
					<comments>https://www.teaguehopkins.com/2025/06/lessons-for-ai-adoption-what-saas-taught-us-about-enablement/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Thu, 12 Jun 2025 05:28:24 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=18143</guid>

					<description><![CDATA[<p>The recent surge in GenAI usage reminds me of the SaaS boom of the early 2000s. SaaS introduced a host of new tools; AI is doing the same today. In both cases, end users started to bypass traditional gatekeepers. Early SaaS adoption taught us a lesson. Departments bought tools on their own, often without IT’s [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/06/lessons-for-ai-adoption-what-saas-taught-us-about-enablement/">Lessons for AI Adoption: What SaaS Taught Us About Enablement</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The recent surge in GenAI usage reminds me of the SaaS boom of the early 2000s. SaaS introduced a host of new tools; AI is doing the same today. In both cases, end users started to bypass traditional gatekeepers.</p>
<p>Early SaaS adoption taught us a lesson. Departments bought tools on their own, often without IT’s knowledge. A team might sign up for Basecamp to manage projects because they could just put it on a department credit card, creating a sort of “shadow IT.” That ad-hoc approach sparked innovation but also bred waste, security gaps, and duplication of effort. Companies soon learned that if they wanted IT to be involved in choosing and deploying SaaS products, they had to move faster to keep up with users&#8217; new expectations of speed and experience.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" data-attachment-id="18146" data-permalink="https://www.teaguehopkins.com/2025/06/lessons-for-ai-adoption-what-saas-taught-us-about-enablement/knowledge-bridge/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/06/knowledge-bridge.png?fit=1024%2C1024&amp;ssl=1" data-orig-size="1024,1024" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Bridging the Knowledge Chasm" data-image-description="&lt;p&gt;This isometric digital illustration portrays a futuristic construction scene. Against a dark background, a team of workers in orange safety vests and hard hats are building a bridge made of glowing, translucent blue cubic blocks across a deep chasm. A yellow construction crane is visible on the left, and a drone hovers above, while several holographic data displays and graphs float around the scene, suggesting a data-driven or technologically advanced project. On the right, a small cart and ladder are present. The image symbolizes collaboration, innovation, and the use of technology to bridge gaps, particularly in knowledge or information, turning abstract data into a tangible connection.&lt;/p&gt;
" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/06/knowledge-bridge.png?fit=300%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/06/knowledge-bridge.png?fit=1024%2C1024&amp;ssl=1" class="size-medium wp-image-18146 alignleft" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/06/knowledge-bridge.png?resize=300%2C300&#038;ssl=1" alt="Digital illustration of workers in orange vests building a glowing blue, blocky bridge across a dark chasm, assisted by a crane, drone, and holographic data displays, symbolizing bridging a knowledge gap." width="300" height="300" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/06/knowledge-bridge.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/06/knowledge-bridge.png?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/06/knowledge-bridge.png?resize=768%2C768&amp;ssl=1 768w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/06/knowledge-bridge.png?w=1024&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" />AI is following a similar path, with a twist. Many AI tools are “back office.” One person can use a model to draft a memo, sift data, or write code, and nobody else may notice. Unlike team-oriented SaaS apps, this solo use stays hidden.</p>
<p>Hidden use discourages sharing. A worker who doubles output with AI may keep quiet to look like a star. That secrecy blocks collective learning. Most folks struggle while a few reap the benefits.</p>
<p>Without a plan, staff go underground. They feed sensitive data into unvetted models, without the protections of enterprise accounts. Many companies hand out a generic chatbot but skip the training and don&#8217;t even consider function-specific tools like Cursor (for writing code) or Jasper (for marketing). Some employees will start to build internal clones of those tools because they don&#8217;t have access to the ones they really want to use.</p>
<p>We need comprehensive AI enablement. More than access, more than the shiniest household name model, and more than individual usage. A solid program should:</p>
<ul>
<li><strong>Educate</strong>: Show employees and leaders what AI can do and how to use it well.</li>
<li><strong>Choose tools wisely</strong>: Select the right tools for the job-to-be-done. It&#8217;s not one-size-fits-all.</li>
<li><strong>Share knowledge</strong>: Promote open talk about wins, learnings, and best practices.</li>
<li><strong>Govern</strong>: Set rules that guard data, privacy, and ethics.</li>
</ul>
<p>Without enablement, AI stays fragmented, results lag, and teams fall behind. As with SaaS, the winners will be the firms that embrace and empower their people. The field moves fast; only continuous learning backed by strong enablement will keep you ahead.</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/06/lessons-for-ai-adoption-what-saas-taught-us-about-enablement/">Lessons for AI Adoption: What SaaS Taught Us About Enablement</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2025/06/lessons-for-ai-adoption-what-saas-taught-us-about-enablement/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">18143</post-id>	</item>
		<item>
		<title>Rethinking Ownership: AI Training and Copyright Battles</title>
		<link>https://www.teaguehopkins.com/2025/04/rethinking-ownership-ai-training-and-copyright-battles/</link>
					<comments>https://www.teaguehopkins.com/2025/04/rethinking-ownership-ai-training-and-copyright-battles/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Thu, 03 Apr 2025 15:57:38 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=17680</guid>

					<description><![CDATA[<p>Outside Meta&#8217;s London office today, authors are protesting what they call theft: the use of their books to train AI without permission. The scene would not have surprised John Perry Barlow, who nearly 30 years ago wrote that digital technology would make traditional copyright law obsolete. &#8220;Intellectual property law cannot be patched, retrofitted, or expanded [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/04/rethinking-ownership-ai-training-and-copyright-battles/">Rethinking Ownership: AI Training and Copyright Battles</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">Outside Meta&#8217;s London office today, </span><a href="https://www.theguardian.com/books/2025/apr/03/meta-has-stolen-books-authors-to-protest-in-london-against-ai-trained-using-shadow-library"><span style="font-weight: 400;">authors are protesting</span></a><span style="font-weight: 400;"> what they call theft: the use of their books to train AI without permission. The scene would not have surprised John Perry Barlow, who nearly 30 years ago wrote that digital technology would make traditional copyright law obsolete.</span></p>
<p><span style="font-weight: 400;">&#8220;Intellectual property law cannot be patched, retrofitted, or expanded to contain digitized expression,&#8221; warned Barlow in his 1994 essay &#8220;</span><a href="https://www.wired.com/1994/03/economy-ideas/"><span style="font-weight: 400;">The Economy of Ideas</span></a><span style="font-weight: 400;">.&#8221; As bestselling authors wave placards demanding compensation and Meta claims its AI training is &#8220;consistent with existing law,&#8221; we&#8217;re watching his prediction play out once again.</span></p>
<h2><span style="font-weight: 400;">The Man Who Saw It Coming</span></h2>
<p><span style="font-weight: 400;">Before most people had email addresses, Barlow &#8211; a Grateful Dead lyricist turned digital visionary &#8211; understood that the internet would fundamentally change how we think about ownership. He saw that once creative works became digital patterns rather than physical objects, our traditional ways of protecting and monetizing them would fall apart.</span></p>
<p><span style="font-weight: 400;">He was right. Today, Meta faces lawsuits for using LibGen, a &#8220;shadow library&#8221; of over 7.5 million books, to train its AI models. Authors like Ta-Nehisi Coates and Sarah Silverman are suing. Novelist AJ West says it feels like being &#8220;mugged.&#8221; But Meta argues that training AI on patterns within books is fundamentally different from copying those books.</span></p>
<h2><span style="font-weight: 400;">What Barlow Got Right</span></h2>
<p><span style="font-weight: 400;">Barlow&#8217;s key insights read like a prophecy:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Digital copying would become essentially free and unstoppable</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Traditional copyright law would fail to adapt to new technology</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Tension would grow between information&#8217;s desire to flow freely and creators&#8217; need for compensation</span></li>
</ol>
<p><span style="font-weight: 400;">He compared trying to protect digital information to &#8220;trying to keep water in a handful of sand&#8221; &#8211; a metaphor that perfectly captures the frustration of authors watching their works absorbed into AI training sets.</span></p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" data-attachment-id="17682" data-permalink="https://www.teaguehopkins.com/2025/04/rethinking-ownership-ai-training-and-copyright-battles/composition/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/04/Composition.png?fit=1024%2C1024&amp;ssl=1" data-orig-size="1024,1024" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Digital Data Flowing Through Hands" data-image-description="&lt;p&gt;A digital illustration depicting a pair of cupped hands against a dark, deep blue background. The hands cradle a mass of bright, glowing golden particles. Streams of these particles, transforming into luminous blueish-white binary digits (0s and 1s), flow both downwards past the fingers and upwards into the space above. The image evokes a futuristic and technological theme, symbolizing the interaction with, or the overwhelming nature of, digital data and information.&lt;/p&gt;
" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/04/Composition.png?fit=300%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/04/Composition.png?fit=1024%2C1024&amp;ssl=1" class="aligncenter size-medium wp-image-17682" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/04/Composition.png?resize=300%2C300&#038;ssl=1" alt="Cupped hands holding glowing golden particles from which streams of binary code (0s and 1s) flow upwards and downwards against a dark blue background." width="300" height="300" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/04/Composition.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/04/Composition.png?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/04/Composition.png?resize=768%2C768&amp;ssl=1 768w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/04/Composition.png?w=1024&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<h2><span style="font-weight: 400;">The Core Problem Remains Unsolved</span></h2>
<p><span style="font-weight: 400;">The Meta controversy highlights the central dilemma Barlow identified: In a digital world, how do we fairly compensate creators while acknowledging that information naturally wants to spread?</span></p>
<p><span style="font-weight: 400;">When novelist Kate Mosse joins protesters demanding payment for AI training use, she&#8217;s fighting the same battle Barlow described &#8211; trying to maintain traditional property rights in an age where creative works have become &#8220;patterns of ones and zeros&#8221; flowing through the digital world.</span></p>
<h2><span style="font-weight: 400;">A Way Forward?</span></h2>
<p><span style="font-weight: 400;">Barlow didn&#8217;t just predict problems &#8211; he suggested solutions. He envisioned new economic models where value would come from:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Real-time performance and experience</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Being first to market with ideas</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Service and support around creative works</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Direct relationships between creators and audiences</span></li>
</ul>
<p><span style="font-weight: 400;">Some of these models have emerged; musicians, for instance, now earn more from concerts than from recordings. But we haven&#8217;t found similar alternatives for authors and other creators whose works train AI systems.</span></p>
<h2><span style="font-weight: 400;">The Future We Need to Build</span></h2>
<p><span style="font-weight: 400;">The current standoff between Meta and authors shows we&#8217;re still caught between old and new worlds. Neither traditional copyright enforcement nor unrestricted AI training serves everyone&#8217;s interests.</span></p>
<p><span style="font-weight: 400;">Barlow might suggest that the solution lies not in choosing sides, but in developing new models that:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Recognize AI training as a legitimate use of creative works</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Provide fair compensation to creators</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Build sustainable creative ecosystems for the digital age</span></li>
</ul>
<p><span style="font-weight: 400;">Three decades ago, Barlow wrote that the digital revolution would force us to completely rethink how we value and protect creative work. Today&#8217;s AI copyright battles are just the latest development to prove he was right. The question is: Have we come up with any better solutions in the intervening 30 years?</span></p>
<p>The post <a href="https://www.teaguehopkins.com/2025/04/rethinking-ownership-ai-training-and-copyright-battles/">Rethinking Ownership: AI Training and Copyright Battles</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2025/04/rethinking-ownership-ai-training-and-copyright-battles/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">17680</post-id>	</item>
		<item>
		<title>How many colored squares are on this board?</title>
		<link>https://www.teaguehopkins.com/2025/03/how-many-colored-squares-are-on-this-board/</link>
					<comments>https://www.teaguehopkins.com/2025/03/how-many-colored-squares-are-on-this-board/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Wed, 12 Mar 2025 22:22:32 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=17467</guid>

					<description><![CDATA[<p>Playing with my kid last night, I stumbled across yet another problem that humans have an easy time solving, while LLMs seemed incapable of getting close to the correct answer. Prompt: How many colored squares are on this board? Do not count white squares. We asked 10 LLMs to count the number of colored squares [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/03/how-many-colored-squares-are-on-this-board/">How many colored squares are on this board?</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="relative">
<div class="prose text-pretty dark:prose-invert inline leading-normal break-words min-w-0 [word-break:break-word]">
<p><span style="font-weight: 400;">Playing with my kid last night, I stumbled across yet another problem that humans have an easy time solving, while LLMs seemed incapable of getting close to the correct answer.</span></p>
<h2><strong><em>Prompt: How many colored squares are on this board? Do not count white squares.</em></strong></h2>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" data-attachment-id="17470" data-permalink="https://www.teaguehopkins.com/2025/03/how-many-colored-squares-are-on-this-board/pxl_20250312_010129956/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956-scaled.jpg?fit=2560%2C1928&amp;ssl=1" data-orig-size="2560,1928" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;1.85&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;Pixel 6&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1741726889&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;6.81&quot;,&quot;iso&quot;:&quot;1765&quot;,&quot;shutter_speed&quot;:&quot;0.01667&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="Grid" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956-scaled.jpg?fit=300%2C226&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956-scaled.jpg?fit=1024%2C771&amp;ssl=1" class="aligncenter wp-image-17470 size-large" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956.jpg?resize=1024%2C771&#038;ssl=1" alt="A square white grid filled with translucent plastic tiles in red, yellow, blue, and green, creating a colorful mosaic pattern." 
width="1024" height="771" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956-scaled.jpg?resize=1024%2C771&amp;ssl=1 1024w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956-scaled.jpg?resize=300%2C226&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956-scaled.jpg?resize=768%2C578&amp;ssl=1 768w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956-scaled.jpg?resize=1536%2C1157&amp;ssl=1 1536w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/03/PXL_20250312_010129956-scaled.jpg?resize=2048%2C1542&amp;ssl=1 2048w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></p>
<p><span style="font-weight: 400;">We asked 10 LLMs to count the number of colored squares in this image. None of them got close. Below is the list, sorted by how far off they were, from 223,241% to 35%. Granted, these are Large Language Models, not specialized computer vision algorithms, which would no doubt solve this problem with ease. But it was still interesting to see how hard the problem was for them, and how confident some of them were in their wrong answers.</span></p>
<ol>
<li><span style="font-weight: 400;">GPT-4.5 &#8211; 781,695</span></li>
<li><span style="font-weight: 400;">Perplexity Sonar &#8211; 783,360</span></li>
<li><span style="font-weight: 400;">Gemini 2.0 Flash &#8211; 783,360</span></li>
<li><span style="font-weight: 400;">Claude Sonnet 3.7 Thinking &#8211; 160</span></li>
<li><span style="font-weight: 400;">Deep Research &#8211; 166-170</span></li>
<li><span style="font-weight: 400;">GPT-4o &#8211; 177</span></li>
<li><span style="font-weight: 400;">o3-mini &#8211; 150-180</span></li>
<li><span style="font-weight: 400;">Grok-2 &#8211; 196</span></li>
<li><span style="font-weight: 400;">Claude 3.7 Sonnet &#8211; 150-200</span></li>
<li><span style="font-weight: 400;">R1 &#8211; 225</span></li>
<li><span style="font-weight: 400;">Human &#8211; 350 (Correct)</span></li>
</ol>
<h2><span style="font-weight: 400;">GPT-4.5 &#8211; </span><span style="font-weight: 400;">781,695</span></h2>
<blockquote><p><span style="font-weight: 400;">The board contains 781,695 colored squares. White squares were excluded from the count.</span></p></blockquote>
<h2><span style="font-weight: 400;">Perplexity Sonar &#8211; 783,360</span></h2>
<blockquote><p><span style="font-weight: 400;">The result of 783,360 likely represents the count of individual colored pixels in the image rather than the number of colored squares on the board. To determine the number of colored squares, we need to account for the size of each square.</span></p>
<p><span style="font-weight: 400;">Each square in the game board appears to be composed of multiple pixels (e.g., a grid-like design). If you can provide the pixel dimensions of one square, I can help refine this calculation.</span></p></blockquote>
<h2><span style="font-weight: 400;">Gemini 2.0 Flash &#8211; </span><span style="font-weight: 400;">783,360</span></h2>
<blockquote><p><span style="font-weight: 400;">There are 783,360 colored squares on the board, not counting the white squares. A Python script using the PIL and NumPy libraries was used to arrive at this answer. The script loaded the image, converted it into an array, defined the RGB values for the white squares, and counted the non-white squares.</span></p></blockquote>
<h2><span style="font-weight: 400;">Claude Sonnet 3.7 Thinking &#8211; </span><span style="font-weight: 400;">160 </span></h2>
<blockquote><p><span style="font-weight: 400;">Based on the image shown, I can see a game board that appears to be Blokus or a similar tile-placement game. The board contains colored translucent plastic pieces in four colors: red, yellow, blue, and turquoise/teal.</span></p>
<p><span style="font-weight: 400;">After carefully examining the image, I can see that this is a square grid board with colored pieces arranged in various polyomino shapes (similar to Tetris pieces). The colored squares occupy much of the board, with white spaces showing the empty grid positions.</span></p>
<p><span style="font-weight: 400;">Counting each colored square individually:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Red squares: approximately 40</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Yellow squares: approximately 40</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Blue squares: approximately 38</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Turquoise/teal squares: approximately 42</span></li>
</ul>
<p><span style="font-weight: 400;">In total, there are approximately 160 colored squares on this board. This count excludes all white/empty squares as requested.</span></p>
<p><span style="font-weight: 400;">The colored pieces appear to be arranged in a competitive pattern typical of Blokus gameplay, where players try to place as many of their colored pieces as possible while blocking opponents.</span></p></blockquote>
<h2><span style="font-weight: 400;">Deep Research &#8211; </span><span style="font-weight: 400;">166-170</span></h2>
<blockquote>
<h3><span style="font-weight: 400;">Analysis of Colored Squares on Game Board</span></h3>
<p><span style="font-weight: 400;">Based on careful examination of the provided image, the board shows a colorful arrangement of translucent square pieces placed on a white grid. The board contains various colored squares in red, blue, yellow, and teal forming different patterns across the playing surface.</span></p>
<h4><span style="font-weight: 400;">Board Structure and Composition</span></h4>
<p><span style="font-weight: 400;">The image displays what appears to be a puzzle or game board with a grid layout containing colored translucent pieces. The board features a white background grid with square pieces in four distinct colors: red, blue, yellow, and teal (which the analysis code attempted to identify as green). These colored squares are arranged in various configurations across the board, creating an intricate pattern with some white spaces remaining empty. The colored pieces appear to be made of translucent plastic material, allowing some light to pass through while maintaining their vibrant coloration.</span></p>
<h4><span style="font-weight: 400;">Color Distribution Analysis</span></h4>
<p><span style="font-weight: 400;">According to the computational analysis of the image, the distribution of colored pixels shows a predominance of red elements, followed by yellow and then blue. The automated pixel counting identified:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Red elements: 83,758 pixels</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Yellow elements: 67,271 pixels</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Blue elements: 20,814 pixels</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Teal/green elements: The code detected only 8 pixels, which appears to be an error in color detection as teal pieces are clearly visible in the image</span></li>
</ul>
<p><span style="font-weight: 400;">The total number of colored pixels detected was 171,851. However, this pixel count does not directly translate to the number of colored square pieces on the board, as each piece contains multiple pixels.</span></p>
<h4><span style="font-weight: 400;">Square Count Methodology</span></h4>
<p><span style="font-weight: 400;">To determine the actual number of colored squares on the board as requested, we need to analyze the grid structure rather than merely counting pixels. Based on visual inspection, the board appears to be a standard square grid, similar to those used in games like Blokus or various puzzle games. Each colored piece occupies exactly one square position on the grid.</span></p>
<p><span style="font-weight: 400;">From the image, we can observe that the board contains a substantial number of colored squares arranged in various patterns. The grid appears to be approximately 16×16 in dimension, which would provide 256 total positions. Looking at the distribution and density of the colored squares on the board, approximately 65-70% of the grid positions are occupied by colored squares, with the remaining being white spaces.</span></p>
<h4><span style="font-weight: 400;">Total Colored Square Count</span></h4>
<p><span style="font-weight: 400;">Based on the visual analysis of the grid and the density of colored pieces, the total number of colored squares on this board is approximately 166-170 squares. This estimate is derived from analyzing the grid structure and the proportion of the board filled with colored pieces rather than the raw pixel count.</span></p>
<p><span style="font-weight: 400;">The board displays a fairly even distribution of the four colors (red, blue, yellow, and teal), though red appears to be the most prominent color based on both visual inspection and the pixel analysis. The pieces are arranged in what appears to be a strategic pattern, possibly representing a game in progress or a completed puzzle arrangement.</span></p>
<h3><span style="font-weight: 400;">Conclusion</span></h3>
<p><span style="font-weight: 400;">The board contains approximately 166-170 colored squares in total, not counting the white spaces. This count represents the actual physical square pieces placed on the board rather than pixel measurements. The colored squares create an engaging visual pattern with their arrangement of red, blue, yellow, and teal pieces distributed across the white grid surface.</span></p></blockquote>
<h2><span style="font-weight: 400;">GPT-4o &#8211; 177</span></h2>
<blockquote><p><span style="font-weight: 400;">To calculate the number of colored squares on this board, we need to count all the non-white squares in the image.</span></p>
<p><span style="font-weight: 400;">The board is a 14&#215;14 grid, which totals 196 squares. From visual inspection, the white squares are the empty spaces scattered across the board. By manually counting the white squares in the image, there are 19 white squares.</span></p>
<p><span style="font-weight: 400;">Subtracting the white squares from the total:</span></p>
<p><span style="font-weight: 400;">196 &#8211; 19 = 177</span></p>
<p><span style="font-weight: 400;">Thus, there are 177 colored squares on this board.</span></p></blockquote>
<h2><span style="font-weight: 400;">o3-mini &#8211; </span><span style="font-weight: 400;">150-180</span></h2>
<blockquote><p><span style="font-weight: 400;">The image shows a colorful grid-based puzzle board with transparent colored blocks arranged in a pattern. This appears to be a Blokus-style game or similar strategic board game.</span></p>
<p><span style="font-weight: 400;">Based on visual inspection, this is a square grid board with colored transparent pieces in red, yellow, blue, and teal/green. The board appears to be approximately a 16×16 grid, with many of the squares filled with colored pieces while others remain white (empty).</span></p>
<p><span style="font-weight: 400;">From carefully examining the image, I can estimate there are approximately 150-180 colored squares on the board</span><span style="font-weight: 400;">. Each colored square appears as a small transparent block positioned within the grid structure, and they are arranged in various geometric patterns across the board.</span></p>
<p><span style="font-weight: 400;">The white squares are not counted in this estimate, as specified in your query. The colored squares make up roughly 60-70% of the total board spaces, which suggests around 150-180 colored pieces if the board is indeed a 16×16 grid (which would have 256 total positions)</span><span style="font-weight: 400;">.</span></p></blockquote>
<h2><span style="font-weight: 400;">Grok-2 &#8211; 196</span></h2>
<blockquote><p><span style="font-weight: 400;">To count the colored squares on the board in the image, a visual inspection reveals that the board is filled with red, yellow, green, and blue squares. White squares are not to be counted.</span></p>
<p><span style="font-weight: 400;">The board appears to be a 14&#215;14 grid, which means there are 14&#215;14 = 196 total squares. Since the board is fully covered by colored squares (with no visible white spaces), the total number of colored squares is 196.</span></p></blockquote>
<h2><span style="font-weight: 400;">Claude 3.7 Sonnet &#8211; </span><span style="font-weight: 400;">150-200</span></h2>
<blockquote><p><span style="font-weight: 400;">The image shows a board with colored plastic squares arranged in a grid pattern. The squares are in four colors: red, blue, yellow, and teal/green, placed on a white background grid.</span></p>
<p><span style="font-weight: 400;">Based on the analysis of the image, there are numerous colored squares on the board. While an exact manual count would be challenging from visual inspection alone, the image processing analysis indicates there are approximately 774,627 colored pixels in the image.</span></p>
<p><span style="font-weight: 400;">However, this pixel count doesn&#8217;t directly translate to the number of colored squares. Looking at the board structure, it appears to be a standard Blokus or similar puzzle game board, which typically has around 225-400 squares total (depending on the specific game variant), with many of those positions filled with colored pieces.</span></p>
<p><span style="font-weight: 400;">From visual estimation of the grid pattern visible in the image, there appear to be roughly 150-200 colored squares on the board, not counting the white background squares.</span></p></blockquote>
<h2><span style="font-weight: 400;">R1 &#8211; 225</span></h2>
<blockquote><p><span style="font-weight: 400;">The image shows a game board filled with translucent colored plastic pieces arranged in a grid pattern. Counting each colored square individually:</span></p>
<p><span style="font-weight: 400;">Red squares: Approximately 57</span><span style="font-weight: 400;"><br />
</span><span style="font-weight: 400;">Blue squares: Approximately 55</span><span style="font-weight: 400;"><br />
</span><span style="font-weight: 400;">Yellow squares: Approximately 54</span><span style="font-weight: 400;"><br />
</span><span style="font-weight: 400;">Green/teal squares: Approximately 59</span></p>
<p><span style="font-weight: 400;">In total, there are approximately 225 colored squares on this board. This count excludes all white squares as instructed.</span></p>
<p><span style="font-weight: 400;">The board appears to be a tile-placement puzzle game similar to Blokus or Tetris-style puzzles, where different colored polyomino pieces are arranged on a grid. The colored squares are arranged in various patterns and shapes across the white game board.</span></p></blockquote>
<h2><span style="font-weight: 400;">Human &#8211; 350 (Correct)</span></h2>
<p><span style="font-weight: 400;">It’s a standard Blokus board: a 20&#215;20 grid, or 400 total squares. There are 84 pieces in Blokus, 21 in each of four colors. Counting up the shapes gives us 89 squares per color. 89 &#215; 4 = 356 colored squares, and 400 &#8722; 356 = 44 white ones.</span></p>
<p><span style="font-weight: 400;">Let’s double check. I count the sides of the grid as 20&#215;20. Going row by row, I count 50 white spaces, so we must be missing a few pieces from the set, but that makes for 350 colored spaces and 50 white spaces on the board in the picture.</span></p>
<p><span style="font-weight: 400;">Notably, I did the opposite of the prompt here: I counted the white squares, interpreting &#8220;do not count white squares&#8221; as meaning not to include them in the final answer, rather than as an instruction about how to approach the problem. I was even wrong in my initial calculation, and only caught the correct number after double-checking my work by another method, a frequent tactic for humans that LLMs don&#8217;t use as much.</span></p>
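<p><span style="font-weight: 400;">The arithmetic here is easy to verify deterministically. A minimal Python check, using the standard Blokus piece set (each color gets every free polyomino of sizes 1 through 5):</span></p>

```python
# Standard Blokus piece set per color: every free polyomino of size 1..5.
# 1 monomino, 1 domino, 2 trominoes, 5 tetrominoes, 12 pentominoes = 21 pieces.
pieces_by_size = {1: 1, 2: 1, 3: 2, 4: 5, 5: 12}

num_pieces = sum(pieces_by_size.values())
squares_per_color = sum(size * count for size, count in pieces_by_size.items())

print(num_pieces)                       # 21 pieces per color
print(squares_per_color)                # 89 squares per color
print(squares_per_color * 4)            # 356 colored squares if every piece is placed
print(20 * 20 - squares_per_color * 4)  # 44 white squares remaining
```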
<h2><span style="font-weight: 400;">What can we learn from this?</span></h2>
<p><span style="font-weight: 400;">This experiment highlights several important insights about the current state of LLMs and their visual perception capabilities:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;"><strong>Fundamental limitations in visual reasoning</strong>: Even advanced LLMs struggle with basic counting tasks that humans find relatively straightforward. The dramatic variance in answers (from 150 to over 780,000) demonstrates how far these models are from reliable visual comprehension</span><span style="font-weight: 400;">.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;"><strong>Domain knowledge matters</strong>: The human solver immediately recognized this as a standard Blokus board with specific dimensions (20&#215;20) and game pieces, which provided context for solving the problem accurately.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;"><strong>Confidence doesn&#8217;t equal accuracy</strong>: Several models provided extremely precise but wildly incorrect answers. GPT-4.5, Perplexity Sonar, and Gemini 2.0 Flash all confidently stated numbers in the 780,000 range without recognizing the implausibility of their results.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;"><strong>Methodological differences</strong>: Models approached the problem differently—some counted pixels rather than squares, others estimated grid dimensions incorrectly, and some made reasonable approximations but still fell short. This reveals how different architectures process and interpret visual information.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;"><strong>The human advantage</strong>: The human solver used domain-specific knowledge, spatial reasoning, and common sense verification (double-checking by counting white spaces) to arrive at the correct answer—cognitive skills that current AI systems don&#8217;t fully replicate.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;"><strong>Practical implications</strong>: For applications requiring precise visual counting or object identification, specialized computer vision algorithms remain vastly superior to general-purpose LLMs. This highlights the importance of using the right tool for specific tasks.</span></li>
</ol>
<p><span style="font-weight: 400;">This experiment serves as a humbling reminder that despite impressive advances in AI, fundamental visual reasoning tasks that humans master early in development remain challenging for even the most sophisticated language models.</span></p>
<h2><span style="font-weight: 400;">A Better Approach</span></h2>
<p><span style="font-weight: 400;">So how might we approach this problem with AI? Well, if you’re not writing your own code, using LLMs to generate code for a deterministic algorithm would probably be a significantly better approach for this counting problem. The experiment clearly shows that general-purpose LLMs struggle with direct visual counting tasks, but we know that specialized computer vision algorithms are being used for challenges more complicated than this in production environments.</span></p>
<p><span style="font-weight: 400;">For our case, a more effective approach would include:</span></p>
<h3><span style="font-weight: 400;">Adding Domain Knowledge</span></h3>
<p><span style="font-weight: 400;">The prompt could incorporate domain-specific knowledge about Blokus:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The standard board dimensions (20×20)</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Typical piece configurations and constraints</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Expected ranges for colored vs. white squares</span></li>
</ul>
<p><span style="font-weight: 400;">This would help validate results and catch errors that occurred in the LLM attempts.</span></p>
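<p><span style="font-weight: 400;">As a sketch of what that domain knowledge could look like in code (a hypothetical <code>validate_count</code> helper, assuming a standard 20&#215;20 Blokus board and the full 84-piece set):</span></p>

```python
BOARD_SIZE = 20
SQUARES_PER_COLOR = 89   # the 21 Blokus pieces of one color cover 89 cells
NUM_COLORS = 4

def validate_count(colored: int, white: int) -> bool:
    """Reject counts that no real Blokus board could produce."""
    total_cells = BOARD_SIZE * BOARD_SIZE           # 400 grid positions
    max_colored = SQUARES_PER_COLOR * NUM_COLORS    # 356 if every piece is placed
    return colored + white == total_cells and 0 <= colored <= max_colored

print(validate_count(350, 50))    # True: the human's answer passes
print(validate_count(783360, 0))  # False: pixel-level counts fail instantly
```

A check this simple would have flagged every answer in the 780,000 range before it ever reached the user.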
<h3><span style="font-weight: 400;">Computer Vision + Deterministic Algorithm from Scratch</span></h3>
<p><span style="font-weight: 400;">Instead of asking an LLM to interpret the image directly, you could use an LLM to generate code for a specialized computer vision pipeline that:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Preprocesses the image – Adjusting brightness/contrast and filtering noise</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Identifies the grid structure – Detecting it&#8217;s a standard Blokus 20×20 board</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Segments the image into individual squares using edge detection algorithms</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Classifies each square by color (colored vs. white) using color thresholds</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Implements counting logic with verification steps</span></li>
</ol>
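<p><span style="font-weight: 400;">A minimal sketch of steps 3&#8211;5 using only NumPy, under the assumption that the photo has already been cropped and straightened to the bare 20&#215;20 grid (a real pipeline would use OpenCV for steps 1&#8211;2):</span></p>

```python
import numpy as np

def count_colored_squares(img: np.ndarray, n: int = 20,
                          white_thresh: int = 200) -> int:
    """Count the non-white cells of an n-by-n board given as an RGB array."""
    h, w, _ = img.shape
    colored = 0
    for row in range(n):
        for col in range(n):
            # Step 3: segment out one grid cell
            cell = img[row * h // n:(row + 1) * h // n,
                       col * w // n:(col + 1) * w // n]
            # Step 4: classify by mean color; "white" means all channels bright
            if not np.all(cell.reshape(-1, 3).mean(axis=0) > white_thresh):
                colored += 1
    return colored  # Step 5: the count, ready for a validation pass

# Synthetic test board: 350 red cells and 50 white cells, 10 pixels per cell
board = np.full((200, 200, 3), 255, dtype=np.uint8)
cells = [(r, c) for r in range(20) for c in range(20)]
for r, c in cells[:350]:
    board[r * 10:(r + 1) * 10, c * 10:(c + 1) * 10] = (200, 40, 40)

print(count_colored_squares(board))  # 350
```

The threshold and grid size here are assumptions to be tuned against real photos, but the logic itself is deterministic and trivially testable, which is exactly what the direct LLM attempts lacked.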
<h3><span style="font-weight: 400;">Assemble from Open Source</span></h3>
<p><span style="font-weight: 400;">You could prompt an LLM to generate Python code using libraries like OpenCV (an open source computer vision library) for this specific task. For example:</span></p>
<p><span style="font-weight: 400;">Request code that uses OpenCV to:</span></p>
<ol>
<li><span style="font-weight: 400;">Detect the game board grid</span></li>
<li><span style="font-weight: 400;">Identify each square&#8217;s color</span></li>
<li><span style="font-weight: 400;">Count colored squares, excluding white</span></li>
<li><span style="font-weight: 400;">Validate results against Blokus game knowledge</span></li>
</ol>
<h2><span style="font-weight: 400;">Why This Works Better</span></h2>
<p><span style="font-weight: 400;">LLMs are much stronger at code generation than direct visual analysis. By having them generate deterministic algorithms, you leverage:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The precision of computer vision techniques designed specifically for object counting</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Domain knowledge about Blokus that humans naturally applied</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Deterministic verification steps that can catch errors</span></li>
</ol>
<p><span style="font-weight: 400;">These approaches create reproducible, (more) consistent solutions. </span></p>
<p><span style="font-weight: 400;">But if you only have one board to count with your child, maybe it’s just faster to let the human do it.</span></p>
</div>
</div>
<p>The post <a href="https://www.teaguehopkins.com/2025/03/how-many-colored-squares-are-on-this-board/">How many colored squares are on this board?</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2025/03/how-many-colored-squares-are-on-this-board/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">17467</post-id>	</item>
		<item>
		<title>Mastering Product Empathy: A 7-Skill Framework for Leaders</title>
		<link>https://www.teaguehopkins.com/2025/02/mastering-product-empathy-a-7-skill-framework-for-leaders/</link>
					<comments>https://www.teaguehopkins.com/2025/02/mastering-product-empathy-a-7-skill-framework-for-leaders/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Wed, 19 Feb 2025 03:17:41 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=17281</guid>

					<description><![CDATA[<p>How to Develop Deep Product Empathy Here&#8217;s a startling fact: Empathy among American college students has dropped 40% since 1980. Yet in our increasingly digital world, understanding our users has never been more crucial. When surveying 1,000 product managers across companies like Google, Apple, and Microsoft, empathy ranked second-to-last in valued hiring skills. We&#8217;re facing [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/02/mastering-product-empathy-a-7-skill-framework-for-leaders/">Mastering Product Empathy: A 7-Skill Framework for Leaders</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1><b>How to Develop Deep Product Empathy</b></h1>
<p><span style="font-weight: 400;">Here&#8217;s a startling fact: Empathy among American college students has dropped 40% since 1980. Yet in our increasingly digital world, understanding our users has never been more crucial. When surveying 1,000 product managers across companies like Google, Apple, and Microsoft, empathy ranked second-to-last in valued hiring skills. We&#8217;re facing an empathy crisis at precisely the moment we need it most.</span></p>
<p><span style="font-weight: 400;">But here&#8217;s the good news: empathy can be learned. While data drives decisions, it&#8217;s empathy that helps us ask the right questions and build products that truly matter. As Ryan Siemens, founder of Groove, puts it: &#8220;Without empathy, you almost guarantee you will miss out on insights about the best problem to solve.&#8221;</span></p>
<h2><b>Why Product Empathy Matters</b></h2>
<p><span style="font-weight: 400;">Product empathy isn&#8217;t just about feeling what users feel – it&#8217;s about translating that understanding into better products. Ken Norton, former Google product director, explains: &#8220;If you&#8217;re building something for someone else, you&#8217;ll be much more successful if you can identify with their needs first.&#8221;</span></p>
<p><span style="font-weight: 400;">But there&#8217;s a common trap: designing only for people like ourselves. As Ravi Mehta, former CPO at Tinder, warns: &#8220;It&#8217;s a failure of empathy to collapse all users into a single persona.&#8221; Great product managers can empathize with and design for people very different from themselves.</span></p>
<h2><b>The Love Pyramid: A Framework for Deep Empathy</b></h2>
<p><span style="font-weight: 400;">Product empathy builds on three foundational layers:</span></p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" data-attachment-id="17293" data-permalink="https://www.teaguehopkins.com/2025/02/mastering-product-empathy-a-7-skill-framework-for-leaders/love-pyramid/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/love-pyramid.jpg?fit=1024%2C768&amp;ssl=1" data-orig-size="1024,768" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="The Love Pyramid: Understanding, Intention, Action" data-image-description="&lt;p&gt;This image depicts a three-tiered pyramid model representing the learning process. The base layer, the largest section, is labeled &amp;#8220;Understanding,&amp;#8221; signifying the foundational knowledge required. The middle layer is labeled &amp;#8220;Intention,&amp;#8221; representing the desire and plan to apply the understanding. The top layer, &amp;#8220;Action,&amp;#8221; is the smallest section and symbolizes the culmination of the process where learning is put into practice. An icon of hands cradling a lightbulb is positioned within the Action section, illustrating the idea of innovation and implementation. The pyramid&amp;#8217;s visual hierarchy emphasizes the importance of building a solid foundation of understanding before progressing to intention and ultimately, action.&lt;/p&gt;
" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/love-pyramid.jpg?fit=300%2C225&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/love-pyramid.jpg?fit=1024%2C768&amp;ssl=1" class="aligncenter size-medium wp-image-17293" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/love-pyramid.jpg?resize=300%2C225&#038;ssl=1" alt="A pyramid diagram illustrating the three stages of learning: Understanding, Intention, and Action." width="300" height="225" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/love-pyramid.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/love-pyramid.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/love-pyramid.jpg?w=1024&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><b>Understanding</b><span style="font-weight: 400;">: The ability to truly comprehend others&#8217; experiences</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Intention</b><span style="font-weight: 400;">: The conscious choice to act on that understanding</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Action</b><span style="font-weight: 400;">: Converting empathy into tangible product decisions</span></li>
</ol>
<p><span style="font-weight: 400;">Remember: Love isn&#8217;t just a noun – it&#8217;s a verb. Each layer supports the ones above it, and missing any layer makes the others less effective.</span></p>
<h2><b>The 7 Core Skills of Product Empathy</b></h2>
<h3><b>Understanding Layer</b></h3>
<h4><b>1. Emotional Literacy</b></h4>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Learn to read emotions on faces (even through Zoom)</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Practice with tools like Berkeley&#8217;s Greater Good Science Center quiz</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Look for subtle cues in user interviews and team meetings</span></li>
</ul>
<h4><b>2. Perspective Taking</b></h4>
<p><span style="font-weight: 400;">Follow this three-step process:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Look for emotional signs</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Imagine yourself in their situation</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Test your understanding by seeking feedback</span></li>
</ol>
<h4><b>3. Moral Imagination</b></h4>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Practice empathy through fiction and entertainment</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Research shows reading fiction increases empathy</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Apply these insights to real-world product scenarios</span></li>
</ul>
<h3><b>Intention Layer</b></h3>
<h4><b>4. Moral Identity</b></h4>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Develop a personal mantra (mine is &#8220;Cultivate awareness, love everyone&#8221;)</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Use it to guide product decisions</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Let it anchor your leadership style</span></li>
</ul>
<h4><b>5. Self Regulation</b></h4>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Develop practices to prevent empathy burnout</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Manage stress to stay engaged during difficult decisions</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Remember: You can&#8217;t help others if you&#8217;re depleted</span></li>
</ul>
<h3><b>Action Layer</b></h3>
<h4><b>6. Practicing Kindness</b></h4>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Build small acts of kindness into your daily routine</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Use the habit loop: cue → routine → reward</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Let the positive impact on others be your motivation</span></li>
</ul>
<h4><b>7. Moral Courage</b></h4>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Speak up for users when they&#8217;re not in the room</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Prepare responses for common rationalizations</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Practice difficult conversations with peers</span></li>
</ul>
<h2><b>Bringing Together Love and Data</b></h2>
<p><span style="font-weight: 400;">Here&#8217;s where many product managers get stuck: they see empathy and data as opposing forces. They&#8217;re not. Data validates empathy and empathy gives meaning to data.</span></p>
<p><span style="font-weight: 400;">When you&#8217;re practicing perspective taking, getting feedback isn&#8217;t just good practice – it&#8217;s data collection. When you&#8217;re reading user feedback, emotional literacy helps you see beyond the words to the underlying needs.</span></p>
<h2><b>Moving Forward</b></h2>
<p><span style="font-weight: 400;">The empathy deficit in product management is real, but it&#8217;s not insurmountable. Start small:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Pick one skill to practice this week</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Set up regular user interviews</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Share these practices with your team</span></li>
</ul>
<p><span style="font-weight: 400;">Remember: The goal isn&#8217;t perfection; it’s progress. As Thich Nhat Hanh says, &#8220;Loving without knowing how to love wounds the ones we love.&#8221; We owe it to our users to love skillfully.</span></p>
<p><span style="font-weight: 400;">By combining deep empathy with solid data, you can create products that don&#8217;t just work well – they change lives.</span></p>
<p><span style="font-weight: 400;">Want to get started? Check out these resources:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><a href="https://greatergood.berkeley.edu/quizzes/ei_quiz"><span style="font-weight: 400;">Berkeley&#8217;s Emotional Intelligence Quiz</span></a></li>
<li style="font-weight: 400;" aria-level="1"><a href="https://online.virginia.edu/course/ethical-leadership-through-giving-voice-values"><span style="font-weight: 400;">UVA&#8217;s Ethical Leadership Course</span></a></li>
<li style="font-weight: 400;" aria-level="1"><a href="https://charlesduhigg.com/the-power-of-habit/"><span style="font-weight: 400;">The Habit Loop by Charles Duhigg</span></a></li>
</ul>
<p>The post <a href="https://www.teaguehopkins.com/2025/02/mastering-product-empathy-a-7-skill-framework-for-leaders/">Mastering Product Empathy: A 7-Skill Framework for Leaders</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2025/02/mastering-product-empathy-a-7-skill-framework-for-leaders/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">17281</post-id>	</item>
		<item>
		<title>How AI Democratization is Reshaping Business Strategy</title>
		<link>https://www.teaguehopkins.com/2025/02/how-ai-democratization-is-reshaping-business-strategy/</link>
					<comments>https://www.teaguehopkins.com/2025/02/how-ai-democratization-is-reshaping-business-strategy/#respond</comments>
		
		<dc:creator><![CDATA[Teague Hopkins]]></dc:creator>
		<pubDate>Mon, 10 Feb 2025 06:31:59 +0000</pubDate>
				<category><![CDATA[Main]]></category>
		<guid isPermaLink="false">https://www.teaguehopkins.com/?p=17285</guid>

					<description><![CDATA[<p>Lessons from YouTube Remember when creating professional videos required expensive equipment, technical expertise, and a massive studio budget? Today, teenagers with smartphones are building million-dollar content empires from their bedrooms. This transformation didn&#8217;t just change how we create media—it revolutionized entire industries. It wasn&#8217;t the first time, either; this has been a common pattern for emerging [&#8230;]</p>
<p>The post <a href="https://www.teaguehopkins.com/2025/02/how-ai-democratization-is-reshaping-business-strategy/">How AI Democratization is Reshaping Business Strategy</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1><span style="font-weight: 400;">Lessons from YouTube</span></h1>
<p><span style="font-weight: 400;">Remember when creating professional videos required expensive equipment, technical expertise, and a massive studio budget? Today, teenagers with smartphones are building million-dollar content empires from their bedrooms. This transformation didn&#8217;t just change how we create media—it revolutionized entire industries. It wasn&#8217;t the first time, either; this has been a common pattern for emerging technologies going all the way back to the printing press.</span></p>
<p><span style="font-weight: 400;">Now, artificial intelligence is following the same path. Just as YouTube and affordable cameras democratized video production, AI is transforming from an exclusive tool of tech giants into technology that any business can leverage. The costs are dropping, the technology is becoming more accessible, and the barriers to entry are crumbling faster than anyone predicted.</span></p>
<p><span style="font-weight: 400;">For business leaders, this creates an urgent question: How do you position your company to benefit from this transformation?</span></p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" data-attachment-id="17290" data-permalink="https://www.teaguehopkins.com/2025/02/how-ai-democratization-is-reshaping-business-strategy/robot/" data-orig-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/robot.jpg?fit=1024%2C768&amp;ssl=1" data-orig-size="1024,768" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="The Rise of AI: Human Confronting Technological Giant" data-image-description="&lt;p&gt;This stylized illustration depicts a person confronting a colossal robot figure imbued with symbols of artificial intelligence, including &amp;#8220;AI&amp;#8221; logos, gears, and digital icons. The robot stands tall against a backdrop of a modern city skyline silhouetted by a dramatic sunset. The scene evokes a sense of awe and contemplation about the future of humanity in the age of AI, highlighting both the opportunities and challenges it presents. The person&amp;#8217;s small stature compared to the robot emphasizes the scale of this technological advancement and invites reflection on the relationship between humans and artificial intelligence. The vibrant color palette and geometric shapes contribute to the artwork&amp;#8217;s futuristic and thought-provoking atmosphere.&lt;/p&gt;
" data-image-caption="" data-medium-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/robot.jpg?fit=300%2C225&amp;ssl=1" data-large-file="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/robot.jpg?fit=1024%2C768&amp;ssl=1" class="aligncenter wp-image-17290 size-medium" src="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/robot.jpg?resize=300%2C225&#038;ssl=1" alt="A person stands before a giant robot adorned with AI symbols, against a backdrop of a cityscape and a vibrant sunset." width="300" height="225" srcset="https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/robot.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/robot.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.teaguehopkins.com/wp-content/uploads/sites/11/2025/02/robot.jpg?w=1024&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<h2><span style="font-weight: 400;">What Does AI Democratization Really Mean?</span></h2>
<p><span style="font-weight: 400;">AI democratization is the process of making artificial intelligence technology accessible to businesses and individuals of all sizes, not just tech giants with massive budgets. It&#8217;s happening through three main drivers:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Dramatically lower hardware requirements than initially expected</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The rise of open-source models that anyone can use and adapt</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Reduction in the technical knowledge needed to make use of AI capabilities</span></li>
</ul>
<p><span style="font-weight: 400;">This shift means that small businesses can now access AI capabilities that were previously reserved for companies with multi-million dollar budgets.</span></p>
<h2><span style="font-weight: 400;">The YouTube Revolution: A Preview of AI&#8217;s Future</span></h2>
<p><span style="font-weight: 400;">Before YouTube, video production was a gated community. Only established media companies could afford to create and distribute professional content. Sound familiar? It&#8217;s exactly where AI was just a few years ago.</span></p>
<p><span style="font-weight: 400;">YouTube changed everything by:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Making distribution free and global</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Allowing creators to monetize directly</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Creating an ecosystem where anyone could compete with traditional media</span></li>
</ul>
<h2><span style="font-weight: 400;">6 Striking Parallels Between Video and AI Democratization</span></h2>
<h3><span style="font-weight: 400;">1. Democratization of Tools</span></h3>
<p><b>Then:</b><span style="font-weight: 400;"> Professional video went from requiring expensive cameras and editing suites to being possible with a smartphone. </span></p>
<p><b>Now:</b><span style="font-weight: 400;"> AI is moving from requiring massive data centers to running on standard computers and even phones.</span></p>
<h3><span style="font-weight: 400;">2. Rise of Platforms as Ecosystems</span></h3>
<p><b>Then:</b><span style="font-weight: 400;"> YouTube created a complete ecosystem for creators, viewers, and advertisers. </span></p>
<p><b>Now:</b><span style="font-weight: 400;"> Platforms like Hugging Face and GitHub are becoming the &#8220;YouTube of AI,&#8221; where developers share models and businesses find solutions.</span></p>
<h3><span style="font-weight: 400;">3. Explosion of Niche Applications</span></h3>
<p><b>Then:</b><span style="font-weight: 400;"> YouTubers created content for every conceivable interest, from knitting tutorials to urban exploration. </span></p>
<p><b>Now:</b><span style="font-weight: 400;"> Businesses are developing AI solutions for hyper-specific needs, from local agriculture to specialized education.</span></p>
<h3><span style="font-weight: 400;">4. Shift from Hardware to Software</span></h3>
<p><b>Then:</b><span style="font-weight: 400;"> Success became about creativity and editing skills, not camera quality. </span></p>
<p><b>Now:</b><span style="font-weight: 400;"> AI success is increasingly about how you apply models, not how much computing power you have.</span></p>
<h3><span style="font-weight: 400;">5. Empowerment of Small Players</span></h3>
<p><b>Then:</b><span style="font-weight: 400;"> Individual creators competed successfully with major media companies. </span></p>
<p><b>Now:</b><span style="font-weight: 400;"> Small businesses are using AI to compete with larger corporations in customer service, content creation, and analytics.</span></p>
<h3><span style="font-weight: 400;">6. Rapid Innovation Cycles</span></h3>
<p><b>Then:</b><span style="font-weight: 400;"> Video technology evolved rapidly from HD to 4K to live streaming. </span></p>
<p><b>Now:</b><span style="font-weight: 400;"> AI capabilities are advancing at an even faster pace, with new breakthroughs monthly.</span></p>
<h2><span style="font-weight: 400;">What This Means for Your Business</span></h2>
<h3><span style="font-weight: 400;">Key Actions to Take Now:</span></h3>
<ol>
<li style="font-weight: 400;"><b>Start Small but Start Now</b>
<ul>
<li style="font-weight: 400;">Begin with readily available AI tools for specific tasks</li>
<li style="font-weight: 400;">Focus on solving real business problems, not chasing technology</li>
</ul>
</li>
<li style="font-weight: 400;"><b>Build on Platforms</b>
<ul>
<li style="font-weight: 400;">Use established AI platforms rather than building from scratch</li>
<li style="font-weight: 400;">Look for solutions that integrate with your existing systems</li>
</ul>
</li>
<li style="font-weight: 400;"><b>Focus on Application, Not Technology</b>
<ul>
<li style="font-weight: 400;">Success will come from how you use AI, not just having it</li>
<li style="font-weight: 400;">Invest in understanding your specific use cases</li>
</ul>
</li>
<li style="font-weight: 400;"><b>Prepare for Rapid Change</b>
<ul>
<li style="font-weight: 400;">Do not tie your AI strategy to one model or company</li>
<li style="font-weight: 400;">Build systems that can evaluate and adopt new capabilities (see also: <a style="font-weight: 400;" href="https://github.com/prolego-team/pdd">performance-driven development</a>)</li>
</ul>
</li>
<li style="font-weight: 400;"><b>Watch for Oversaturation</b>
<ul>
<li style="font-weight: 400;">AI will be added to everything until AI alone isn&#8217;t a differentiator</li>
<li style="font-weight: 400;">Look for unique applications in your industry and differentiate through expertise, not just technology</li>
</ul>
</li>
</ol>
<h2><span style="font-weight: 400;">The Path Forward</span></h2>
<p><span style="font-weight: 400;">The democratization of AI isn&#8217;t just making technology more accessible; it&#8217;s reshaping how businesses compete. Just as YouTube created opportunities for new types of media businesses, AI democratization will create new business models and opportunities.</span></p>
<p><span style="font-weight: 400;">The winners won&#8217;t necessarily be the companies with the biggest AI budgets, but those who best understand how to apply AI to solve real problems for their customers. The time to start preparing for this future is now.</span></p>
<p><span style="font-weight: 400;">Remember: YouTube didn&#8217;t kill Hollywood, it created a whole new entertainment ecosystem alongside it. Similarly, AI democratization won&#8217;t eliminate the need for expertise, but it will change how we think about, access, and apply artificial intelligence in business.</span></p>
<p><span style="font-weight: 400;">The question isn&#8217;t whether to participate in this transformation, but how to position your business to benefit from it.</span></p>
<p>The post <a href="https://www.teaguehopkins.com/2025/02/how-ai-democratization-is-reshaping-business-strategy/">How AI Democratization is Reshaping Business Strategy</a> appeared first on <a href="https://www.teaguehopkins.com">Teague Hopkins</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.teaguehopkins.com/2025/02/how-ai-democratization-is-reshaping-business-strategy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">17285</post-id>	</item>
	</channel>
</rss>
