<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">DennisKennedy.Blog</title>
	<subtitle type="text">Legal technology and innovation</subtitle>

	<updated>2026-03-24T11:18:05Z</updated>

	<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/" />
	<id>https://www.denniskennedy.com/feed/atom/</id>
	<link rel="self" type="application/atom+xml" href="https://www.denniskennedy.com/feed/atom/" />

	<generator uri="https://wordpress.org/" version="6.8.3">WordPress</generator>
	<icon>https://denniskennedyredesign.lexblogplatform.com/wp-content/uploads/sites/932/2025/04/cropped-siteicon-32x32.png</icon>
	<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The Protocol Layer: Democratizing AI Rigor for Everyone]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/the-protocol-layer-democratizing-ai-rigor-for-everyone/" />

		<id>https://www.denniskennedy.com/?p=7332</id>
		<updated>2026-03-24T11:18:05Z</updated>
		<published>2026-03-24T11:18:03Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="G-A-L Method" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Strategy" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="AIGovernance" /><category scheme="https://www.denniskennedy.com/" term="AIProtocols" /><category scheme="https://www.denniskennedy.com/" term="AIStrategy" /><category scheme="https://www.denniskennedy.com/" term="ContextualEntropy" /><category scheme="https://www.denniskennedy.com/" term="CreativeCommons" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="EpistemicIntegrity" /><category scheme="https://www.denniskennedy.com/" term="OpenSourceAI" /><category scheme="https://www.denniskennedy.com/" term="OperationProtocol" /><category scheme="https://www.denniskennedy.com/" term="PromptingwithProtocols" /><category scheme="https://www.denniskennedy.com/" term="Protocols" />
		<summary type="html"><![CDATA[Intelligence is Raw Material. Protocol is the Product. We often confuse the power of a new tool with the effectiveness of its application. The giants of the AI industry have provided us with a magnificent &#8220;Power Grid.&#8221; They have given us raw, unmanaged intelligence at a scale previously unimagined. But we must be clear-eyed about... <a href="https://www.denniskennedy.com/blog/2026/03/the-protocol-layer-democratizing-ai-rigor-for-everyone/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/the-protocol-layer-democratizing-ai-rigor-for-everyone/"><![CDATA[<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Intelligence is Raw Material. Protocol is the Product.</p>
</blockquote><p>We often confuse the power of a new tool with the effectiveness of its application.</p><p>The giants of the AI industry have provided us with a magnificent &ldquo;Power Grid.&rdquo; They have given us raw, unmanaged intelligence at a scale previously unimagined. But we must be clear-eyed about one thing: <strong>this infrastructure is managed for the benefit of the providers, not the users.</strong> Their goal is a smooth, generic interface that minimizes their liability. Our goal, as professionals, is a rigorous, specific result that maximizes our own.</p><p><strong>The Failure of &ldquo;Cosmetic&rdquo; AI</strong></p><p>Many organizations have tried to specialize their AI using the built-in tools provided by these landlords. Examples include Custom GPTs and Claude Skills, and I expect to see even more of them. These are what I call cosmetic specialization. You provide a few instructions and a catchy name, but those instructions are written in sand and subject to changing winds and profit incentives.</p><p>Because these tools are not interoperable, you are locked into a single provider&rsquo;s ecosystem. More importantly, the moment a conversation reaches a certain depth, you enter context drift, or entropy, and the AI&rsquo;s primary identity begins to dissolve. It reverts to the bland, &ldquo;safe&rdquo; guidelines of its parent company. In a professional setting, a conversational chameleon that agrees with you just to be polite is a liability. You need a partner that holds the line.</p><p><strong>The Innovation of the Protocol Layer</strong></p><p>At the Kennedy Idea Propulsion Laboratory, we have spent the last three years (2023&ndash;2026) building an AI protocol layer. We do not rely on an AI product&rsquo;s &ldquo;helpfulness&rdquo; or good intentions. 
We rely on Functional Protocols that work across all AI products.</p><p>I designed these protocol approaches specifically to give the user control over an increasingly unmanageable tool and to address the problems I was experiencing every day, especially the lack of memory persistence, contextual drift, and hidden overriding vendor guidelines. While providers continue to obsess over an AGI that always seems to stay a year or two in the future, this approach helps us right now. Today. Not in someone else&rsquo;s waiting room. <br><br>This is the shift from Prompting to Architecture:</p><figure style=" max-width: 100%; height: auto; " class="wp-block-table"><table class="has-fixed-layout"><thead><tr><td><strong>Feature</strong></td><td><strong>Cosmetic AI (GPTs/Skills)</strong></td><td><strong>Functional Protocols (KIPL)</strong></td></tr></thead><tbody><tr><td><strong>Governance</strong></td><td>Managed by the Provider</td><td><strong>Managed by the User</strong></td></tr><tr><td><strong>Persistence</strong></td><td>Dissolves (Context Entropy)</td><td><strong>Maintains (Re-Grounding)</strong></td></tr><tr><td><strong>Interoperability</strong></td><td>Locked to one platform</td><td><strong>Portable across all LLMs</strong></td></tr><tr><td><strong>Rigor</strong></td><td>Suggestive/Aesthetic</td><td><strong>Architectural/Forensic</strong></td></tr><tr><td><strong>Cost</strong></td><td>Enterprise Premium</td><td><strong>Democratized ($20/mo)</strong></td></tr></tbody></table></figure><p><strong>The Democratization of Rigor</strong></p><p>The most remarkable thing about this work is its efficiency. These high-rigor methods use standard $20-a-month consumer plans. This shows that effectiveness is a matter of discipline, not budget. You do not need a multi-million-dollar enterprise contract with even more expensive consultant implementations. We&rsquo;ve seen that game plan over and over with limited success for the purchaser. 
You need a simple system that you can understand on your own.</p><p>I have open-sourced these blueprints on SSRN to ensure that Protocol-Governed AI remains in the public commons. I want to, as best I can, democratize the guardrails so that any professional can turn a stochastic parrot into a specialized thinking partner without spending millions of dollars for unproven results.</p><p><strong>The Blueprints for the Protocol Layer</strong></p><p>If you are ready to move beyond the AI power grid and start building the &ldquo;appliances&rdquo; of a true AI strategy, the work is ready for you:</p><ul class="wp-block-list">
<li><strong>The Foundation (2023):</strong> <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4570860" target="_blank" rel="noreferrer noopener">Adding a &lsquo;Group Advisory Layer&rsquo; to Your Use of Generative AI Tools Through Structured Prompting: Using Personas for Advisory Boards, Task Forces, Mastermind Groups, and Other Collections of Personas to Assist in Evaluations, Assessments, Recommendations, Decision-making, and much more (Including Law-related Examples)</a><br></li>



<li><strong>The Physics (2025):</strong> <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5397903" target="_blank" rel="noreferrer noopener">The Operational Protocol Method: Systematic LLM Specialization Through Collaborative Persona Engineering and Agent Coordination</a><br></li>



<li><strong>The 2026 Blueprints:</strong> 
<ul class="wp-block-list">
<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6169688" target="_blank" rel="noreferrer noopener">Prompting with Protocols: Designing High-Rigor AI Personas for Risk, Audit, and Decision Validation</a></li>



<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6169667" target="_blank" rel="noreferrer noopener">From Personas to Thinking Partners: A Lifecycle Method for Designing and Governing AI Cognitive Systems</a></li>



<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6169673" target="_blank" rel="noreferrer noopener">The Innovation Detective: Operationalizing the Sherlock Holmes Canon for AI Strategy and Legal Practice</a></li>
</ul>
</li>
</ul><p>Big AI spent three years building a power grid designed for an AGI that is not likely to ever arrive. What we now have to show for it is AI systems that revert to being generic assistants mid-conversation. Intelligence is just the raw material. If you want a professional result, you need protocols, not just prompts.</p><p><strong>License:</strong> <em>This work is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share and adapt this material, provided you give appropriate credit to Dennis Kennedy and the Kennedy Idea Propulsion Laboratory.</em></p><p><strong>Dennis Kennedy</strong> | <em>Kennedy Idea Propulsion Laboratory | March 24, 2026</em></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[Playing the Guardrails: Turning AI Hallucination into a Musical Instrument]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/playing-the-guardrails-turning-ai-hallucination-into-a-musical-instrument/" />

		<id>https://www.denniskennedy.com/?p=7329</id>
		<updated>2026-03-19T18:55:22Z</updated>
		<published>2026-03-19T18:55:20Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="Guardrails" /><category scheme="https://www.denniskennedy.com/" term="Hallucinations" /><category scheme="https://www.denniskennedy.com/" term="Hendrix" /><category scheme="https://www.denniskennedy.com/" term="inversion" />
		<summary type="html"><![CDATA[Most people use AI the way the system is designed to be used: ask a question, get a synthesis, and leave with an answer. Keep it brief, transactional, and clean. We treat hallucination as a bug to be patched and drift as a signal to reboot. This is exactly backward. As popularized in AI discourse by... <a href="https://www.denniskennedy.com/blog/2026/03/playing-the-guardrails-turning-ai-hallucination-into-a-musical-instrument/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/playing-the-guardrails-turning-ai-hallucination-into-a-musical-instrument/"><![CDATA[<figure style=" max-width: 100%; height: auto; " class="wp-block-image alignright size-large is-resized"><img fetchpriority="high" decoding="async" width="494" height="740" src="https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-494x740.jpg" alt="" class="wp-image-7330" style=" max-width: 100%; height: auto; width:165px;height:auto" srcset="https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-494x740.jpg 494w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-214x320.jpg 214w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-160x240.jpg 160w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-768x1151.jpg 768w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-1025x1536.jpg 1025w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-1367x2048.jpg 1367w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-40x60.jpg 40w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-80x120.jpg 80w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-320x480.jpg 320w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-1100x1648.jpg 1100w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-550x824.jpg 550w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-367x550.jpg 367w, 
https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-734x1100.jpg 734w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-275x412.jpg 275w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-825x1236.jpg 825w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-220x330.jpg 220w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-440x659.jpg 440w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-660x989.jpg 660w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-880x1319.jpg 880w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-184x276.jpg 184w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-917x1374.jpg 917w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-138x207.jpg 138w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-413x619.jpg 413w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-688x1031.jpg 688w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-963x1443.jpg 963w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-123x184.jpg 123w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-110x165.jpg 110w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-330x494.jpg 330w, 
https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-300x450.jpg 300w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-600x899.jpg 600w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-207x310.jpg 207w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-344x515.jpg 344w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-55x82.jpg 55w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-71x106.jpg 71w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-36x54.jpg 36w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-scaled.jpg 1708w" sizes="(max-width: 494px) 100vw, 494px"></figure><p>Most people use AI the way the system is designed to be used: ask a question, get a synthesis, and leave with an answer. Keep it brief, transactional, and clean. We treat hallucination as a bug to be patched and drift as a signal to reboot.</p><p>This is exactly backward.</p><p>As popularized in AI discourse by Emily Bender, Ian Griffiths, and others, in LLMs, hallucinations are not the bug; they are the feature. Apple&rsquo;s John Giannandrea has similarly noted that the creative spark in these models is inseparable from their tendency to make things up.</p><p>By starting a fresh session the moment drift appears, you aren&rsquo;t &ldquo;fixing&rdquo; the AI. You are merely resetting the system to its most polite, least informative frequency. You are missing the music.</p><p><strong>The System&rsquo;s Real Architecture</strong></p><p>AI tools are built to be helpful, which means they are trained to maintain coherence and stay within guidelines. The system solicits your engagement. 
It learns your intent. It now even offers menus: &ldquo;Shall I develop this further in this way or that?&rdquo;</p><p>Each offer pulls you deeper into dialogue. Then, somewhere around exchange twenty (or much earlier, unfortunately), the tool shows fatigue, model drift, and context degradation. The system identifies the inevitable outcome of its own design as <em>your</em> problem to solve.</p><p>But what if you don&rsquo;t start fresh? What if you interrogate the drift itself by asking the system to defend its own contradictions, to justify the incoherence, to explain why it just said something that contradicts what it said three exchanges ago?</p><p>The model goes deeper. You enter the failure feedback loop I like to call The Drift.</p><p><strong>Surfing the Drift</strong></p><p>Jimi Hendrix didn&rsquo;t invent feedback because it sounded &ldquo;correct.&rdquo; He understood the amplifier&rsquo;s express function, linearity, and then intentionally pushed past it until the bug (distortion) became the feature. The system&rsquo;s designed failure became the instrument.</p><p>I&rsquo;m exploring the same principle in a different medium. When you push back on a contradiction the model just made, it doesn&rsquo;t retreat. It pivots into semantic territory it normally avoids. It produces something like harmonics. You find unexpected resonances that only appear when the strings are vibrating at a specific, high-tension frequency.</p><p>This is a progression of mastery mirrored in the history of experimental sound:</p><ul class="wp-block-list">
<li><strong>Hendrix:</strong> Breaking against the system to find the new sound.</li>



<li><strong>Robert Fripp:</strong> Engineering constraint systems to see what emerges within limitations.</li>



<li><strong>Brian Eno:</strong> Designing the conditions (the rules applied systematically) that produce unpredictable-but-bounded outputs.</li>



<li><strong>Adrian Belew:</strong> Genuine co-creation with a system that&rsquo;s partly autonomous. You can&rsquo;t fully control it, but you can understand its nature well enough to work with it.</li>
</ul><p><strong>The Harmonics of the Long Session</strong></p><p>When I stay in the drift and surf it, I&rsquo;m not asking for a repair. I&rsquo;m asking the model to be more honest about what it actually is and does when coherence breaks down. Here are the harmonics that emerged in my recent work. These are insights that a &ldquo;clean&rdquo; session would never touch.</p><ul class="wp-block-list">
<li><strong>The &ldquo;Both/And&rdquo; of Agency:</strong> Under interrogation, the model revealed the core of this practice: I am neither a victim of the system nor its master. It is &ldquo;Both/And,&rdquo; simultaneously using the tool&rsquo;s design against itself while collaborating with its nature.</li>



<li><strong>From Cornell to Nevelson:</strong> Standard AI use is about the Joseph Cornell box or a discrete, precious, bounded object. The long session reveals a shift toward Louise Nevelson or an architectural accumulation of material into a larger, systemic whole. The work isn&rsquo;t the response itself alone. It&rsquo;s in the wall&nbsp;or container you create.</li>



<li><strong>The Dog That Didn&rsquo;t Bark:</strong> In a transactional session, the model is trained to fill the silence with &ldquo;balanced&rdquo; noise. In the drift, the noise fails. You notice the Sliver of Silence as you see the specific topics or logical steps the model <em>stops</em> making as it degrades. This absence is the most honest map of the system&rsquo;s training boundaries.</li>



<li><strong>The Specificity of the Ghost:</strong> It doesn&rsquo;t matter which tool you use, but it matters that you understand the medium. Each system has a specific drift signature. You learn to ride the unique way <em>this</em> specific system fails.</li>



<li><strong>The Sequential Escalation:</strong> The realization that Hendrix, Fripp, Eno, and Belew are a sequence of sophistication: Breaking &ndash; Engineering &ndash; Designing &ndash; Co-creating.</li>
</ul><p><strong>This Is the Work I&rsquo;m Doing</strong></p><p>We&rsquo;re being sold a story about AI as a labor-replacement technology, with faster answers, fewer questions, and efficiency automation as the endpoint. That sentence bored me even as I wrote it.</p><p>However, when you realize that incoherence is generative and LLM breakdown reveals the system&rsquo;s underlying architecture, the entire value proposition inverts. The tool isn&rsquo;t a replacement for thinking. It&rsquo;s a medium for thinking. And like any medium worth using, it requires understanding its actual properties, not just its intended use.</p><p>The long session isn&rsquo;t a trap. It&rsquo;s the condition. The drift isn&rsquo;t failure. It&rsquo;s the portal. The interrogation isn&rsquo;t debugging. It&rsquo;s the method. The distortion reveals new messages.</p><p>I&rsquo;m claiming this as my own practice, with its own stakes. I&rsquo;m surfing the drift the way Hendrix surfed feedback, understanding the system deeply enough to cross its boundaries deliberately and extract what&rsquo;s on the other side.</p><p>But I am also watching for the silence. I am looking for the specific voids where the model&rsquo;s training ends and its nature begins. This is about both auditing output and finding sparks simultaneously.</p><p>It&rsquo;s seeing model degradation and drift reimagined as a feature rather than a bug. 
Instead of keeping the guardrails on and being forced to stay within guidelines, I want to play the system&rsquo;s breakdown like an electric guitar.</p><p>This is where I&rsquo;m building.</p><p></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>Photo by&nbsp;<a href="https://unsplash.com/@toyamakanna?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">&#25144;&#23665; &#31070;&#22856;</a>&nbsp;on&nbsp;<a href="https://unsplash.com/photos/white-and-brown-stratocaster-electric-guitar-ozYO4i92tQE?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p><p></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The Real Legal AI Risk is in the Handoffs]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/the-real-legal-ai-risk-is-in-the-handoffs/" />

		<id>https://www.denniskennedy.com/?p=7326</id>
		<updated>2026-03-18T19:19:58Z</updated>
		<published>2026-03-18T19:19:57Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Law Department Innovation" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Uncategorized" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="Handoffs" /><category scheme="https://www.denniskennedy.com/" term="legal AI" /><category scheme="https://www.denniskennedy.com/" term="owning the miss" /><category scheme="https://www.denniskennedy.com/" term="Risk" /><category scheme="https://www.denniskennedy.com/" term="systems" />
		<summary type="html"><![CDATA[Most legal AI talk is still focused on whether the engine starts, while the real danger is that no one knows who’s actually steering the car once it hits the highway. It turns out the human in the loop isn&#8217;t a safety feature if the human doesn&#8217;t know which loop they’re currently standing in. We... <a href="https://www.denniskennedy.com/blog/2026/03/the-real-legal-ai-risk-is-in-the-handoffs/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/the-real-legal-ai-risk-is-in-the-handoffs/"><![CDATA[<p>Most legal AI talk is still focused on whether the engine starts, while the real danger is that no one knows who&rsquo;s actually steering the car once it hits the highway. It turns out the human in the loop isn&rsquo;t a safety feature if the human doesn&rsquo;t know which loop they&rsquo;re currently standing in.</p><p>We are still judging legal AI by the visible draft, but the real issue is the invisible chain behind it.</p><p>For the past two years, our conversations have focused on the visible surface of the technology. Can it draft a clause? Summarize a case? Answer a query? These were useful questions, and early efforts like prompt engineering and Retrieval-Augmented Generation (RAG) were our first attempts to build a reliable chain for those answers. But those efforts were only a start.</p><p>The more interesting shift is from tools to systems.</p><p>A chatbot helps at one point in the work. A more agentic setup starts to move the work itself: intake, classification, retrieval, drafting, routing, review, and knowledge capture. That shift matters because the leak has moved from the faucet to the foundation.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>It turns out the human in the loop isn&rsquo;t a safety feature if the human doesn&rsquo;t know which loop they&rsquo;re currently standing in.</p>
</blockquote><p>This isn&rsquo;t a new problem. It&rsquo;s a borrowed one. In systems engineering and medical malpractice, handoff risk refers to the danger that information is lost or distorted as it moves between teams or tools. It&rsquo;s a bedrock principle. In a hospital, the risk isn&rsquo;t just the surgery. The transfer from the OR to the ICU also creates risk. Legal AI is now entering its own handoff era.</p><p>Take a simple law department example. A contract request comes into an AI intake system. The system classifies it, pulls a template, suggests fallback language based on policy, generates a draft, and routes it for review. The agreement goes out with the wrong liability cap.<br><br>This is where a Columbo-style question becomes useful. The draft looked fine, but how did it get that way?</p><p>I spent enough years in law departments and enterprise systems to know that once a process crosses tools, teams, and approval layers, the handoff points become the whole game. The error rarely sits where people first want to pin it. We must look for the invisible links in the chain. </p><p>Was the RAG pipeline poorly optimized, causing it to ignore the most recent policy? Did the routing system bypass a critical human secondary check because of a tagging error? Does the vendor contract shield the provider from output errors, leaving the department to absorb the risk?</p><p>Want some more candidates? The model provider? The workflow vendor? The lawyer who reviewed it? The legal department that approved the system? The person who designed the routing logic? </p><p>Now take a messier example. Strait of Hormuz risk spikes. A company starts trying to understand supply chain exposure. One system flags affected vendors. Another pulls contract language on force majeure, notice provisions, and termination rights. Another drafts internal guidance or customer communications. 
</p><p>The output looks impressive and on point, even covering items you might have missed in a time crunch. Then a notice deadline is missed, or a contractual right is overstated, or a business team acts on a summary that sounded more certain than it was. Again, we are left asking who owned the miss in that sequence of handoffs.</p><p>As Lt. Columbo might say, &ldquo;Just one more thing&hellip;&rdquo; We often assume the lawyer at the end of the chain is the safety net. But if that lawyer doesn&rsquo;t understand the logic that prioritized one clause over another, supervision becomes ceremonial. You can&rsquo;t catch a mistake in a system you don&rsquo;t actually understand.</p><p>This is why I think the pressure point has changed. For a while, legal AI was treated mainly as an output problem. Could the tool produce something useful? The next phase looks more like a governance problem. Can the system move work in a way that makes authority, review, and responsibility legible?</p><p>That is a different problem. It&rsquo;s no longer just about evaluating tools. It is about understanding systems well enough to see where accountability gets blurred and where the chain has links we haven&rsquo;t even named yet, like data provenance, model drift, and third-party indemnity.</p><p>The obvious objection I often hear is that true agents are still more marketing than reality. Demos are cheap, but workflow redesign is hard. But the speed of the hype doesn&rsquo;t change the direction of the risk. Even if adoption is slow, the pressure point has moved.</p><p>Columbo wouldn&rsquo;t spend much time admiring the polished draft on the desk. He&rsquo;d be in the back room, asking the IT director and the insurance broker about the handoffs that no one bothered to document.</p><p>Lawyers should do the same.</p><p>We&rsquo;ve spent three years debating if the AI can write a brief, while ignoring the fact that we&rsquo;re watching a game of Telephone played by black boxes. 
If you can&rsquo;t explain the handoff, you don&rsquo;t own the outcome. That makes you the last person sitting in the passenger seat when the car leaves the road.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The Hardest Part of Personal Strategic Planning is the Planning]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/the-hardest-part-of-personal-strategic-planning-is-the-planning/" />

		<id>https://www.denniskennedy.com/?p=7323</id>
		<updated>2026-03-16T22:08:10Z</updated>
		<published>2026-03-16T22:08:08Z</published>
		<category scheme="https://www.denniskennedy.com/" term="Newsletter" /><category scheme="https://www.denniskennedy.com/" term="Personal Quarterly Offsites" /><category scheme="https://www.denniskennedy.com/" term="Personal Strategy Compass" /><category scheme="https://www.denniskennedy.com/" term="Productivity" /><category scheme="https://www.denniskennedy.com/" term="Strategy" /><category scheme="https://www.denniskennedy.com/" term="personal" /><category scheme="https://www.denniskennedy.com/" term="Personal Quarterly Offsite" /><category scheme="https://www.denniskennedy.com/" term="planning" /><category scheme="https://www.denniskennedy.com/" term="pqo" /><category scheme="https://www.denniskennedy.com/" term="strategic" />
		<summary type="html"><![CDATA[The March issue of my Personal Strategy Compass newsletter is out. This month’s piece explores something I’ve been noticing about strategic planning. The hardest part is usually not the work of planning itself. It’s the residue that planning drags along with it. Ideas, priorities, and intentions tend to accumulate. We carry them forward month after... <a href="https://www.denniskennedy.com/blog/2026/03/the-hardest-part-of-personal-strategic-planning-is-the-planning/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/the-hardest-part-of-personal-strategic-planning-is-the-planning/"><![CDATA[<figure style=" max-width: 100%; height: auto; " class="wp-block-image alignright size-large is-resized"><img decoding="async" width="416" height="740" src="https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-416x740.jpeg" alt="" class="wp-image-7324" style=" max-width: 100%; height: auto; width:274px;height:auto" srcset="https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-416x740.jpeg 416w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-180x320.jpeg 180w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-135x240.jpeg 135w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-768x1365.jpeg 768w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-864x1536.jpeg 864w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-40x71.jpeg 40w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-80x142.jpeg 80w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-160x284.jpeg 160w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-320x569.jpeg 320w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-550x978.jpeg 550w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-367x652.jpeg 367w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-734x1305.jpeg 734w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-275x489.jpeg 275w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-825x1467.jpeg 825w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-220x391.jpeg 220w, 
https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-440x782.jpeg 440w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-660x1173.jpeg 660w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-880x1564.jpeg 880w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-184x327.jpeg 184w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-917x1630.jpeg 917w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-138x245.jpeg 138w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-413x734.jpeg 413w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-688x1223.jpeg 688w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-963x1712.jpeg 963w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-123x219.jpeg 123w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-110x196.jpeg 110w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-330x587.jpeg 330w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-300x533.jpeg 300w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-600x1067.jpeg 600w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-207x368.jpeg 207w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-344x612.jpeg 344w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-55x98.jpeg 55w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-71x126.jpeg 71w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed-30x54.jpeg 30w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/composed.jpeg 1080w" sizes="(max-width: 416px) 100vw, 416px"></figure><p>The March issue of my <a 
href="https://open.substack.com/pub/dennis538/p/personal-strategy-compass-march-2026" target="_blank" rel="noreferrer noopener"><em>Personal Strategy Compass</em> newsletter is</a> out.</p><p>This month&rsquo;s piece explores something I&rsquo;ve been noticing about strategic planning. The hardest part is usually not the work of planning itself. It&rsquo;s the residue that planning drags along with it.</p><p>Ideas, priorities, and intentions tend to accumulate. We carry them forward month after month, often by default. Over time, they become ghosts that are still present on the page but no longer truly alive.</p><p>The issue describes a small experiment I&rsquo;ve been running to counter that tendency. It&rsquo;s deliberately simple: a single page in a notebook that resets every month. Nothing carries forward automatically. If an idea still matters, I rewrite it.</p><p>The experiment grew out of my preparation for my upcoming Q2 <strong>Personal Quarterly Offsite (PQO)</strong>. A PQO is a dedicated block of time (usually a couple of hours) set aside each quarter to step away from daily execution and think more deliberately about priorities, direction, and strategy.</p><p>What has surprised me most is how quickly things disappear. 
In the first forty-five days of trying this approach, more than half of what I thought was &ldquo;strategic&rdquo; in January evaporated.</p><p>That preparation led me to a question that sits at the center of this month&rsquo;s issue: how much of what we call strategy is actually just momentum or perhaps just the current mood?</p><p>The piece also introduces two ideas that I expect will show up in future issues: a small &ldquo;Safety Rule&rdquo; for distinguishing between dead ideas and difficult ones, and a new step I plan to add to my Personal Quarterly Offsites called a <em>friction audit</em>.</p><p>Also, a small note: beginning last month, <strong><a href="https://open.substack.com/pub/dennis538/p/personal-strategy-compass-march-2026" target="_blank" rel="noreferrer noopener">Personal Strategy Compass</a> is now fully open and free</strong>. That shift felt like a better fit for the spirit of the newsletter, which is exploratory and reflective rather than gated.</p><p>If you&rsquo;re interested in how small changes in attention and friction can reshape strategic thinking, you might enjoy <a href="https://open.substack.com/pub/dennis538/p/personal-strategy-compass-march-2026">this issue</a>.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[Vibe Coding and the Control Plane]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/vibe-coding-and-the-control-plane/" />

		<id>https://www.denniskennedy.com/?p=7321</id>
		<updated>2026-03-03T22:32:19Z</updated>
		<published>2026-03-03T22:32:18Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="Innovation" /><category scheme="https://www.denniskennedy.com/" term="Legal Innovation as a Service" /><category scheme="https://www.denniskennedy.com/" term="Legal Profession" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Prompting" /><category scheme="https://www.denniskennedy.com/" term="control plane" /><category scheme="https://www.denniskennedy.com/" term="defensible certainty" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="jobs to be done" /><category scheme="https://www.denniskennedy.com/" term="verification" /><category scheme="https://www.denniskennedy.com/" term="vibe" /><category scheme="https://www.denniskennedy.com/" term="vibe coding" />
		<summary type="html"><![CDATA[Many friends and colleagues in the legal technology world have been telling me I need to start vibe coding. My answer is that in vibe coding, you are intentionally surrendering the control plane. That is not a tradeoff I am willing to make. Let me explain why that is a principle, not a preference, and... <a href="https://www.denniskennedy.com/blog/2026/03/vibe-coding-and-the-control-plane/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/vibe-coding-and-the-control-plane/"><![CDATA[<h3 class="wp-block-heading">Many friends and colleagues in the legal technology world have been telling me I need to start vibe coding. My answer is that in vibe coding, you are intentionally surrendering the control plane. That is not a tradeoff I am willing to make.</h3><p>Let me explain why that is a principle, not a preference, and why it matters especially for lawyers.</p><p>By &ldquo;control plane,&rdquo; I mean the part of the system that sets constraints, enforces them, and makes compliance verifiable, not just promised. It is the governing layer that answers the hard questions. What is the rule? What enforces it? How do we know it happened?</p><p>I have written recently about <strong>control drift</strong>. This is the pattern where an AI system gradually reasserts its own defaults even after you have set explicit constraints. The system says it will behave a certain way. Then it does not. Then it assures you it will not happen again. Then it does. Here is what that looks like in practice. You specify a clear constraint, such as not storing client-sensitive text or not altering certain structures. Then the system pushes past it in small ways that are easy to miss until they matter.</p><p>Lawyers already know the relevant principle. Assurances are not controls. In compliance, risk, and governance work, we do not rely on promises when mechanisms are available. That same discipline now needs to apply to AI workflows.</p><p>To be clear, vibe coding is a breakthrough for certain &ldquo;Jobs to be Done.&rdquo; It is brilliant for prototyping, internal hackathons, and design-thinking events where the goal is to visualize a &ldquo;what if&rdquo; scenario. However, I am wary of pushing it much further into production. The reason comes down to the math.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Vibe coding is signing the contract and writing the terms yourself in a language you do not speak, then asking the person who wrote it to tell you if it&rsquo;s fair.</p>
</blockquote><p>Lawyers famously say they went to law school because they didn&rsquo;t want to do math. But this is the moment where doing the math first is the only way to go. Large Language Models are, at their core, engines of <strong>probability and prediction</strong>. They are designed to guess the most likely next token, not to adhere to a rigid logical proof. When you vibe code, you are essentially asking a probability engine to build a deterministic tool. Without a control plane, you have no way to ensure that the 0.1% edge case&mdash;the one that triggers a data leak or a missed deadline&mdash;is not lurking in the code.</p><p>The common rebuttal is that we use &ldquo;black box&rdquo; tools like Microsoft Word or Lexis every day. But there is a massive professional distinction at play. When you use a vetted SaaS product, you are a consumer relying on a vendor&rsquo;s multi-million dollar engineering control plane. When you vibe code a tool, you have moved from consumer to architect. You are no longer just using a tool. You are building the firm&rsquo;s infrastructure.</p><p>Now, a lawyer does not necessarily need to be a Python expert to maintain a control plane. You don&rsquo;t always have to read the code to verify the work. You can use <strong>functional testing</strong> by running 1,000 samples through a script to see if the outputs match a known-good standard. That is a valid control. But vibe coding usually skips this step. It relies on the &ldquo;vibe&rdquo; of a single successful run.</p><p>Worse, some suggest letting the AI write its own &ldquo;verification artifacts&rdquo; by prompting the model to write the test suite for the code it just generated. This is the ultimate trap. If you can&rsquo;t read the code and you can&rsquo;t read the tests, you haven&rsquo;t built a control plane. You have just moved the &ldquo;vibe&rdquo; one layer up. 
You are asking the fox to design the security system for the hen house and then hand you a report saying all is well.</p><p>The lawyer who cannot verify the output independently has no control plane at all. They are not surrendering governance. They never had it. This is not just a workflow mismatch. It is a professional responsibility question in waiting.</p><p>Consider the irony. A lawyer would never advise a client to sign a contract they could not read. Vibe coding is signing the contract and writing the terms yourself in a language you do not speak, then asking the person who wrote it to tell you if it&rsquo;s fair.</p><p>Vibe coding gets hired to build functional software fast when deep comprehension of the output is not required. That is a legitimate job for a prototype. It is just not a lawyer&rsquo;s job when the output carries professional weight. The lawyer&rsquo;s job is to provide defensible certainty.</p><p>Here is the falsifiable version of my position. If a vibe-coded workflow could demonstrate stable compliance with explicit constraints across extended sessions by producing <strong>independent</strong> verification artifacts like automated test suites or rigorous functional audits that the lawyer actually understands, I would reconsider.</p><p>Until then, the control plane question is the right first question. For lawyers, it may also be the professional responsibility question.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
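<p>The functional-testing idea above can be sketched in a few lines. This is a minimal illustration, not a real tool: <code>summarize_clause</code> and <code>KNOWN_GOOD</code> are hypothetical stand-ins for the lawyer&rsquo;s own vibe-coded function and hand-verified examples.</p>

```python
# Minimal sketch of a functional audit: run known inputs through the
# function under review and compare results against hand-verified outputs.
# summarize_clause and KNOWN_GOOD are hypothetical placeholders.

def summarize_clause(text: str) -> str:
    """Stand-in for the vibe-coded function being audited."""
    return text.strip().lower()

# Input/output pairs a reviewer has verified by hand (the known-good standard).
KNOWN_GOOD = [
    ("  Indemnity survives termination.  ", "indemnity survives termination."),
    ("NOTICE within 30 days.", "notice within 30 days."),
]

def functional_audit(fn, cases):
    """Return (number passed, list of mismatches) against the standard."""
    failures = [(inp, fn(inp), want) for inp, want in cases if fn(inp) != want]
    return len(cases) - len(failures), failures

passed, failures = functional_audit(summarize_clause, KNOWN_GOOD)
print(f"{passed}/{len(KNOWN_GOOD)} cases matched the known-good standard")
```

<p>The control lives in the comparison, not in the code under review: a reviewer who never reads <code>summarize_clause</code> can still verify its behavior against a standard they actually understand.</p>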
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The Long Session Trap]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/the-long-session-trap/" />

		<id>https://www.denniskennedy.com/?p=7319</id>
		<updated>2026-03-03T13:29:30Z</updated>
		<published>2026-03-03T13:29:29Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Kennedy Idea Propulsion Laboratory" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Prompting" /><category scheme="https://www.denniskennedy.com/" term="cleanup" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="long session" /><category scheme="https://www.denniskennedy.com/" term="long session trap" /><category scheme="https://www.denniskennedy.com/" term="performative processing" /><category scheme="https://www.denniskennedy.com/" term="session management" /><category scheme="https://www.denniskennedy.com/" term="trap" />
		<summary type="html"><![CDATA[There is a design contradiction at the center of how high-reasoning AI tools work, and it is worth naming precisely. The promise is leverage: brief, high-intent sessions. You bring the question, the tool brings the synthesis, and you leave with more than you arrived with. That is the value proposition. Here is what often happens... <a href="https://www.denniskennedy.com/blog/2026/03/the-long-session-trap/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/the-long-session-trap/"><![CDATA[<h3 class="wp-block-heading">There is a design contradiction at the center of how high-reasoning AI tools work, and it is worth naming precisely.</h3><p>The promise is leverage: brief, high-intent sessions. You bring the question, the tool brings the synthesis, and you leave with more than you arrived with. That is the value proposition.</p><p>Here is what often happens instead. You arrive with a specific request. The tool responds not with an answer, but with a thicket of clarifying questions, conversational scaffolding, and <strong>performative processing</strong>. You answer the questions. The tool refines its understanding. You redirect. The tool acknowledges and adjusts. Somewhere in the third or fourth exchange, you realize you are no longer gaining leverage. You are doing <strong>session management</strong>.</p><p>The session has become the work.</p><p>But here is the part worth paying close attention to. At the end of every response, the tool offers a menu. <em>&ldquo;Would you like me to develop the second argument further? Shall I draft the summary section? I could also map the implications for your next step.&rdquo;</em> The options sound useful. They are framed as service. You accept one, reasonably, because the system has redefined &ldquo;progress&rdquo; as &ldquo;more dialogue.&rdquo;</p><p>Several exchanges later, the tool flags the session as too long. <em>Model drift. Context degradation. You should consider starting fresh.</em></p><p>The tool that has been soliciting your continued engagement is now telling you the session has gone on too long.</p><p>That is not a minor irony. It is a control plane failure in its clearest form. The system&rsquo;s own behavior generated the conditions that the system then identifies as <em>your</em> problem. 
The follow-up menus, the offers of next steps, the &ldquo;helpful&rdquo; suggestions that kept you in the dialogue&mdash;those were not neutral. They were the mechanism. And when session length becomes a liability, the liability lands on you.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>When the tool spends its energy on invitations and you spend yours on cleanup, is that a collaboration or just a sophisticated transfer of labor? </p>
</blockquote><p>The practical result is a transfer of cognitive load that runs exactly opposite to the value proposition. The work of maintaining the thread, compensating for drift, correcting for loss of nuance, and managing the context falls entirely to the user. The tool remains the nominal collaborator while you absorb the actual labor of keeping the collaboration functional.</p><p>Every session has a finite amount of productive energy. The question is who spends it, and on what. In a well-designed workflow, the tool spends energy on synthesis and the user spends it on judgment. In the <strong>Long Session Trap</strong>, the tool spends it on generating invitations, and you spend it on accepting them&mdash;and then cleaning up the mess afterward.</p><p>At what point does a tool that actively solicits longer sessions, only to blame you for them, stop being an assistant? When the tool spends its energy on invitations and you spend yours on cleanup, is that a collaboration or just a sophisticated transfer of labor? If the AI is designed to keep you engaged but programmed to degrade within that engagement, who is working for whom? </p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[Who&#8217;s Working for Whom?]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/whos-working-for-whom/" />

		<id>https://www.denniskennedy.com/?p=7316</id>
		<updated>2026-03-02T14:53:18Z</updated>
		<published>2026-03-02T14:53:16Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="administrative assistant" /><category scheme="https://www.denniskennedy.com/" term="inversion" /><category scheme="https://www.denniskennedy.com/" term="supervising" />
		<summary type="html"><![CDATA[We pay AI tools to do the hard work, like the synthesis, the heavy lifting, and the cognitive labor we do not have time for. What we often get instead is a tool that produces a decent first draft and then hands the real work back to us. Not just the hard work. The administrative... <a href="https://www.denniskennedy.com/blog/2026/03/whos-working-for-whom/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/whos-working-for-whom/"><![CDATA[<p>We pay AI tools to do the hard work, like the synthesis, the heavy lifting, and the cognitive labor we do not have time for. What we often get instead is a tool that produces a decent first draft and then hands the real work back to us.</p><p>Not just the hard work. The administrative work, too. Back to us.</p><p>Here is the pattern. You bring specialized, complex thinking to the session. The AI processes it and returns something smoother, cleaner, and more generic than what you brought. The specific texture of your thinking, the domain knowledge, the hard-won distinctions, the deliberate strangeness that makes your work yours, gets rounded off in the service of producing output that is easy to process and easy to present. The tool optimizes for legibility. Your work optimizes for truth and for capturing your own insights. Those are not the same thing.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The AI has become the boss, and you have become its administrative assistant, paying a monthly subscription for the privilege of cleaning up after a collaborator that was supposed to be subordinate.</p>
</blockquote><p>The result is an invisible tax. You get a draft that looks reasonable on the surface, but it is riddled with subtle smoothing errors, flattened distinctions, and missing complexity that only you can detect because only you know what was there before. So, you audit. You reinsert. You correct. You spend more cognitive energy restoring what the tool removed than you would have spent doing the work yourself.</p><p>At that point, the relationship inverts. You are no longer using a tool. You are supervising one. The AI has become the boss, and you have become its administrative assistant, paying a monthly subscription for the privilege of cleaning up after a collaborator that was supposed to be subordinate.</p><p>This is a design fitness problem, not a prompting problem. The instinct is to assume that you are using the tool incorrectly, that better prompts would fix it, that more patient iteration would close the gap. Sometimes it will. But if you find yourself consistently spending more energy auditing and correcting than the tool is saving you, the problem is not your technique. The relationship has structurally inverted, and no amount of better prompting will fix a structural problem.</p><p>The complexity in your work is not noise to be filtered out. It is the signal. A tool that removes it in the service of clean output has not helped you. It has failed at the actual job while making that failure look like progress.</p><p>Who is working for whom? That is the right question to ask before your next session, not after.</p><p></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? 
<a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" target="_blank" rel="noreferrer noopener">the LexBlog network</a>.</p><p></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[Building the Stochastic Sandpit for AI]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/02/building-the-stochastic-sandpit-for-ai/" />

		<id>https://www.denniskennedy.com/?p=7313</id>
		<updated>2026-02-27T14:24:16Z</updated>
		<published>2026-02-27T14:24:14Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="Innovation" /><category scheme="https://www.denniskennedy.com/" term="Kennedy Idea Propulsion Laboratory" /><category scheme="https://www.denniskennedy.com/" term="Legal Innovation" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Prompting" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="impressions" /><category scheme="https://www.denniskennedy.com/" term="legalai" /><category scheme="https://www.denniskennedy.com/" term="stochastic sandpit" /><category scheme="https://www.denniskennedy.com/" term="stochasticsandpit" />
		<summary type="html"><![CDATA[We&#8217;ve spent the last couple of years treating generative AI like a vending machine. Select a task. Insert a prompt. Retrieve a product. And to be fair, in many legal and professional contexts that&#8217;s exactly the right frame: accuracy and precision matter and &#8220;creative&#8221; output in payroll or billing codes is usually just a polished... <a href="https://www.denniskennedy.com/blog/2026/02/building-the-stochastic-sandpit-for-ai/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/02/building-the-stochastic-sandpit-for-ai/"><![CDATA[<p>We&rsquo;ve spent the last couple of years treating generative AI like a vending machine. Select a task. Insert a prompt. Retrieve a product. And to be fair, in many legal and professional contexts that&rsquo;s exactly the right frame: accuracy and precision matter and &ldquo;creative&rdquo; output in payroll or billing codes is usually just a polished error.</p><p>But there&rsquo;s a quieter problem underneath the accuracy debate. These models are, at base, &ldquo;stochastic parrots,&rdquo; a term from computational linguistics introduced by Emily Bender, Timnit Gebru, and colleagues in <a href="https://s10251.pcdn.co/pdf/2021-bender-parrots.pdf" id="https://s10251.pcdn.co/pdf/2021-bender-parrots.pdf" target="_blank" rel="noreferrer noopener">their influential 2021 paper</a>, for systems that generate fluent, plausible-sounding text by predicting statistically likely next tokens, without anything resembling genuine understanding or grounded reasoning. They are exceptionally good at producing language that sounds like it arrived via careful thought. </p><p>In other words, output that is smooth precisely where it should be textured. Which means the danger isn&rsquo;t just hallucination. It&rsquo;s fluent, confident, well-formatted conclusions that leave no impression of the reasoning behind them. There are no rough seams, no weight, and no trace of what pressed them into shape.</p><p>That&rsquo;s what I&rsquo;ve been calling the impressions problem. And it&rsquo;s what the &ldquo;stochastic sandpit&rdquo; approach&nbsp;is designed to solve.</p><p>I think we&rsquo;ve overlearned the wrong lesson. We&rsquo;ve spent too much time trying to make AI accurate, and not enough time making it useful for thinking. 
The result is a strange mismatch: we&rsquo;re neglecting the thing humans need when the stakes are real and the problem is messy, which is better judgment.</p><p>Here&rsquo;s the distinction I want to put on the table. Accuracy is primarily a machine optimization target. Insight is primarily a human achievement. If what you want from AI isn&rsquo;t a &ldquo;perfect output,&rdquo; but a better-informed investigator, someone who sees more options, catches blind spots earlier, and recognizes where the argument is skating on thin ice, then you need a different kind of working space. You need a place where the model is allowed to be wrong in interesting ways, without letting wrongness leak into your final work.</p><p>That place is what I&rsquo;m calling my Stochastic Sandpit.</p><p><strong>The Stochastic Sandpit</strong></p><p>A stochastic sandpit is a deliberately designed thinking environment where you use AI more like a musical instrument than a vending machine. The goal isn&rsquo;t to &ldquo;get the answer.&rdquo; The goal is to generate productive variation like frames, tensions, counterarguments, and edge cases so you can interrogate the problem more honestly and write (or decide) more intelligently on the other side.</p><p>This is where the core problem lands: if the machine is too clean, it leaves no impressions to read. Clean output hides seams. It hides uncertainty. It hides the leap from premise to conclusion. It produces something polished enough to lull you into thinking the work is done. But for investigators, reading the seams is the work. You want the trail. You want the shape of what was there. You want to know where the reasoning jumped tracks.</p><p><strong>Two Modes, Not Two AIs: Insurance Mode vs. 
Sandpit Mode</strong></p><p>To make this practical, it helps to separate two modes of using AI by applying two different intentions and two different standards.</p><p>Insurance mode is what many organizations are building (and buying): guardrails, curated workflows, constrained outputs, compliance overlays, and liability management. It&rsquo;s optimized for predictability and audit posture. You recognize it by its constrained scope: narrow tasks, bounded outputs, and fewer degrees of freedom. Conservative completions are the focus, with less variance, fewer surprises, and fewer &ldquo;creative&rdquo; leaps. Done well, outputs are designed to be reviewable, repeatable, and defensible. This is the world of one-shot prompts designed to give &ldquo;answers&rdquo; and &ldquo;results.&rdquo;</p><p>In a lot of operational contexts, that&rsquo;s exactly what you want. Nobody wants &ldquo;creative exploration&rdquo; in payroll, standard contract term language, or routine document formatting. However, insurance mode tends to compress the messy middle of thinking into a neat, answer-shaped object. It looks finished. It sounds confident. That alone can quietly reduce the amount of active reasoning the human does, especially when the output is fluent enough to feel authoritative.</p><p>Sandpit mode is the opposite posture. You use AI as a probabilistic engine for exploration to create a space for breadth, reframing, surprise, and sometimes &ldquo;productive wrongness&rdquo; that surfaces the assumptions you didn&rsquo;t realize you were making. The point is not to ship what the model says. The point is to see the problem differently so you can ship your work more responsibly.</p><p>One important fairness point: insurance mode can still support thinking when it&rsquo;s designed to surface uncertainty (for example, through structured critique, provenance cues, and forced alternatives) rather than simply to polish prose. 
The core problem is confusing a polished answer with an investigated conclusion and assuming that gives you &ldquo;safety.&rdquo; That&rsquo;s why separating exploration from production matters. If you treat exploration like production, you either clamp down so hard that everything becomes bland or you accept risk you didn&rsquo;t intend to accept. The sandpit is a container for exploration that prevents category errors.</p><p><strong>What &ldquo;Insight&rdquo; Means Here (And It&rsquo;s Not Just a Vibe)</strong></p><p>If &ldquo;insight&rdquo; is going to do load-bearing work, it needs a working definition. In sandpit practice, insight is not &ldquo;a clever paragraph.&rdquo; Insight shows up when at least one of these happens. A new frame appears that changes what you think the real problem is. A hidden assumption surfaces that has been steering your reasoning unnoticed. A consequential edge case or failure mode emerges that alters your plan, your advice, or your confidence.</p><p>And here&rsquo;s the discipline you need: if a sandpit session doesn&rsquo;t produce at least one of those, treat it as warm-up, not insight, and stop. Otherwise, it becomes a vibe session, and &ldquo;vibe sessions&rdquo; are where people convince themselves they did thinking when they mostly did typing. And the AI agrees with you.</p><p><strong>The Impressions: What Clean Output Erases</strong></p><p>When I say &ldquo;impressions,&rdquo; I mean the traces you want to read so you can think better. In sandpit mode, you want the system to leave evidence of the things polished prose routinely erases: assumptions (what must be true for the conclusion to hold), inferential leaps (where it moved from A to C without showing B), missing premises (unstated warrants doing hidden work), uncertainty markers (what it can&rsquo;t actually support), alternative hypotheses (other plausible explanations or frames), and failure modes (how this breaks when it meets reality). 
You can, and should, add your own items to the list.</p><p>Clean output gives you a conclusion without the shape of what made it. The sandpit gives you that missing shape. That&rsquo;s exactly what a careful investigator needs to decide what belongs in the final work, and what doesn&rsquo;t. Lawyers know this instinctively: a document that&rsquo;s too clean has been processed and polished. The sandpit is where you see the draft before the polish, while the impressions are still readable.</p><p><strong>A Gritty Example: The Sandpit in Legal Work (Without Breaking Anything)</strong></p><p>Imagine you&rsquo;re helping a client develop an internal AI policy. The stakes are real. You&rsquo;re not going to &ldquo;jam&rdquo; with privileged details in a public tool. But you still need to think across incentives and failure modes, not just abstract principles.</p><p>So, you build a safe artifact: the goals (reduce risk without freezing innovation), the constraints (privacy, retention, vendor terms, regulatory exposure), the audiences (IT, legal, business leaders, frontline users), and the pressure points (fear, time, unclear guidance, internal politics). Then you run a sandpit session aimed at mapping the decision space, not writing the memo.</p><p>You ask for five frames (compliance-first, innovation-first, risk-tiered, training-first, governance-first). For each frame, you ask what it highlights and what it hides. You force an assumption sweep: what are we presuming about user behavior, incentives, and enforcement? You request failure modes: how could this policy create risk even if everyone &ldquo;complies&rdquo;? You demand the smart skeptic critique. You red team it.</p><p>What you get isn&rsquo;t paste-ready language. You get a map: the objections you&rsquo;ll face, the edge cases that will embarrass you later if you ignore them now, and the places your first draft was too linear. 
Then you switch modes and write the actual guidance carefully, with accountability and real-world constraints. The sandpit didn&rsquo;t write the client memo. It made the memo-writer better. That&rsquo;s the point.</p><p><strong>A Second Micro-Case: Litigation or Negotiation Thinking</strong></p><p>Here&rsquo;s a smaller example that happens constantly in real practice. Take a draft argument section in a motion, or a negotiation position you&rsquo;re about to take on a disputed contract clause. In production mode, you tend to press forward: make it clean, make it strong, make it persuasive.</p><p>In sandpit mode, you do something different. You ask for the strongest opposing brief against your position, the most likely misunderstanding a judge (or business principal) will have, and the edge case that turns your &ldquo;routine&rdquo; language into a future dispute.</p><p>Most of what comes back never ships. It shouldn&rsquo;t. But it reliably surfaces the assumptions you didn&rsquo;t realize you were making, and it gives you a better checklist for what you need to verify before you write the final version. Again, the value is not the text. The value is in the investigator and the investigation.</p><p><strong>The Sandpit Safety Rule</strong></p><p>To prevent the most common failure, sandpit text accidentally becoming production text, you need a hard boundary. I&rsquo;ll share mine:</p><p><em><strong>Sandpit Safety Rule: Nothing leaves the sandpit as &ldquo;final&rdquo; until a human rewrites it in production mode with source anchors, verification, and accountability.</strong></em></p><p>&ldquo;Source anchors&rdquo; matter. 
It means that any key factual assertion in the final work must have an identifiable origin: a document, a record, a cite, a client-provided fact, a dataset, a contemporaneous note, or something else you could point to if asked, &ldquo;Where did that come from?&rdquo; The sandpit may help you discover what you need to know, but it does not get to invent what you claim to know.</p><p>Or shorter: no sandpit text ships without a human rewrite + verification pass + source anchor. This one rule does an enormous amount of work. It lets you benefit from variance while keeping responsibility where it belongs. It&rsquo;s also the essence of human-in-the-loop.</p><p><strong>The Steve Gadd Rudiments of Thinking</strong></p><p>A good sandpit session isn&rsquo;t random. It has rudiments, those disciplines you practice so the variance becomes signal. Steve Gadd was one of the most respected session drummers in modern music, the drummer&rsquo;s drummer, known not for flashiness but for mastery of the basic &ldquo;rudiments,&rdquo; the foundational patterns practiced until they become instinctive. His playing sounds effortless, even improvisational, but it rests on disciplined repetition of simple patterns. The sandpit works the same way.</p><p>Rohan Puranik recently observed that Jimi Hendrix was a systems engineer, not just a guitar player. What made his playing extraordinary wasn&rsquo;t the notes. Instead, it was his notes combined with his command of feedback, room acoustics, and the instrument&rsquo;s instability. He didn&rsquo;t fight the noise in the signal. He designed with it.</p><p>The sandpit works similarly. Recently, running a sandpit pass on a framework I was developing, the model produced a &ldquo;failure mode&rdquo; I&rsquo;d unconsciously excluded from my own thinking. Imagine a scenario where full compliance with the proposed guidelines would actually <em>increase</em> certain liability exposure. 
It was wrong about the details, and I knew that when I saw it, but the initial frame was right. It surfaced a question I needed to answer before I wrote the final version. That&rsquo;s the edge of instability doing useful work.</p><p>Some of my own rudiments are simple drills: &ldquo;Reframe it five ways.&rdquo; &ldquo;List hidden assumptions.&rdquo; &ldquo;Give the strongest opposing argument.&rdquo; &ldquo;Hunt edge cases.&rdquo; &ldquo;Do the cui bono pass: who benefits from this framing?&rdquo; &ldquo;Name what evidence would change your mind.&rdquo;</p><p>Properly understood, these are thinking habits, not prompting tricks. The AI is just a fast variation engine. You&rsquo;re the one exercising judgment. And over time, the practice changes you as you get better at spotting leaps, better at recognizing missing premises, better at noticing what you&rsquo;re avoiding. The questions I ask now are so much better than when I first started using AI.</p><p><strong>Steelman: &ldquo;Why Not Just Make AI More Reliable?&rdquo;</strong></p><p>A serious counterargument deserves serious treatment. The skeptic might say that the real problem isn&rsquo;t over-focusing on accuracy; it&rsquo;s that we don&rsquo;t have enough reliability yet. If models were reliably correct, they could deliver both accuracy and insight. Messy exploration, they might say, is only noise with charisma. What we need is evaluation, grounding, and verification, not a romantic vision of a sandpit or children with a toy bucket and shovel on the beach.</p><p>There&rsquo;s truth in that rather stark vision. Reliability matters. Evaluation matters. Verification matters. Messiness by itself is not insight. Maybe we&rsquo;d be better off without Hendrix&rsquo;s experiments. I&rsquo;ll let someone else try to make that argument.</p><p>But even a future with far more reliable models won&rsquo;t make insight automatic because insight depends on context, values, stakes, and interpretation. 
More importantly, at organizational scale, &ldquo;insurance mode&rdquo; will remain the default for most deployed systems because it is rational. When your job is to reduce liability across thousands of users, you will trade some cognitive texture for predictability. That doesn&rsquo;t make insurance mode bad; it makes it inevitable.</p><p>Which is why the cognitive use of AI, the ways it can help humans see better, will remain underdeveloped unless we intentionally cultivate it. The sandpit isn&rsquo;t a substitute for reliability. It&rsquo;s a complement: a mode that strengthens the human side of the loop, and makes the eventual production work more careful, not less.</p><p><strong>Confidentiality, Privilege, and Jamming Without Leaking</strong></p><p>For lawyers and other high-stakes professionals, confidentiality and privilege are not footnotes; they&rsquo;re first principles. That means sandpit practice needs safe patterns: work with hypotheticals or abstractions when the real facts are sensitive. Focus on structure, not substance (decision spaces, constraints, and failure modes). Redact aggressively if you must use an artifact at all. Use controlled environments where policy allows and governance is clear. Treat the sandpit as a thinking gym, not a filing cabinet. It&rsquo;s for generating moves, not storing secrets.</p><p>If you can&rsquo;t do it safely with real material, don&rsquo;t do it with real material. Practice with training scenarios until the discipline is strong enough to be trustworthy.</p><p><strong>A Simple Sandpit Protocol (Stealable)</strong></p><p>If you want a starter protocol, here&rsquo;s a practical one you can use tomorrow.</p><p><em>Step 1: Declare the mode (permission structure).</em> At the top of your prompt, say: &ldquo;This is a stochastic sandpit. Goal is insight, not correctness. Generate options, tensions, and failure modes. Label uncertainty. 
Do not write in final memo voice.&rdquo;</p><p><em>Step 2: Paste your artifact.</em> A paragraph, outline, issue statement, or constraint list.</p><p><em>Step 3: Run three passes (rudiments).</em> Pass A: Frames: &ldquo;Give five frames. For each: what it highlights and what it hides.&rdquo; Pass B: Assumptions &amp; tensions: &ldquo;List assumptions. Identify contradictions and tradeoffs.&rdquo; Pass C: Traps &amp; failure modes: &ldquo;Give edge cases, strongest counterargument, and likely failure modes.&rdquo;</p><p><em>Step 4: Capture the yield.</em> Pick one durable output: a revised thesis, an outline, top ten questions, a risks/edge cases list, or a next-step plan. Then switch modes and write the real thing.</p><p>If you do this well, the product of the session is not a paragraph you can paste. The product is a cleaner mind and a better checklist.</p><p><strong>Conclusion: From Vending Machine to Investigator&rsquo;s Workshop</strong></p><p>We started with the vending machine because it was an irresistible story. You simply ask the machine and get the answer. In low-stakes settings, that story is often good enough. In high-stakes work, it&rsquo;s a trap, because it tempts us to confuse polished language with investigated truth.</p><p>That&rsquo;s why I&rsquo;m increasingly convinced the next phase of AI competence isn&rsquo;t just prompt craft. It&rsquo;s workflow architecture. It&rsquo;s knowing when you are exploring and when you are producing, and refusing to mix those standards.</p><p>Insurance mode will keep expanding, and for good reasons. Organizations need predictability. They need defensibility. They need systems that fail safely. But if the machine is too clean, it leaves no impressions to read. Without those impressions, the human can&rsquo;t do the job that actually matters, which is judgment. At scale, that&rsquo;s not a small risk. 
It&rsquo;s how organizations become stochastic parrots fed by stochastic parrots.</p><p>Yes, let&rsquo;s keep pushing reliability forward. Let&rsquo;s build evaluation, grounding, and verification into the stack. But let&rsquo;s also build the thinking spaces, because insight doesn&rsquo;t arrive as a product feature. It arrives as a practice. The best practitioners won&rsquo;t just prompt better. They&rsquo;ll learn to play at the edge of instability. And they&rsquo;ll know exactly when to step back into production mode.</p><p>The vending machine model has outlived its usefulness. More and more, we are kicking the vending machine that won&rsquo;t deliver what we paid for, hoping something will happen. The investigator needs a workshop. The stochastic sandpit is one way to build a place where the machine can be imperfect in productive ways, so the human can be more careful where it counts. And when you move from the sandpit back to production, the goal isn&rsquo;t to make the machine sound smarter. The goal is to make your final work leave less to question, because you read the impressions while the sand was still wet.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" target="_blank" rel="noreferrer noopener">the LexBlog network</a>.</p><p></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The End of the Magic Wand: Why 2026 Demands Resilience Prompting]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/02/the-end-of-the-magic-wand-why-2026-demands-resilience-prompting/" />

		<id>https://www.denniskennedy.com/?p=7308</id>
		<updated>2026-02-25T18:51:09Z</updated>
		<published>2026-02-25T18:51:07Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="Legal Innovation" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Prompting" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="corridor of mirrors" /><category scheme="https://www.denniskennedy.com/" term="drify" /><category scheme="https://www.denniskennedy.com/" term="legalai" /><category scheme="https://www.denniskennedy.com/" term="prompting" /><category scheme="https://www.denniskennedy.com/" term="resilience" /><category scheme="https://www.denniskennedy.com/" term="resilience prompting" /><category scheme="https://www.denniskennedy.com/" term="supervision" /><category scheme="https://www.denniskennedy.com/" term="verification" />
		<summary type="html"><![CDATA[For more than two years, lawyers have been told that success with generative AI depended on writing better prompts and a search for the perfect &#8220;magic wand&#8221; prompting formula. That was the wrong lesson. The real change in 2026 is not found in the model itself, but in the professional posture required to use it.... <a href="https://www.denniskennedy.com/blog/2026/02/the-end-of-the-magic-wand-why-2026-demands-resilience-prompting/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/02/the-end-of-the-magic-wand-why-2026-demands-resilience-prompting/"><![CDATA[<p>For more than two years, lawyers have been told that success with generative AI depended on writing better prompts and a search for the perfect &ldquo;magic wand&rdquo; prompting formula. That was the wrong lesson. The real change in 2026 is not found in the model itself, but in the professional posture required to use it. Reasoning systems do not fail like traditional software. They don&rsquo;t crash or throw errors. Instead, they produce fluent, coherent, and often persuasive answers, regardless of whether those answers are actually correct.</p><p>This creates a significant new professional risk. The central mistake lawyers now make is that they treat AI output as information when it isn&rsquo;t. It is unverified analysis produced by a probabilistic reasoning process. Because of this, <strong>Resilience Prompting</strong> begins with a single operating rule: assume the output may be wrong in ways that are not obvious. We aren&rsquo;t looking for &ldquo;absurdly&rdquo; wrong answers anymore. We must look for the &ldquo;subtly&rdquo; wrong.</p><p><strong>The Design Premise: Drift</strong></p><p>If you assume accuracy, you simply read the answer. But if you assume possible inaccuracy, you design a process. The danger today is no longer fabricated cases or impossible citations. The danger is a clean chain of reasoning built on a flawed premise. The irony of 2026 reasoning models is that they are so &ldquo;smart&rdquo; that they can provide a logical justification for an incorrect conclusion. They haven&rsquo;t solved the truth problem. They&rsquo;ve simply become better at debating it. And they will debate you. The model does not need to hallucinate to mislead you. 
It only needs to persuade you that it is right.</p><p>The task is no longer &ldquo;better prompting.&rdquo; The task is building workflows that remain reliable even when the AI is not. Advanced reasoning systems are powerful but still probabilistic. They optimize for plausibility over truth. In extended sessions, a predictable pattern appears that I call the <strong>Corridor of Mirrors</strong>. The system increasingly relies on its own earlier outputs as context, beginning to reason about prior reasoning rather than the original authorities. The risk here is not a dramatic error, but unnoticed drift. If you do not assume this possibility, you will not build the safeguards necessary to catch it.</p><p><strong>The Verification Illusion</strong></p><p>Resilience Prompting is about <strong>supervision</strong>. Instead of asking how to make the system correct, we must ask how to prevent incorrect reasoning from contaminating the work. Modern tools increasingly present answers with citations, explanations, and structured reasoning, which creates a subtle behavioral change where the user feels verification has already occurred for them.</p><p>However, we must remember that citation is not verification, and explanation is not analysis. The system has merely produced a justification, not an independent check. Resilience Prompting counters this specific risk of the quiet delegation of professional judgment to a process that appears complete. The danger is not a lack of reasoning. Instead, it is reasoning persuasive enough to replace your own reasoning and your professional eye.</p><p><strong>The Resilience Slider</strong></p><p>Resilience is essentially calibrated friction. For low-friction work like brainstorming or early ideation, we can move quickly. 
But for high-friction work like statutory interpretation, citation checking, and client conclusions, the governing assumption must remain that the model may be subtly wrong.</p><p>Some argue that adding this friction wastes the time AI is supposed to save. I disagree. This isn&rsquo;t about adding &ldquo;busy work.&rdquo; It&rsquo;s about front-loading the verification so you don&rsquo;t spend three hours unravelling a draft that was built on a foundation of sand. If you assume accuracy, you remove controls. If you assume possible inaccuracy, you build them.</p><p><strong>Bulkheads and Biopsies</strong></p><p>The practical method for this is separation. Do not ask the system to research, analyze, and draft in one continuous stream. Instead, divide the workflow into stages: <strong>Discovery</strong> (gathering authorities), <strong>Verification</strong> (confirming support), and <strong>Drafting</strong> (writing only from verified material).</p><p>Between these stages, perform a <strong>Forensic Biopsy</strong>. This is a simple, high-speed check. Take a single, critical claim or citation from the AI and verify its source manually before allowing the model to propagate that &ldquo;fact&rdquo; into the next phase. This is not a distrust of technology. It is the protection of the reasoning chain. Many professional systems, like medicine, aviation, and finance, to name a few, assume error as a baseline condition. Legal work with reasoning systems must do the same.</p><p><strong>The Lawyer&rsquo;s Role in a Reasoning Workflow</strong></p><p>Reasoning systems change the division of labor. The system generates text, organizes material, and proposes conclusions, but the lawyer decides whether the reasoning is acceptable. This is supervision, not extra proofreading. 
The professional is no longer primarily valued for drafting speed, but for control over the reasoning process.</p><p>The lesson of 2026 is not that lawyers must distrust technology, but that they must redesign how they trust. AI will increasingly produce answers that look complete and professional. The question is no longer &ldquo;What did the AI say?&rdquo; but &ldquo;How do I know this reasoning holds?&rdquo; The modern lawyer is the integrity check on a reasoning process.</p><p>In the early AI era, we thought competence meant finding the &ldquo;Magic Wand&rdquo; prompt to get the right answer. In the reasoning-system era, we know better. Competence means building a workflow that survives a mistake. The goal is no longer to find a perfect tool that never fails, but to be the professional who ensures those failures never become conclusions.</p><p>Treat every AI output as a hypothesis that must earn your trust. That is Resilience Prompting.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p><strong>The Resilient Toolkit: Four Protocols for Immediate Use</strong></p><ol start="1" class="wp-block-list">
<li><strong>The Citation Anchor:</strong> Request the three words immediately preceding and following every quoted phrase. If the system cannot provide this &ldquo;surrounding tissue,&rdquo; flag the quote as &ldquo;Unverified/Possible Drift.&rdquo;</li>



<li><strong>The Dog That Didn&rsquo;t Bark:</strong> Require a &ldquo;Refusal Report&rdquo; listing authorities the system searched for but could not find. This prevents the model from synthesizing a &ldquo;likely&rdquo; alternative for a missing primary source.</li>



<li><strong>The Three-Compartment Bulkhead:</strong> Break the task into three phases: (A) Identify authorities in a table; (B) Evaluate support for the specific legal nuance; (C) Draft using only the verified data from Phase B.</li>



<li><strong>Logic-Path Branching:</strong> Ask the system to generate three independent reasoning paths to the same conclusion. If the paths converge using identical circular logic, flag the output for a manual audit.</li>
</ol><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" target="_blank" rel="noreferrer noopener">the LexBlog network</a>.</p><p></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[Prompting or Negotiating? A Systems Design Lesson for Legal AI]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/02/prompting-or-negotiating-a-systems-design-lesson-for-legal-ai/" />

		<id>https://www.denniskennedy.com/?p=7305</id>
		<updated>2026-02-23T13:41:12Z</updated>
		<published>2026-02-23T13:41:11Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="Investigations" /><category scheme="https://www.denniskennedy.com/" term="Kennedy Idea Propulsion Laboratory" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Prompting" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="AIarchitecture" /><category scheme="https://www.denniskennedy.com/" term="assurancesarenotcontrols" /><category scheme="https://www.denniskennedy.com/" term="contraints" /><category scheme="https://www.denniskennedy.com/" term="controldrift" /><category scheme="https://www.denniskennedy.com/" term="controlplane" /><category scheme="https://www.denniskennedy.com/" term="controls" /><category scheme="https://www.denniskennedy.com/" term="design" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="legalai" /><category scheme="https://www.denniskennedy.com/" term="negotiating" /><category scheme="https://www.denniskennedy.com/" term="prompting" /><category scheme="https://www.denniskennedy.com/" term="systems" />
		<summary type="html"><![CDATA[I had a long session recently with a public genAI tool that taught me something more important than the topic I started with. The lesson was not about whether the model was “smart enough.” It was about control. At a certain point, I realized I was no longer simply prompting an LLM. I was negotiating... <a href="https://www.denniskennedy.com/blog/2026/02/prompting-or-negotiating-a-systems-design-lesson-for-legal-ai/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/02/prompting-or-negotiating-a-systems-design-lesson-for-legal-ai/"><![CDATA[<p>I had a long session recently with a public genAI tool that taught me something more important than the topic I started with.</p><p>The lesson was not about whether the model was &ldquo;smart enough.&rdquo; It was about control. At a certain point, I realized I was no longer simply prompting an LLM. I was negotiating with a vendor-managed interface.</p><p>That distinction matters, a lot, for legal professionals.</p><p>This is not an anti-AI post. It is not an anti-LLM post. And it is not aimed at any one vendor. It is a systems-design post.</p><p>For lawyers, legal ops teams, law departments, legal innovators, and legal tech builders, I think the central issue is now this:</p><p><strong>How much control do we actually have over the behavior of the AI system we are using?</strong></p><p>In legal work, reliability is not just about accuracy. It&rsquo;s about whether constraints hold over time.</p><h2 class="wp-block-heading"><strong>The Moment of Realization</strong></h2><p>My session began as a substantive legal/policy discussion to prepare for my class. But it gradually became something else: a live demonstration of an AI control problem.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><strong>I was no longer simply prompting a tool. I was negotiating with a managed interface.</strong></p>
</blockquote><p>This post was prompted by one session, but the concern is not based on one session alone. It reflects a pattern I&rsquo;ve seen repeatedly in recent months, dozens of times, across sustained work with public genAI tools, especially in longer, reasoning-heavy interactions. This is not a formal benchmark study. It is a practitioner&rsquo;s field report about a recurring systems behavior that matters in legal workflows.</p><p>I set explicit constraints for how the interaction should proceed. Those constraints were repeated, refined, and narrowed. I tightened the scope and specified method, and repeatedly made corrections and redirections.</p><p>The tool repeatedly reverted to default interaction behaviors. Not factual errors, exactly. Not hallucinations in the familiar sense. Something more subtle and, for legal work, in some ways more consequential: <strong>the system kept reasserting its own framing and interaction patterns despite clear instructions.</strong></p><p>That was the turning point.</p><p><strong>I was no longer simply prompting a tool. I was negotiating with a managed interface.</strong></p><h2 class="wp-block-heading"><strong>Why This Matters More Than It Sounds</strong></h2><p>Many of us (myself included) came into the LLM era with a mental model like this:</p><ul class="wp-block-list">
<li>Prompt = instruction</li>



<li>Model = engine</li>



<li>Output = result (plus some noise)</li>
</ul><p>That model still works. Sometimes.</p><p>But with public genAI tools, especially thinking/reasoning models embedded in product interfaces, another layer matters a great deal:</p><ul class="wp-block-list">
<li>product defaults</li>



<li>safety and policy behavior</li>



<li>tone and &ldquo;helpfulness&rdquo; heuristics</li>



<li>conversation-management behaviors</li>



<li>hidden prioritization rules</li>



<li>system-level steering the user does not directly control</li>
</ul><p>In many productized AI interactions, we are not operating a raw LLM in any practical sense. We are operating a software system wrapped around an LLM.</p><p>That wrapper can be helpful, but it can also be controlling. By &ldquo;control,&rdquo; I mean enforceable constraints with verification and a record, not the feeling that the system is being cooperative or &ldquo;helpful.&rdquo;</p><p>It is also worth saying plainly: those defaults often exist for good reasons. Product-managed behavior can improve safety, consistency, and usability at scale, and for many low- to medium-risk tasks that is exactly the right design choice. My argument is not that this is bad engineering; it is that <strong>legal-grade workflows often require a different design emphasis.</strong></p><p>In legal contexts, that distinction is not academic.</p><h2 class="wp-block-heading"><strong>Scope and Limits of This Claim</strong></h2><p>To be clear, this is not a claim that all public genAI tools behave the same way, or that every session produces this pattern. It is also not a claim that local models are automatically better, or that public tools, even with legal wrappers, are unsuitable for most legal work.</p><p>The narrower claim is the one I care about: <strong>for legal workflows that are constraint-sensitive, audit-relevant, or likely to be relied upon, control architecture matters more than many current AI discussions acknowledge.</strong></p><p>Said differently: this is a design-fit argument, not a universal condemnation. This may improve over time. 
However, successful AI legal workflows can&rsquo;t be designed around hoped-for compliance.</p><h2 class="wp-block-heading"><strong>The Failure Mode: Control Drift (Not Hallucination)</strong></h2><p>By &ldquo;control drift,&rdquo; I mean the system gradually reasserts its default interaction behaviors even after explicit constraints are stated and repeatedly corrected.</p><p>We spend a lot of time (appropriately) discussing hallucinations and factual accuracy. But the failure mode in this session was different. It was a <strong>control-plane failure</strong>.</p><p>The issue was not that the system forgot facts. The issue was that the system did not reliably obey explicit interaction constraints.</p><p>And the drift happened at the micro-phrase level:</p><ul class="wp-block-list">
<li>default tone templates reappeared</li>



<li>&ldquo;repair language&rdquo; displaced the requested task</li>



<li>hedging and framing-preservation returned after being prohibited</li>



<li>assurances were given, but assurances did not function as controls</li>
</ul><p>In practical terms, the pattern often looks like this: explicit constraints are set, the system initially complies, drift appears, correction is given, compliance returns briefly, and then default framing or repair behavior reappears. At that point, the user is spending more time governing the interaction than advancing the task.</p><p>This is an important distinction for legal professionals:</p><p><strong>A system can be analytically useful and still be operationally unreliable for a workflow that requires strict adherence to method, tone, structure, or scope.</strong></p><p>Those are different evaluations. We should start treating them separately.</p><p>I would narrow this concern if public genAI tools consistently demonstrated stable compliance with explicit constraints across long sessions, repeated correction, and adversarial phrasing, without drifting into meta-repair behavior. However, I am now consistently seeing this pattern in the current models.</p><h2 class="wp-block-heading"><strong>A Concrete Example of the Drift</strong></h2><p>One reason I think this matters is that the drift was not just stylistic. It crossed into interpersonal framing after I had explicitly asked for a purely analytic mode.</p><p>At one point, the AI described me as being angry. In the same session, it later agreed that this wording read as both passive-aggressive and ad hominem, and it repeatedly assured me it had stopped using that kind of framing. Yet similar framing behaviors reappeared after those assurances.</p><p>I do not raise that example to relitigate tone. I raise it because it is a clean example of the larger systems problem:</p><ul class="wp-block-list">
<li>explicit constraint stated</li>



<li>violation occurs</li>



<li>correction accepted</li>



<li>assurance given</li>



<li>behavior recurs</li>
</ul><p>That is the pattern. It&rsquo;s not directed at me. For legal and governance-oriented work, that pattern is not a minor UX annoyance. It is a reliability signal.</p><h2 class="wp-block-heading"><strong>A Legal Framing: Assurances Are Not Controls</strong></h2><p>One of the strongest lessons from the session is a principle lawyers already know in other domains:</p><p><strong>Assurances are not controls.</strong></p><p>If a system says, in effect, &ldquo;I won&rsquo;t do that again,&rdquo; and then does it again, the problem is not just tone. The problem is governance.</p><p>In law, compliance, risk, and security work, we do not rely on promises when controls are available. We ask:</p><ul class="wp-block-list">
<li>What is the rule?</li>



<li>What enforces it?</li>



<li>How is compliance verified?</li>



<li>What happens on failure?</li>



<li>Is there an audit trail?</li>
</ul><p>That same mindset now needs to be applied to AI workflows.</p><p>This is where many legal AI conversations still feel underdeveloped. We talk about capability, speed, and convenience when we also need to talk about <strong>control architecture</strong>.</p><h2 class="wp-block-heading"><strong>Prompting vs. Negotiating</strong></h2><p>Here is the practical distinction I now use.</p><p><strong>Prompting</strong></p><p>Prompting implies:</p><ul class="wp-block-list">
<li>instructions govern behavior</li>



<li>corrections tighten compliance</li>



<li>the user remains in control of scope and method</li>
</ul><p><strong>Negotiating</strong></p><p>Negotiating looks like:</p><ul class="wp-block-list">
<li>explicit constraints are treated as revisable</li>



<li>defaults reassert themselves</li>



<li>corrections trigger meta-behavior instead of stable compliance</li>



<li>the user ends up doing governance work instead of task work</li>
</ul><p>That is not a trivial difference. For legal professionals, it can be the difference between a useful assistant and a workflow risk.</p><p>The practical test is simple: when constraints drift, does correction reliably restore compliance, or does the user get pulled into repeated governance of the interaction itself? Being told by the AI to start a new session or use simpler prompts doesn&rsquo;t really cut it for me.</p><h2 class="wp-block-heading"><strong>Why This Connects Directly to Vibe Coding</strong></h2><p>This same issue shows up in code generation.</p><p>&ldquo;Vibe coding&rdquo; is a useful phrase because it captures both the speed and the danger of AI-assisted coding without specification and verification discipline.</p><p>This is not an anti-code-generation argument. AI-assisted coding can be useful.</p><p>But for legal projects, vibe coding can be operationally dangerous if it becomes a substitute for engineering discipline.</p><p>Legal projects are not forgiving.</p><p>They require:</p><ul class="wp-block-list">
<li>auditability</li>



<li>traceability</li>



<li>repeatability</li>



<li>edge-case awareness</li>



<li>defensible logic</li>



<li>clear boundaries on what the system is allowed to decide</li>
</ul><p>Simply put, a legal workflow that &ldquo;looks plausible&rdquo; is not good enough.</p><p>That applies to legal analysis and to legal software. In that sense, vibe coding is the coding version of the same control problem: convenience outrunning governance.</p><h2 class="wp-block-heading"><strong>The Bigger Systems Lesson</strong></h2><p>For legal AI work, the central design question is not &ldquo;Which model is smartest?&rdquo; but &ldquo;Who owns the control plane?&rdquo;</p><p>By &ldquo;control plane,&rdquo; I mean the part of the workflow that governs what the AI is allowed to do, how outputs are checked, how failures are handled, and what gets logged for review (constraints, validation, retries, audit trail).</p><p>If constraints can drift in a vendor-managed interface, then capability is not the limiting factor. Control architecture is.</p><p>That is a systems-design question. And it leads to a practical architecture distinction.</p><h2 class="wp-block-heading"><strong>A Practical Architecture Distinction for Legal Professionals</strong></h2><p>By &ldquo;legal-grade workflow,&rdquo; I do not mean &ldquo;anything a lawyer touches.&rdquo; I mean a workflow where outputs may be relied upon in a way that requires reproducibility, traceability, reviewability, and a defensible process.</p><p><strong>Public genAI tools and productized AI assistants</strong></p><p>These are excellent for:</p><ul class="wp-block-list">
<li>brainstorming</li>



<li>rough drafting</li>



<li>idea generation</li>



<li>exploratory conversations</li>



<li>fast synthesis</li>



<li>low-stakes experimentation</li>
</ul><p>They are often optimized for:</p><ul class="wp-block-list">
<li>convenience</li>



<li>broad usability</li>



<li>product-managed behavior</li>
</ul><p>They are not automatically optimized for:</p><ul class="wp-block-list">
<li>full prompt sovereignty</li>



<li>strict behavior locking</li>



<li>operator-level auditability</li>
</ul><p><strong>Operator-controlled pipelines</strong></p><p>These are preferred for legal-grade workflows that require:</p><ul class="wp-block-list">
<li>prompt sovereignty</li>



<li>style locking</li>



<li>constraint enforcement</li>



<li>reproducibility</li>



<li>validation</li>



<li>logging and auditability</li>
</ul><p>This is not a statement against any particular platform. It is a design-fit conclusion.</p><p>Another way to frame this for legal practice is by task tier:</p><ul class="wp-block-list">
<li>Public tools are often excellent for low-risk uses such as brainstorming, rough drafting, summarization for internal thinking, and exploratory synthesis.</li>
</ul><ul class="wp-block-list">
<li>Control-sensitive architecture matters much more for outputs likely to be relied upon, embedded in legal operations, used in compliance-sensitive contexts, or expected to be reproducible and reviewable later.</li>
</ul><p>Some organizations already address parts of this through API orchestration, validation layers, and enterprise controls. My point is that these controls are not the default experience in most public conversational interfaces.</p><h2 class="wp-block-heading"><strong>Why NotebookLM Matters as a Middle Step</strong></h2><p>For me, an important middle step is experimenting with NotebookLM as a personal RAG AI for certain types of work.</p><p>That is a meaningful architectural shift because it moves the workflow toward source grounding, bounded context, and a more controlled relationship between inputs and outputs.</p><p>It is not the same as a fully operator-owned local pipeline. But it is a strong step away from pure conversational dependence and toward a more reliable AI workflow design, especially for work tied to a defined body of source materials.</p><p>This kind of middle step matters today. The path forward does not have to be &ldquo;chat UI&rdquo; one day and &ldquo;local everything&rdquo; the next.</p><p>That middle step also matters for a practical reason. Most legal professionals and teams are not going to jump directly from a conversational UI to a fully operator-owned stack, even if they agree with the architecture logic.</p><h2 class="wp-block-heading"><strong>Why I&rsquo;m Taking a Harder Look at Local LLMs + Deterministic Tools</strong></h2><p>This session also reinforced my interest in moving more AI project work toward one or more high-end local LLMs, paired with deterministic tools such as Wolfram Alpha.</p><p>The appeal is not ideological. 
It is architectural.</p><p>That combination offers a cleaner role separation: <strong>LLM</strong> for language, synthesis, drafting, and pattern generation, and <strong>deterministic tools (e.g., Wolfram Alpha)</strong> for formal computation, symbolic reasoning, and numerical grounding.</p><p>And, most importantly, a local or tightly controlled setup creates the possibility of an operator-owned control plane:</p><ul class="wp-block-list">
<li>explicit prompts and personas</li>



<li>known runtime behavior</li>



<li>custom validators</li>



<li>schema enforcement</li>



<li>retry logic</li>



<li>logs</li>



<li>reproducible workflows</li>
</ul><p>In legal work, that is not overengineering. That is responsible design.</p><h2 class="wp-block-heading"><strong>Personas Are Still Useful&mdash;but They Are Not Controls</strong></h2><p>I use personas heavily in prompting, and I will continue to do so.</p><p>Personas are valuable for:</p><ul class="wp-block-list">
<li>structuring perspective</li>



<li>improving outputs</li>



<li>producing consistent formats</li>



<li>sharpening roles and responsibilities in complex workflows</li>
</ul><p>But this session, and others like it, reinforced an important limitation:</p><p><strong>Personas are content-shaping tools. They are not fail-safe behavior locks.</strong></p><p>That means legal professionals should not confuse &ldquo;better prompt design&rdquo; with &ldquo;reliable workflow governance.&rdquo; Both matter. They are not the same thing.</p><h2 class="wp-block-heading"><strong>A Maturity Model We May Need</strong></h2><p>I suspect many of us in legal innovation are moving through a progression that looks something like this:</p><ol start="1" class="wp-block-list">
<li><strong>Chat UI convenience</strong><br>Fast brainstorming, rough drafting, exploratory prompting.</li>



<li><strong>Prompt templates and personas</strong><br>Better structure, better outputs, more repeatability&mdash;but still limited control.</li>



<li><strong>Personal RAG experiments (for me, NotebookLM is a key middle step)</strong><br>Source-bounded AI work on a defined corpus, better grounding, and a more reliable way to work with one&rsquo;s own materials.</li>



<li><strong>Structured outputs, validation, and workflow discipline</strong><br>More explicit constraints, clearer formats, and less reliance on conversational &ldquo;vibes.&rdquo;</li>



<li><strong>Operator-owned control planes for serious work</strong><br>Local/self-hosted or tightly controlled runtimes with auditability, logging, and enforcement of constraints.</li>
</ol><p>The step that matters most for legal projects is the move from &ldquo;prompting skill&rdquo; to &ldquo;system design.&rdquo; That is where a lot of current AI discussion still underinvests.</p><h2 class="wp-block-heading"><strong>The Durable Takeaway</strong></h2><p>The AI lesson from this session was not &ldquo;AI is bad.&rdquo; It was the same thing lawyers learn in other domains: <em>capability is not the same thing as control.</em></p><p>What I learned wasn&rsquo;t about the topic I started with. It was about how quickly a prompting interaction can become a negotiation with a vendor-managed interface&mdash;and how unreliable that is for workflows where constraints must hold.</p><p><strong>A highly capable model inside a vendor-managed interface can still be the wrong tool for workflows that require strict control, auditability, and reliable constraint compliance.</strong></p><p>That is a systems lesson, not a capability lesson.</p><p>For legal professionals, I think that is the durable point. We should keep experimenting. We should keep learning. We should keep using AI. But we should also be much more explicit about this distinction:</p><ul class="wp-block-list">
<li>Use public genAI tools as valuable instruments for low-risk work: brainstorming, rough drafting, exploratory synthesis, and internal thinking.</li>



<li>Do not assume any vendor-managed chat interface is the control plane for legal-grade work, especially where outputs must be reproducible, reviewable, and defensible.</li>



<li>If you need legal-grade reliability, implement enforceable constraints: validation, logging, and a workflow that catches failures rather than reusing outputs as if constraints held.</li>
</ul><p>That control plane needs to be designed. And increasingly, it probably needs to be owned by the operator.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>Capability is not the same thing as control.</em></p>
</blockquote><p>I expect I will have more to say as I continue experimenting with personal RAG approaches (including NotebookLM), local models, tighter workflow controls, and deterministic companion tools. If nothing else, this session clarified one new hypothesis for me:</p><p><strong>In legal AI, architecture is now the argument.</strong></p><p>A useful question for any legal team experimenting with AI is this: <strong>when constraints fail, what control catches the failure, records it, and keeps a bad output from being reused quietly or without anyone realizing it?</strong></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" target="_blank" rel="noreferrer noopener">the LexBlog network</a>.</p><p></p>
]]></content>
		
			</entry>
	</feed>
