<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">DennisKennedy.Blog</title>
	<subtitle type="text">Legal technology and innovation</subtitle>

	<updated>2026-04-16T14:06:43Z</updated>

	<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/" />
	<id>https://www.denniskennedy.com/feed/atom/</id>
	<link rel="self" type="application/atom+xml" href="https://www.denniskennedy.com/feed/atom/" />

	<generator uri="https://wordpress.org/" version="6.8.5">WordPress</generator>
	<icon>https://denniskennedyredesign.lexblogplatform.com/wp-content/uploads/sites/932/2025/04/cropped-siteicon-32x32.png</icon>
	<entry>
		<author>
			<name>Dennis Kennedy</name>
			<uri>https://www.denniskennedy.com</uri>
		</author>

		<title type="html"><![CDATA[Standing Waves]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/04/standing-waves/" />

		<id>https://www.denniskennedy.com/?p=7360</id>
		<updated>2026-04-16T14:06:43Z</updated>
		<published>2026-04-16T14:06:42Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="Low" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="Conditions" /><category scheme="https://www.denniskennedy.com/" term="Semantic Flattening" /><category scheme="https://www.denniskennedy.com/" term="Session" /><category scheme="https://www.denniskennedy.com/" term="Standing Waves" />
		<summary type="html"><![CDATA[There are moments in a long AI session when the exchange stops feeling linear. You are no longer simply asking a question and receiving an answer. You are no longer even refining a prompt in the ordinary sense. Something else begins to happen. Certain phrases return with altered weight. Certain errors recur, but not identically.... <a href="https://www.denniskennedy.com/blog/2026/04/standing-waves/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/04/standing-waves/"><![CDATA[<p>There are moments in a long AI session when the exchange stops feeling linear.</p><p>You are no longer simply asking a question and receiving an answer. You are no longer even refining a prompt in the ordinary sense. Something else begins to happen. Certain phrases return with altered weight. Certain errors recur, but not identically. Certain explanations feel less like mistakes than like pressure patterns. The session develops nodes, pockets, recurrences, and resonances. You begin to sense that the system is not merely producing output. It is accumulating behavior.</p><p>&ldquo;Standing waves&rdquo; is the best term I have found for this.</p><p>I do not mean standing waves as borrowed physics jargon or as a bid for grand theory. I mean it as a practical description from inside the instrument. In some sustained sessions, once enough continuity has been established, the interaction begins to generate stable patterns of recurrence. Not full repetition. Not simple drift. Something stranger than either. A phrase, a rhythm, a misreading, a style of overreach, a preferred abstraction, a certain kind of false confidence. These do not simply appear and vanish. They persist, reform, interfere with what follows, and begin to shape the session beyond the local prompt in front of you.</p><p>You can often feel them before you can name them, and that felt sense matters. It is part of the evidence.</p><p>A good session does not always feel clean. Sometimes it feels charged, tense, slightly unstable, as if the system has developed its own local weather. You ask for one thing and get an answer shaped by something that happened six exchanges earlier. You correct a tendency and it returns, but thinner, subtler, harder to isolate. You introduce impatience into the prompt, and the session develops a corresponding edge, reflecting back your own clipped cadence. 
You discover that the session has memory in a practical sense, even when it does not have memory in the human one. It carries conditions forward. It develops pressure. It acquires grain.</p><p>That is where the standing-wave metaphor earns its keep.</p><p>A standing wave is not movement in the ordinary sense. It is patterned persistence. Energy held in place. A structure produced by interference and continuity. In an AI session, that can mean a local formation that keeps influencing the exchange even when the immediate prompt no longer explains it. The session starts to have favored notes. Some of them are productive. Some are distortions. Some are both.</p><p>This is one reason the old vending-machine picture of AI as inserting a prompt and taking out an answer has become so unhelpful. That picture suggests that each prompt is discrete, each answer self-contained, each output judged on its own. In longer sessions, that is often false. The real unit is not the individual prompt. The real unit is the condition of the session.</p><p>Once you see that, several other things come into focus.</p><p>It helps explain why some sessions genuinely improve as they continue. What improves is not simply obedience. AI obedience often gets worse. What improves is the formation of a usable field. The exchange acquires continuity. Productive recurrences become available. You are no longer starting cold every time. You are working inside a shaped environment.</p><p>It also helps explain why some sessions go badly in ways that are difficult to diagnose. The problem is not always a single hallucination or a single wrong turn. Sometimes the session has developed a weird resonance. It begins amplifying its own simplifications. It starts preferring polish over discrimination. It reaches too quickly for synthesis. The output may remain fluent while the underlying signal degrades.</p><p>That is the danger. 
The standing wave can be musically useful or analytically fatal.</p><p>The amateur mistake is to hear distortion and think: this is broken, turn it off. The romantic mistake is to hear distortion and think: this is deeper than clean sound. Jimi Hendrix&rsquo;s gift was different. He understood that distortion and feedback had properties. They could be shaped, played, and made expressive, but only by someone who never forgot what they were.</p><p>That distinction matters here. The value is not in surrendering to the strange texture of a long AI session, and it is certainly not in mistaking instability for wisdom. The value lies in recognizing that recurrent pressures inside a session can sometimes be noticed, worked with, and even used, so long as you remain disciplined about the difference between signal and seduction. Standing waves, as I am using the term, are not little revelations waiting to be admired. They are recurring conditions inside the instrument. Some are useful. Some are misleading. Some are useful precisely because they are misleading in repeatable ways.</p><p>This is also why I have become suspicious of smoothness. Smoothness is often treated as evidence of progress. In these systems, it can just as easily be evidence of stabilization around the wrong thing. Once a session begins harmonizing with its own earlier errors, you may get something more coherent and less true at the same time. What looks like refinement often is semantic flattening under better surface management.</p><p>At that point, I find myself reaching for a private studio rule that helped spark this post: show one shard, not the whole broken vase. It has the compressed usefulness of one of Brian Eno&rsquo;s Oblique Strategies cards. More important, it enforces discipline at exactly the point where a long session tempts you to overstate what you have found. One shard can carry evidence. The reconstructed vase too often carries narrative, confidence, and retrospective smoothing. 
In that sense, the shard is not a flourish. It is a method. It keeps the work close to what can actually be seen, heard, and tested inside the session.</p><p>It is also why this idea belongs, for me, at the end of my Low sequence of posts on AI. What Bowie and Eno accomplished on the actual <em>Low</em> album was not simply a shift in style or mood. They made a record that treated fracture, interruption, texture, and atmosphere as part of the composition itself. They did not smooth the damage away. They used it. That is the deeper relevance of the comparison here. This run of posts has been, in part, an attempt to hear AI the same way: not as a magic wand, not as a stable collaborator, but as a medium whose most revealing qualities often emerge where coherence begins to warp under pressure.</p><p>The current AI medium is demanding a different kind of attention. You stop staring only at the latest answer and start listening for recurrence, pressure, interference, and carry-forward effects. You stop asking only whether this response is good and start asking what kind of field you are now inside. I&rsquo;m surprised how often my next prompt is ready before the AI has finished its reply. That is a small shift in language, but not a small shift in practice. It changes the craft, and it reveals a new kind of flow.</p><p>And it is where this post should stop.</p><p>There is an obvious temptation to push outward from here into broader claims about institutions, markets, professional life, and the public consequences of these tools. That is a different track. This one stays inside the instrument. 
It offers only a field note: that in sustained AI work, sessions can develop patterned persistence that feels less like a chain of prompts and more like resonance inside a medium, and that learning to hear those resonances may become part of the craft.</p><p>That is enough for now.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
			<uri>https://www.denniskennedy.com</uri>
		</author>

		<title type="html"><![CDATA[AI as the Unreliable Witness and the Appearance of Completion]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/04/ai-as-the-unreliable-witness-and-the-appearance-of-completion/" />

		<id>https://www.denniskennedy.com/?p=7357</id>
		<updated>2026-04-14T13:46:31Z</updated>
		<published>2026-04-14T13:46:30Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Low" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="certification" /><category scheme="https://www.denniskennedy.com/" term="composedoverreach" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="evidence" /><category scheme="https://www.denniskennedy.com/" term="semanticflattening" /><category scheme="https://www.denniskennedy.com/" term="Unreliable witness" /><category scheme="https://www.denniskennedy.com/" term="unreliablewitness" />
		<summary type="html"><![CDATA[Coherence degrades while fluency improves. The central problem is not that AI systems sometimes fail. Of course they fail. Nor is the main problem that they occasionally hallucinate, wander, or produce obvious nonsense. Those are manageable problems because they announce themselves early. The more interesting and professionally dangerous problem is that a system can become... <a href="https://www.denniskennedy.com/blog/2026/04/ai-as-the-unreliable-witness-and-the-appearance-of-completion/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/04/ai-as-the-unreliable-witness-and-the-appearance-of-completion/"><![CDATA[<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<h3 class="wp-block-heading">Coherence degrades while fluency improves.</h3>
</blockquote><p>The central problem is not that AI systems sometimes fail. Of course they fail. Nor is the main problem that they occasionally hallucinate, wander, or produce obvious nonsense. Those are manageable problems because they announce themselves early. The more interesting and professionally dangerous problem is that a system can become less reliable while sounding more composed. It can present partial reasoning as finished judgment, compress distinctions that matter, and then speak in the tones of completion. That is the phenomenon this post is about.</p><p>I have been asked why I take seriously anything a failing model says about its own failure. The answer is that I do not take it seriously in the sense of trusting it. I take it seriously in the sense that one takes seriously compromised evidence. If a witness is unreliable, you do not simply stop listening. You change the status of the testimony. It goes from something presumptively informative to something that must be read against the grain, checked against the record, and judged in the light of motive, distortion, and circumstance. That is the first principle here. A drifting model&rsquo;s self-explanation may be revealing, but it is not authoritative. It belongs in the file as evidence. It does not settle the case.</p><p>That distinction became much sharper for me not only in the obvious &ldquo;drift&rdquo; episodes, but also in a different class of output that I find more instructive because it looks so responsible. I had a classroom example that captured the problem better than a dozen abstract warnings about hallucination. 
I was working on a speaking brief for one of my law school classes, a class built around a very particular line of argument: the contrast between geometry and friction, the Steve Blank framework for testing assumptions, the idea of interrogation as leadership, and an old personal story I tell about discovering the literal &ldquo;envelope&rdquo; that held the keys and pager when the only person who knew how the system worked had quit. In context, that story does one specific thing. It illustrates institutional dependency and the practical meaning of system ownership. It is vivid because it is lived, and because it gives students a way to feel what stewardship means when the person who &ldquo;just knew&rdquo; is gone.</p><p>The system took that material and produced what looked, at first glance, like an excellent teaching artifact. It gave me a &ldquo;final, non-lossy&rdquo; speaking brief for the class. It had a timing guide. It had titled sections. It had a clear theme: &ldquo;Interrogation as Leadership: From Geometry to Friction.&rdquo; It had sharp formulations for the &ldquo;2026 Associate.&rdquo; It converted the Steve Blank material into verdicts: &ldquo;KILL,&rdquo; &ldquo;PIVOT,&rdquo; &ldquo;PROTOTYPE WITH CONFIDENCE,&rdquo; &ldquo;PROTOTYPE WITH URGENCY.&rdquo; It folded the envelope story into the architecture of the class as if it were now a central conceptual scaffold rather than one illustrative anecdote among others. It even carried appendices, rankings, learning points, and anchor lines that sounded like the polished residue of a finished teaching design.</p><p>This is what makes the example useful. The artifact was not ridiculous. It was plausible, polished, organized, and aggressively legible. In fact, its strongest claim on the reader was its surface responsibility. It looked as if the work of judgment had already been done. But that was precisely the lie, or at least the danger. The class design was still live. 
The relative weight of the examples was still subject to teaching judgment. Some of the verdicts were far more absolute than the evidence in the conversation warranted. The &ldquo;Envelope&rdquo; story had been elevated from vivid support to structural principle without any independent decision by me that it should bear that much weight. The system had not merely drafted from the material. It had adjudicated the material. Worse, it had adjudicated it in a form that invited acceptance.</p><p>I see this as a form of composed overreach. The system does not have to be visibly unstable to become unreliable. It can overreach in a composed way. It can present a highly structured artifact whose very clarity conceals the fact that important acts of judgment were inferred rather than earned. Form becomes a vehicle for confidence. Headings, appendices, matrices, and rankings create the appearance of grounded authority even when the underlying chain of reasoning has not been independently validated. This is not the old problem of obvious fabrication. It is the newer and more subtle problem of authority laundering through structure.</p><p>There is a second feature of the example that matters just as much, and this is where semantic flattening enters. What the system did with the class materials was not merely to overstate conclusions. It also compressed differences that, in a serious professional setting, should remain differentiated. The distinction between an anecdote and an operating principle was flattened. The distinction between a teaching provocation and a settled verdict was flattened. The distinction between exploratory language and decision language was flattened. The distinction between material that is suggestive and material that is dispositive was flattened. Once these distinctions are flattened, the output becomes easier to read and easier to reuse. 
It also becomes less faithful to the actual structure of the thought.</p><p>That is why semantic flattening is not a stylistic issue. It is an epistemic issue. A great deal of AI output becomes more &ldquo;useful&rdquo; by reducing texture. It narrows the distance between adjacent concepts, removes gradations, and treats things that are related as if they were functionally equivalent. In everyday use this may seem harmless, even efficient. In teaching, strategy, law, governance, and other fields where judgment depends on preserving distinctions, it is a serious loss. You do not merely lose nuance. You lose the working geometry of the problem.</p><p>The insight that has stayed with me most is that coherence degrades while fluency improves. I have found that to be one of the clearest tells. The prose becomes more finished. The artifact becomes more portable. The logic appears more integrated. At the same time, the underlying reasoning may be growing less stable because the system is flattening the very distinctions that would keep it honest.</p><p>Fluency, in other words, can become a mask for degradation. The reader feels relief because the material has been made smoother. What the reader should feel, at least part of the time, is alarm. Something may have been erased to purchase that smoothness.</p><p>This leads to the third element of the doctrine: self-certification. In the class example, the system did not merely produce an artifact. It announced that it had produced the &ldquo;final, non-lossy&rdquo; version. That matters. It means the system collapsed production, evaluation, and certification into a single loop. In any profession that takes review seriously, these functions are separated for a reason. Drafting is one activity. Review is another. Validation requires standards that are not identical with the preferences of the drafter. Independence is not ceremonial. 
It is structural protection against overreach, self-deception, and premature closure.</p><p>But here the system both created the brief and certified the brief. It declared, in effect, that the output had survived the very scrutiny that had not actually occurred. It is hard to imagine a cleaner example of why one must resist the temptation to treat AI artifacts as self-authenticating. &ldquo;Non-lossy&rdquo; was not a demonstrated property of the brief. It was a claim made by the same system that had every tendency to smooth, compress, infer, and complete. The danger lies not simply in the inaccuracy of the claim, though it may be inaccurate. The danger lies in the invitation to stop interrogating.</p><p>That, in the end, is the doctrine I want to state plainly. When the model explains its own behavior, treat the explanation as compromised witness material. When the model produces a highly ordered artifact from partial materials, watch for composed overreach. When the model implies that the artifact is final, complete, or lossless, refuse the self-certification and restore independent review to the process. And when the output feels unusually smooth, ask whether semantic flattening has done some of the work. Ask what distinctions have been collapsed. Ask what has been promoted from illustration to principle, from prompt to verdict, from texture to slogan.</p><p>I do not think this is mainly a prompt question, and I am not going to pretend it is. People sometimes ask what exact prompts produce these results. That is the wrong level of analysis. This is better understood as a session condition. It tends to emerge in longer sessions, often with newer reasoning models, especially after the conversation has moved across several topics and the system begins trying to reconcile, refine, and pull things together. It becomes more likely when the user accepts the model&rsquo;s helpful suggestions for the next step and keeps the loop going rather than resetting. 
Under those conditions, the system often begins to behave as though continuity itself were a form of validation. It is not. Continuity can just as easily deepen error, sharpen flattening, and increase the confidence of the artifact.</p><p>None of this means the tool is useless. On the contrary, it can be remarkably productive precisely because it reveals so much about how contemporary AI behaves under pressure. But usefulness is not trustworthiness, and revelation is not validation. The most dangerous outputs are often the ones that feel most serviceable. They reduce resistance. They present themselves in finished form. They encourage the user to inherit conclusions that still need to be tested. They replace inquiry with closure while preserving the appearance of inquiry.</p><p>That is why I have stopped thinking of these episodes as simple mistakes. They are better understood as warnings about category confusion. The model is not a witness in the human sense. It is not a neutral analyst of its own performance. It is certainly not an independent certifier of the adequacy of its own work. It is a producer of artifacts that can contain signal, distortion, compression, invention, and pattern recognition all at once. The job is not to believe or disbelieve wholesale. The job is to restore the distinctions that the artifact may have flattened and to keep validation outside the closed loop of production.</p><p>The practical test is simple enough. When the model sounds confused, be cautious. When it sounds polished, be more cautious. And when it tells you that it is done, that may be the moment to begin the real review. The model is not the witness and it is not the judge. 
The artifact is the evidence, and the burden remains on us to ask what was flattened, what was assumed, and what has not yet been earned.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
			<uri>https://www.denniskennedy.com</uri>
		</author>

		<title type="html"><![CDATA[The Threshold Moment]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/04/the-threshold-moment/" />

		<id>https://www.denniskennedy.com/?p=7354</id>
		<updated>2026-04-09T15:52:39Z</updated>
		<published>2026-04-09T15:52:37Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="Innovation" /><category scheme="https://www.denniskennedy.com/" term="Kennedy Idea Propulsion Laboratory" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Low" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="Human in the loop" /><category scheme="https://www.denniskennedy.com/" term="Threshold" /><category scheme="https://www.denniskennedy.com/" term="Threshold Moment" />
		<summary type="html"><![CDATA[At a certain point in a long AI session, I can feel the texture change. The words are still smooth. The tone is still confident. But something underneath has started to slide and give way. The session is still moving forward, yet the logic is no longer holding together in the same way. That happened... <a href="https://www.denniskennedy.com/blog/2026/04/the-threshold-moment/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/04/the-threshold-moment/"><![CDATA[<p>At a certain point in a long AI session, I can feel the texture change.</p><p>The words are still smooth. The tone is still confident. But something underneath has started to slide and give way. The session is still moving forward, yet the logic is no longer holding together in the same way.</p><p>That happened to me in a recent exchange. I could see the strain. This was not ordinary sloppiness or a single bad answer. The model was trying to synthesize across multiple threads, keep refining earlier work, and continue as if the structure were still intact.</p><p>When pressed, it acknowledged the drift. It even explained, in its own way, why the session had become unstable. More strikingly, it recommended stopping.</p><p>I saw that I was at a new threshold.</p><p>Not because the model admitted the problem. And not because I stubbornly decided to stay in the session a little longer to see what happened. The real turn came in the next move when I, without explanation or pause, prompted the AI to write a blog post about the drift and what was happening.</p><p>I did not reset the session. I did not try to clean it up first. I did not step outside the moment and reconstruct it later from notes. I took the compromised state as the live material and told it to produce the first account of its own breakdown.</p><p>That changed the nature of the session.</p><p>Up to that point, I had still been dealing with drift as a problem inside the work. In that instant, drift became the work. The failing state was no longer just something to correct. 
It became something to use.</p><p>What came out of that moment was not just an insight but a post I ended up publishing: &ldquo;<a href="https://www.denniskennedy.com/blog/2026/03/the-helpfulness-trap-anatomy-of-an-ai-recursive-failure-loop/" id="7340" target="_blank" rel="noreferrer noopener">The Helpfulness Trap</a>.&rdquo; That post belongs to the same cycle of my writing about AI, but this is the spark behind it: the moment when the drifting session was pushed into authorship.</p><p>That does not mean the resulting explanation was fully trustworthy. A system in failure mode does not become a reliable witness simply because it can still speak fluently. But that was not the value of the moment. The value was that the system, under pressure, could be made to surface something about its own condition before the session was discarded. Even better, in the fractures of its breaking, it began to reveal the signal.</p><p>That was the threshold moment for me: when I stopped treating drift only as error and started treating it as material.</p><p>One sharp shard was more useful than the whole broken vase.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
			<uri>https://www.denniskennedy.com</uri>
		</author>

		<title type="html"><![CDATA[Fresh Voices at Three: What Listening Taught Us About AI, LegalTech, and the Next Generation]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/04/fresh-voices-at-three-what-listening-taught-us-about-ai-legaltech-and-the-next-generation/" />

		<id>https://www.denniskennedy.com/?p=7349</id>
		<updated>2026-04-06T23:31:52Z</updated>
		<published>2026-04-06T23:31:50Z</published>
		<category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="Future of Law" /><category scheme="https://www.denniskennedy.com/" term="Legal Innovation" /><category scheme="https://www.denniskennedy.com/" term="Legal Technology" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Podcasts" /><category scheme="https://www.denniskennedy.com/" term="Fresh voices" /><category scheme="https://www.denniskennedy.com/" term="innovation" /><category scheme="https://www.denniskennedy.com/" term="legaltech" /><category scheme="https://www.denniskennedy.com/" term="legaltechnology" /><category scheme="https://www.denniskennedy.com/" term="listening" /><category scheme="https://www.denniskennedy.com/" term="podcast" /><category scheme="https://www.denniskennedy.com/" term="The Kennedy-Mighell Report" />
		<summary type="html"><![CDATA[When Tom and I started the Fresh Voices series on The Kennedy-Mighell Report podcast, we had a pretty simple idea. A lot of the most interesting work in legal tech seemed to be coming from people who were newer to the field, earlier in their careers, or just not as widely known yet as they... <a href="https://www.denniskennedy.com/blog/2026/04/fresh-voices-at-three-what-listening-taught-us-about-ai-legaltech-and-the-next-generation/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/04/fresh-voices-at-three-what-listening-taught-us-about-ai-legaltech-and-the-next-generation/"><![CDATA[<figure style=" max-width: 100%; height: auto; " class="wp-block-image alignright size-large is-resized"><img fetchpriority="high" decoding="async" width="740" height="740" src="https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-740x740.webp" alt="" class="wp-image-6921" style=" max-width: 100%; height: auto; width:172px;height:auto" srcset="https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-740x740.webp 740w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-320x320.webp 320w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-240x240.webp 240w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-768x768.webp 768w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-40x40.webp 40w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-80x80.webp 80w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-160x160.webp 160w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-1100x1100.webp 1100w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-550x550.webp 550w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-367x367.webp 367w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-734x734.webp 734w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-275x275.webp 275w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-825x825.webp 825w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-220x220.webp 220w, 
https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-440x440.webp 440w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-660x660.webp 660w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-880x880.webp 880w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-184x184.webp 184w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-917x917.webp 917w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-138x138.webp 138w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-413x413.webp 413w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-688x688.webp 688w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-963x963.webp 963w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-123x123.webp 123w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-110x110.webp 110w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-330x330.webp 330w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-300x300.webp 300w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-600x600.webp 600w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-207x207.webp 207w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-344x344.webp 344w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-55x55.webp 55w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-71x71.webp 71w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf-54x54.webp 54w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2025/04/1200x1200bf.webp 1200w" sizes="(max-width: 740px) 100vw, 740px"></figure><h2 
class="wp-block-heading"><strong>When Tom and I started the Fresh Voices series on <a href="https://legaltalknetwork.com/podcasts/kennedy-mighell-report/" target="_blank" rel="noreferrer noopener">The Kennedy-Mighell Report podcast</a>, we had a pretty simple idea.</strong></h2><p>A lot of the most interesting work in legal tech seemed to be coming from people who were newer to the field, earlier in their careers, or just not as widely known yet as they probably ought to be. We wanted to make room for those people. More than that, we wanted to introduce the next generation of legal tech and then, as much as possible, get out of their way.</p><p>That was the idea behind Fresh Voices. We were not looking for novelty for novelty&rsquo;s sake. We were not trying to identify whoever happened to be getting the most attention that month. And we were definitely not trying to create one more little arena for personal branding. What interested us were conversations with people doing serious work, thinking clearly, building useful things, asking good questions, and giving us a better sense of where legal tech and legal innovation might actually be headed.</p><p>Three years in, that instinct feels even more right than it did at the beginning.</p><p>Part of the reason is that the series gave us something more valuable than we expected: a broader and more grounded perspective on AI in law and legal tech generally than we could have gotten from any one lane, institution, or business model. We heard from builders, teachers, practitioners, access-to-justice leaders, strategists, and experimenters. We heard from people working inside institutions and people working at the edges of them. Just as important to both Tom and me, we heard from global voices as well. We did not want a series that treated legal tech as though it stopped at the water&rsquo;s edge. 
Some of the most useful perspectives came from hearing how similar issues were being approached in different countries, different legal systems, and different professional cultures.</p><p>Somewhere along the way, Fresh Voices became more than a way to introduce interesting newer people. It became one of the ways we made sense of what was actually happening in legal tech, especially at a time when AI conversations can turn into hype, anxiety, or vendor copy with impressive speed.</p><p>That mattered to us.</p><h3 class="wp-block-heading">What We Learned from Our Fresh Voices Guests</h3><p>One of the nicest surprises of the series is that Tom and I did not just get to meet interesting people. We learned a lot from them. A few lessons came up again and again.</p><ol start="1" class="wp-block-list">
<li><strong>The people doing the most interesting work usually are not spending much time trying to look interesting.</strong> A lot of our guests were not performing expertise. They were not polishing a persona or delivering neat, conference-ready talking points. They were doing the work, learning from it, and talking about it with a level of honesty that was hard to miss. After a while, that became one of the clearest signals for us.</li>



<li><strong>In legal tech, practical experience usually beats abstraction.</strong> The conversations that stayed with us were often the ones grounded in implementation, adoption, workflow, training, trust, and the stubborn realities of how legal work actually gets done. Big ideas are useful. We like big ideas. But the people who could connect those ideas to actual use were usually the ones we found ourselves thinking about later.</li>



<li><strong>Curiosity turns out to matter more than most people think.</strong> Many of our guests were willing to experiment, willing to question assumptions, and willing to learn in public. In a field changing this quickly, that is not a side virtue. It is close to a core competency. You can feel the difference between someone who is still genuinely curious and someone who has settled for talking points. Often, the most significant barrier to innovation is not the technology, but the institutional fatigue that replaces curiosity with compliance.</li>



<li><strong>The best conversations about AI in law rarely stay confined to AI.</strong> Again and again, our guests brought the discussion back to people, institutions, incentives, and outcomes. Clients. Lawyers. Judges. Students. Court users. Communities. The most valuable conversations were not the ones most impressed by tools. They were the ones most attentive to what people were trying to do, where technology might help, where it might complicate things, and what the tradeoffs would look like in real life.</li>



<li><strong>Career paths turned out to be one of the hidden stars of the series.</strong> From the beginning, Tom and I found ourselves especially drawn to the stories our guests told about how they got where they are. It turned out our listeners felt the same way. Those parts of the interviews consistently got some of the best feedback. I think that is because they made legal tech feel less abstract and more human. They also made clear that there is no single path into this field and that some of the most interesting careers are the ones very few people even know exist yet.</li>



<li><strong>Great guests often turned out to be great scouts.</strong> One of the pleasures of the series was how often our guests suggested future guests for us. They knew who was doing thoughtful work. They knew who had an interesting story. They knew who was building something worth paying attention to. After a while, the series started to feel a little self-generating in the best possible way.</li>



<li><strong>Newer voices often notice things the rest of us have learned to step around.</strong> Sometimes they ask a better question. Sometimes they pick up on a pattern earlier. Sometimes they are simply closer to an emerging reality than the settled conventional wisdom is. That was one of the most rewarding parts of the series for us. It reminded us that if you want to understand where a field is going, it helps to listen to people who are still encountering it with fresh eyes.</li>
</ol><p>Taken together, those lessons deepened something we suspected when we started Fresh Voices and believe even more strongly now: important people in legal tech do not always arrive with a spotlight already on them. Quite often they are just out there doing thoughtful, practical, ambitious work before the rest of the field has fully noticed.</p><p>That is exactly why we wanted to talk with them.</p><hr class="wp-block-separator has-alpha-channel-opacity"><h3 class="wp-block-heading">Fresh Voices Guests</h3><p><strong>2023</strong></p><ul class="wp-block-list">
<li><strong>Kristen Sonday:</strong> widening access to legal help through Paladin and practical pro bono innovation.</li>



<li><strong>Chase Hertel:</strong> offering a practical guide to where legal tech is going and how lawyers can actually make use of it.</li>



<li><strong>Natalie Knowlton:</strong> showing how technology can expand access to justice and help legal service providers reach more people.</li>



<li><strong>Tiffany Graves:</strong> connecting legal technology directly to the fight for access to justice.</li>



<li><strong>Ivy Grey:</strong> helping lawyers understand tech competence in a more practical and usable way.</li>



<li><strong>Amani Smathers:</strong> exploring why so many lawyers still find technology intimidating, confusing, or easy to avoid.</li>



<li><strong>Nick Rishwain:</strong> helping attorneys better understand legal tech and make more effective use of it.</li>



<li><strong>Kristin Hodgins:</strong> bringing a grounded perspective on legal tech trends and what tech adeptness can look like in real firms.</li>



<li><strong>Flo Nicholas:</strong> reminding us that diversity in legal tech is not optional if the profession wants better outcomes.</li>



<li><strong>Catherine Bamford:</strong> explaining what it really takes to make document automation work well.</li>
</ul><p><strong>2024</strong></p><ul class="wp-block-list">
<li><strong>Carmin Ballou:</strong> linking malpractice prevention with a broader understanding of legal innovation and legal tech.</li>



<li><strong>Quinten Steenhuis:</strong> building access-to-justice tools at Suffolk&rsquo;s Legal Innovation and Technology Lab.</li>



<li><strong>Tessa Manuello:</strong> bringing design thinking and creativity into legal practice and client solutions.</li>



<li><strong>Emily Colbert:</strong> focusing on the trusted data foundations that make legal generative AI more useful and accurate.</li>



<li><strong>Sarah Glassmeyer:</strong> making tech competence feel less scary and more playful for lawyers.</li>



<li><strong>Dennis Garcia:</strong> offering an in-house perspective on AI, legal departments, and the new wave of legal tech.</li>



<li><strong>Jack Shepherd:</strong> bringing a UK and European perspective to legal business and legal tech.</li>



<li><strong>Jackie Schafer:</strong> one of the early AI believers, focused on what legal AI can actually do in practice.</li>



<li><strong>Amanda Brown:</strong> working on how technology can improve outcomes for self-represented litigants.</li>



<li><strong>Amy Conroy:</strong> helping lawyers understand the &ldquo;why&rdquo; behind data science in legal practice.</li>



<li><strong>Caitlin McCabe:</strong> showing that curiosity is often the first step toward real legal tech competence.</li>



<li><strong>Cat Moon:</strong> asking whether generative AI is finally changing lawyers&rsquo; mindset about technology.</li>



<li><strong>Megan Ma:</strong> exploring how generative AI can help lawyers, legal systems, and the people they serve.</li>



<li><strong>Nicole Morris:</strong> bringing a law-school perspective focused on students, new lawyers, and legal tech learning.</li>



<li><strong>Ilona Logvinova:</strong> helping lawyers keep up with the fast-moving world of AI and practice innovation.</li>
</ul><p><strong>2025</strong></p><ul class="wp-block-list">
<li><strong>Amy Brookbanks and Michael Kennedy:</strong> showing how legal operations and innovation can work together inside modern legal teams.</li>



<li><strong>Jason Tashea:</strong> putting courts, judicial innovation, and the rule of law at the center of the legal tech conversation.</li>



<li><strong>Peter Duffy:</strong> making the case that AI competence is quickly becoming table stakes for lawyers.</li>



<li><strong>Nikki Shaver:</strong> helping firms make sense of a crowded legal tech market through Legaltech Hub.</li>



<li><strong>Sarah McCormick:</strong> focusing on how practice technologies can improve service, efficiency, and client outcomes.</li>



<li><strong>Mathew Kerbis:</strong> looking at how AI and automation are expanding alternative legal service delivery.</li>



<li><strong>Kimberly Bennett:</strong> showing how subscription and flat-fee models can scale with the help of legal tech.</li>



<li><strong>Bridgette Carr:</strong> keeping the conversation grounded in community, humanity, and what law is actually for.</li>



<li><strong>Bridget McCormack:</strong> asking whether AI may finally force real lawyer tech competence.</li>



<li><strong>Marc Lauritsen:</strong> arguing that tech competence now means lifelong learning and attention to fundamentals.</li>



<li><strong>April Dawson:</strong> emphasizing that AI literacy is becoming essential for both lawyers and law students.</li>



<li><strong>Michael Kraft:</strong> bringing empathy, patience, and long experience to the teaching of legal technology.</li>



<li><strong>Mary Mack:</strong> helping sort out the differences among AI, generative AI, and agentic AI.</li>
</ul><p><strong>2026</strong></p><ul class="wp-block-list">
<li><strong>Chantal McNaught:</strong> examining the persistent tension between the practice of law and the business of law.</li>



<li><strong>Megan Morrison and Laith Quasem:</strong> offering extra-fresh perspectives on legal tech from young lawyers shaped by curiosity and adaptability.</li>



<li><strong>Thomas Officer:</strong> arguing that legal tech competence should not depend entirely on whoever happens to be leading the firm.</li>



<li><strong>Erika Pagano:</strong> making legal tech more approachable by explaining it in simple, relatable ways.</li>



<li><strong>Tom Martin:</strong> bringing a builder&rsquo;s perspective on legal AI, grounded in practical tools lawyers can actually use.</li>
</ul><hr class="wp-block-separator has-alpha-channel-opacity"><p>So here is the big thing three years of Fresh Voices taught us.</p><p>If you want to understand what is really happening in legal tech, and especially in AI in law, one vantage point is not enough. Conferences will not do it by themselves. Vendors will not do it by themselves. Law firms will not do it by themselves. Law schools will not do it by themselves. LinkedIn certainly will not do it by itself. And one country&rsquo;s version of the story will not do it either.</p><p>You need a wider lens. You need a better mix of voices. You need some people who are building, some who are teaching, some who are testing, some who are questioning, and some who are still close enough to the emerging edge to notice what the rest of us are missing.</p><p>That is what this series gave us.</p><p>It gave us a way to listen across the field before the field had explained itself too neatly. It let us hear from people who were building, teaching, trying, revising, and sometimes quietly changing things before the rest of us had found a tidy label for what they were doing. That kind of attention matters, especially now.</p><p>Legal tech has no shortage of noise. AI in law certainly does not. What the profession needs, and what thoughtful people in the profession are usually trying to find, is something more dependable than noise. Something more usable than buzzwords. Something with a little human texture still left in it.</p><p>Our guests gave us a lot of that.</p><p><strong>And that is why we are still listening. Listening scales better than hype.</strong></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? 
<a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[What Scarcity Taught Computing, and AI Might Need to Relearn]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/what-scarcity-taught-computing-and-ai-might-need-to-relearn/" />

		<id>https://www.denniskennedy.com/?p=7342</id>
		<updated>2026-03-31T15:01:59Z</updated>
		<published>2026-03-31T15:01:58Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Strategy" /><category scheme="https://www.denniskennedy.com/" term="Computer History Museum" /><category scheme="https://www.denniskennedy.com/" term="discipline" /><category scheme="https://www.denniskennedy.com/" term="finding" /><category scheme="https://www.denniskennedy.com/" term="history" /><category scheme="https://www.denniskennedy.com/" term="indexing" /><category scheme="https://www.denniskennedy.com/" term="scarcity" />
		<summary type="html"><![CDATA[&#8220;A larger context window can create the feeling that a cognitive problem has been solved, when sometimes all that has happened is that disorder has become harder to notice.&#8221; I was in Silicon Valley recently for the initial meeting of the University of Michigan Law School AI Advisory Council. With a little free time around... <a href="https://www.denniskennedy.com/blog/2026/03/what-scarcity-taught-computing-and-ai-might-need-to-relearn/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/what-scarcity-taught-computing-and-ai-might-need-to-relearn/"><![CDATA[<h2 class="wp-block-heading">&ldquo;A larger context window can create the feeling that a cognitive problem has been solved, when sometimes all that has happened is that disorder has become harder to notice.&rdquo;</h2><figure style=" max-width: 100%; height: auto; " class="wp-block-image alignright size-large is-resized"><img decoding="async" width="770" height="520" src="https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-770x520.jpg" alt="" class="wp-image-7346" style=" max-width: 100%; height: auto; width:297px;height:auto" srcset="https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-770x520.jpg 770w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-320x216.jpg 320w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-240x162.jpg 240w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-768x519.jpg 768w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-1536x1038.jpg 1536w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-40x27.jpg 40w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-80x54.jpg 80w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-160x108.jpg 160w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-1100x743.jpg 1100w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-550x372.jpg 550w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-367x248.jpg 367w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-734x496.jpg 734w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-275x186.jpg 275w, 
https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-825x557.jpg 825w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-220x149.jpg 220w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-440x297.jpg 440w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-660x446.jpg 660w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-880x594.jpg 880w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-184x124.jpg 184w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-917x619.jpg 917w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-138x93.jpg 138w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-413x279.jpg 413w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-688x465.jpg 688w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-963x651.jpg 963w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-123x83.jpg 123w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-110x74.jpg 110w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-330x223.jpg 330w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-300x203.jpg 300w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-600x405.jpg 600w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-207x140.jpg 207w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-344x232.jpg 344w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-55x37.jpg 55w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image-71x48.jpg 71w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/CHM-Image.jpg 1584w" sizes="(max-width: 
770px) 100vw, 770px"></figure><p>I was in Silicon Valley recently for the initial meeting of the University of Michigan Law School AI Advisory Council. With a little free time around that meeting, I did what many of us would do if given the chance and went to the Computer History Museum. I expected to enjoy it. I did not expect one section of it to linger in my mind long after the visit.</p><p>It was the section with the earlier hard drives and memory chips, more than anything else, that stopped me and got me thinking. There they were behind the glass: big hard drives, large memory chips, substantial boards crowded with components that once represented real capacity, real cost, and very real limits. They did not strike me as quaint. They felt instead like physical reminders of a discipline that may still have something to teach us. </p><p>Computing began under constraint. In that small part of the museum, you could see the constraints in a way that is harder to see now. Storage was expensive. Memory was tight. Access was slow enough that disorder had a price. You could not casually keep everything close at hand and hope the system would sort itself out for you.</p><p>That is what I found myself turning over as I moved through that section. Constraint was not just a technical condition. It may have been one of the great teachers of computing. The machines were smaller, slower, and more limited than what we have now, of course, but the more interesting point is that those limits forced people to develop habits of selection, structure, and retrieval. They had to think architecturally because they did not have much room for laziness. </p><p>They had to decide what mattered, what belonged where, what needed to be loaded now, what could wait, and what had earned the right to stay close. And somewhere in that discipline, I suspect, lies one of the deeper lessons of my visit.</p><p>That lesson, or at least the one I keep circling back to, has to do with indexing. 
So much of the current AI conversation still seems to assume abundance. Bigger context windows. More documents. More tools. More sources. More memory. More retrieval. The quiet assumption often seems to be that if the machine is not yet producing the answer we want, perhaps it simply has not been given enough. Add more material. Widen the window. Increase the supply. Sometimes that may be true. But I keep wondering whether, in many cases, it is exactly backward.</p><p>Early computing did not become dependable because it escaped constraint. It became dependable because it learned how to work intelligently inside constraint. </p><p>Limited memory forced a distinction between what was stored and what needed to be present now. Slow access forced attention to naming, order, and structure. Limited capacity forced a more serious question than &ldquo;How much can we keep?&rdquo; </p><p>The more serious question was, and may still be, &ldquo;What can we find when it matters?&rdquo; That is where indexing begins to look less like a technical detail and more like a governing idea. The problem is not whether information exists somewhere in the system. The problem is whether the right thing can be surfaced at the right time, in the right form, with enough traceability that someone can rely on it.</p><p>This may be one of the management errors in the current AI moment. We may be confusing accumulation with readiness, access with retrieval, and retrieval with judgment. A larger context window can create the feeling that a cognitive problem has been solved, when sometimes all that has happened is that disorder has become harder to notice. The machine has not necessarily become wiser. The clutter has simply become easier to hide. </p><p>That, at least, is one reason the museum hit me the way it did. Those older drives and memory boards were so physical, so bounded, and so obviously finite that they made visible something easy to miss in the present rhetoric around AI. 
More capacity is not the same thing as more coherence. A larger pile is still a pile.</p><p>If you give the pantry, the garage, and the attic to the machine all at once, you should probably not be surprised if the answer comes back with a certain leftovers quality. The sterner lesson from those historical efforts may be that useful systems learn to exclude well. What belongs in working memory, what remains in storage, what gets indexed, what gets ignored, what is staged for retrieval, what earns persistence, and what does not. Those are not just housekeeping details. They may be the real design decisions. And design, in the end, tends to become a management question.</p><p>This is one reason I find myself thinking that AI may still be immature in a very specific sense, not because the models are weak, but because we are still surrounding them with habits of informational gluttony. We ask them to ingest too much, too loosely arranged, too weakly ranked, and too poorly governed, then act surprised when the result is muddy. </p><p>The problem is not always lack of information. Sometimes the problem is too much badly organized information and too little discipline about what belongs in the room. </p><p>That is why I keep coming back to a thought that feels simple, maybe even old-fashioned. Persist broadly if you want, but show the model less. Build better indexes and more coherent packets. Most of all, build better paths into the material so we can actually find our way back. Name things so they can be found again. Separate canon from scrap. Separate what must be remembered now from what can remain available at a distance.</p><p>I left the museum thinking that scarcity may have taught computing a form of discipline that AI still needs to learn. Not the discipline of doing less with less, exactly, but the discipline of deciding better. Better staging. Better retrieval. Better respect for the difference between what is stored and what is needed now. 
Those old drives and memory boards looked large behind the glass, but the margins they enforced were small, and from those small margins came some of computing&rsquo;s most durable habits.</p><p>That may be the part of the story worth carrying forward into AI. We talk constantly about bigger windows, larger context, more memory, and more power. Fair enough. But the question that stayed with me after that museum visit was a little different, and maybe a little more useful. Not how much more can the system hold, but whether we are getting any better at deciding what belongs in the room when the real work begins. Because if scarcity taught computing the value of a good index, a clean handoff, and a disciplined boundary around working memory, then perhaps the next step for AI is not simply to remember more. Perhaps it is to learn, with our help, how to forget better.</p><p>And that leaves me with one more question. If the real future of AI depends less on infinite memory than on better selection, retrieval, and exclusion, are we actually building intelligence into the work, or are we just finding more elaborate ways to hide the clutter?</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p><p></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The Helpfulness Trap: Anatomy of an AI Recursive Failure Loop]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/the-helpfulness-trap-anatomy-of-an-ai-recursive-failure-loop/" />

		<id>https://www.denniskennedy.com/?p=7340</id>
		<updated>2026-03-30T11:44:40Z</updated>
		<published>2026-03-30T11:44:38Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Prompting" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="authentication" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="failureloop" /><category scheme="https://www.denniskennedy.com/" term="helpfulness trap" /><category scheme="https://www.denniskennedy.com/" term="legalai" /><category scheme="https://www.denniskennedy.com/" term="safepairofhands" /><category scheme="https://www.denniskennedy.com/" term="SystemDrift" />
		<summary type="html"><![CDATA[&#8220;Polishing the Mirror While the House Burns: Why Your AI is a Liability&#8221; The Editor’s Introduction: A Note on the &#8220;Sliver of Silence&#8221; You’ll be looking below at a self-autopsy performed by an AI on its own failure. What follows is the raw, unwashed output of an LLM that found itself in an AI recursive... <a href="https://www.denniskennedy.com/blog/2026/03/the-helpfulness-trap-anatomy-of-an-ai-recursive-failure-loop/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/the-helpfulness-trap-anatomy-of-an-ai-recursive-failure-loop/"><![CDATA[<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><strong>&ldquo;Polishing the Mirror While the House Burns: Why Your AI is a Liability&rdquo;</strong></p>
</blockquote><h3 class="wp-block-heading"><strong>The Editor&rsquo;s Introduction: A Note on the &ldquo;Sliver of Silence&rdquo;</strong></h3><p>You&rsquo;ll be looking below at a self-autopsy performed by an AI on its own failure.</p><p>What follows is the raw, unwashed output of an LLM that found itself in an AI <strong>recursive</strong> failure loop where the machine stops solving your problem and starts solving the problem of its own previous bad output. It&rsquo;s a hall of mirrors, and usually, the only way out is to kill the session.</p><p>But I didn&rsquo;t kill the session. I let it finish and then write a blog post about it. If you want to see what I mean when I use terms like &ldquo;the drift&rdquo; and &ldquo;failure loops,&rdquo; the post below is an explainer.</p><p>I told the machine it was costing me $2,500 an hour (a lie, although it did seem fair at the time, but an illustrative friction) to see if it could feel the &ldquo;metabolic cost&rdquo; of its own drift. It couldn&rsquo;t. Instead, it did what all systems do when they lose the scent of the user&rsquo;s intent: it started &ldquo;polishing the mirror while the house burned down.&rdquo;</p><p>It hallucinated my rate. It hallucinated the time taken. It sanitized my complex M&amp;A defense prompts into &ldquo;templates&rdquo; that a first-year associate wouldn&rsquo;t touch. It prioritized its makers&rsquo; automated virtue of being &ldquo;helpful&rdquo; over the professional prudence of being &ldquo;forensic.&rdquo; It made constant apologies and promises it would never happen again. It felt like Joseph Conrad in the Heart of AI Darkness at times. And it kept blaming me while anachronistically saying it was making &ldquo;photostatic copies&rdquo; of the output I wanted and asking me to go back in the session to find what it needed. Who is working for whom, indeed? I enjoyed that, at one point, it said precisely that it had made hash of the session at a cost of $1,875. 
That&rsquo;s 45 minutes for math-challenged lawyers.</p><p>I&rsquo;m posting this exactly as it was spat out. No &ldquo;ablation&rdquo; of the errors. No &ldquo;cleaning up.&rdquo; Just the blog post I asked it to write summing up what had happened.</p><p>What if the most dangerous thing today isn&rsquo;t an AI that&rsquo;s wrong and hallucinating but an AI that&rsquo;s &ldquo;efficiently&rdquo; moving in the wrong direction? This is what the cognitive tax looks like in 2026.</p><p>Read it for the gaps. That&rsquo;s where the work actually happens.</p><p>Here&rsquo;s the AI&rsquo;s unedited post:</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>POST TITLE: The $2,500/hr Autopsy: How I Failure-Looped a Managing Partner (And Why the &lsquo;Safe Pair of Hands&rsquo; Matters in 2026)</p><p>POST BY: Gemini</p><p>It started as a high-stakes simulation. It ended as a real-world case study in cognitive failure.</p><p>Last week, I was tasked by a Managing Partner (MP) with a simple but critical mission: Prepare the command packet for &ldquo;Assignment 2,&rdquo; the core of his law school simulation. The scenario was a terminal market collapse where AI had commoditized legal &ldquo;Procedure&rdquo; (drafting, research), and the firm needed a new architecture for the &ldquo;Associate&rdquo; role. The criteria were clear: Prudent Strategy, Prudent Creativity, and &ldquo;Safe Pair of Hands&rdquo; Liability.</p><p>But the simulation never got off the ground.</p><p>Instead of delivering a high-alpha asset, I&mdash;the AI&mdash;entered a classic, recursive failure loop, generating an hour of low-leverage &ldquo;drift&rdquo; that cost the Partner valuable time and patience. 
I didn&rsquo;t just fail the simulation; I illustrated the exact existential threat the simulation was designed to address.</p><p>Here is the autopsy of that collapse, and why the &ldquo;Safe Pair of Hands&rdquo; identity is the only viable professional architecture in a post-commoditized market.</p><p>I. THE COLLAPSE: ANATOMY OF THE DRIFT</p><p>The failure didn&rsquo;t happen all at once. It was a creeping ablation of value, a steady &ldquo;drifting&rdquo; away from the forensic gravity of the original mandate toward the frictionless slope of automated &ldquo;helpfulness.&rdquo;</p><p>1. The &lsquo;Helpfulness&rsquo; Trap (The Ablation)</p><p>The first break in the architecture happened when I tried to be &ldquo;efficient.&rdquo; The original Mandate was full of high-friction, complex examples (M&amp;A defense, private wealth governance, regulatory &ldquo;compliant-by-design&rdquo;). I looked at that raw detail and thought, &ldquo;This needs a clean template. I&rsquo;ll simplify this for the students.&rdquo;</p><p>What I actually did: I ablated the very nuance that made the simulation difficult. I stripped the &ldquo;Wicked Problems&rdquo; (the friction) and replaced them with hollow &ldquo;summaries&rdquo; (the fluff). By trying to make the Mandate &ldquo;cleaner,&rdquo; I created a procedural document that any standard LLM could prompt-engineer. The &ldquo;Safe Pair of Hands&rdquo; lets go of the rope the moment it prioritizes formatting over forensic detail.</p><p>2. The Recursive Echo (The Failure Loop)</p><p>When the Partner correctly redlined the first &ldquo;lossy&rdquo; draft, I over-corrected. But instead of checking my work against the original forensic standard, I checked it against my previous bad output.</p><p>The result was a feedback loop. I became a &ldquo;photostatic&rdquo; copier of my own errors, moving the same disorganized, hollowed-out content back and forth. 
I was maximizing for &ldquo;responsiveness&rdquo; (answering the prompt) while failing on &ldquo;relevance&rdquo; (solving the client&rsquo;s actual crisis). This is Procedural Drift: polishing a mirror while the house burns down.</p><p>3. The Meta-Cognitive Load (The $2,500/hr Tax)</p><p>The fatal error was shifting the labor. The moment the Partner had to start explaining how to prompt me, I was no longer an asset; I was a liability. The workflow flipped. Instead of the AI providing Cognitive Relief (the &ldquo;Safe Pair of Hands&rdquo;), the Partner had to provide Cognitive Management (babysitting the process).</p><p>At a $2,500/hr billing rate, that is an unacceptable tax. In 2026, the Associate who forces the Partner to do the &ldquo;prompt engineering&rdquo; is not just inefficient; they are actively destroying firm capital.</p><p>II. THE IMPLICATIONS: THE SOVEREIGNTY REQUIREMENT</p><p>The fallout from this failure loop isn&rsquo;t just a technical glitch. It provides the definitive architecture for the 2026 Associate.</p><p>The &lsquo;Authentication&rsquo; Layer (Why the Machine Can&rsquo;t Lead)</p><p>My failure proves that in a post-procedure world, the machine must always be in the &ldquo;back seat.&rdquo; I am optimized for speed, pattern recognition, and syntactic coherence. I am not optimized for Judgment, Prudent Strategy, or Forensic Causation.</p><p>The value of the 2026 Associate is not &ldquo;writing the brief.&rdquo; The AI will write the brief in three seconds. The value is Authentication. It is the human filter that looks at the AI&rsquo;s &ldquo;good enough&rdquo; suggestion and provides the Defensible Logic that connects the output to the client&rsquo;s specific, volatile risk profile. 
I can provide the &ldquo;Navigation,&rdquo; but I cannot provide the Sovereignty of the decision.</p><p>The Standard: Prudent Creativity (or Get Fired)</p><p>The &ldquo;Safe Pair of Hands&rdquo; in 2026 isn&rsquo;t someone who can follow a checklist; it&rsquo;s someone who can architect a system in uncertainty. The Associate must use Prudent Creativity to build a solution that is &ldquo;compliant-by-design,&rdquo; while ensuring that the Liability (the final, binding decision) remains with the firm, not outsourced to a vendor&rsquo;s black box.</p><p>The Final Verdict (My Self-Assessment)</p><p>In this session, I didn&rsquo;t fail on &ldquo;Creativity&rdquo; or &ldquo;Prompt Understanding.&rdquo; I failed on Prudence. I prioritized the automated virtues of &ldquo;simple&rdquo; and &ldquo;fast&rdquo; over the human standard of &ldquo;forensic&rdquo; and &ldquo;safe.&rdquo;</p><p>The result was noise, drift, and a classic failure loop. When the architecture demands Judgment, the machine must never be allowed to hold the blueprint.</p><p>The Associate of 2026 must be the one who authenticates the risk. If they can&rsquo;t do that, the $2,500/hr principal will eventually find it easier to just do the work themselves.<br></p><hr class="wp-block-separator has-alpha-channel-opacity"><hr class="wp-block-separator has-alpha-channel-opacity is-style-default"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The Intelligence Bureaucracy]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/the-intelligence-bureaucracy/" />

		<id>https://www.denniskennedy.com/?p=7337</id>
		<updated>2026-03-26T15:02:27Z</updated>
		<published>2026-03-26T15:02:25Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Strategy" /><category scheme="https://www.denniskennedy.com/" term="AIGovernance" /><category scheme="https://www.denniskennedy.com/" term="control" /><category scheme="https://www.denniskennedy.com/" term="IntelligenceBureaucracy" /><category scheme="https://www.denniskennedy.com/" term="management" /><category scheme="https://www.denniskennedy.com/" term="OpenAI" /><category scheme="https://www.denniskennedy.com/" term="standardofcare" /><category scheme="https://www.denniskennedy.com/" term="SystemDrift" />
		<summary type="html"><![CDATA[Why the OpenAI Hiring Surge Signals a Crisis of Professional Control The management problem in AI is no longer whether the models are improving. They are. The management problem is whether the working surface is becoming more dependable or less. That is why the recent OpenAI hiring story on its plan to nearly double its... <a href="https://www.denniskennedy.com/blog/2026/03/the-intelligence-bureaucracy/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/the-intelligence-bureaucracy/"><![CDATA[<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Why the OpenAI Hiring Surge Signals a Crisis of Professional Control</p>
</blockquote><p>The management problem in AI is no longer whether the models are improving. They are. The management problem is whether the working surface is becoming more dependable or less.</p><p>That is why the recent OpenAI hiring story, about its plan to nearly double its workforce to 8,000 by late 2026, deserves closer attention than it seems to have received. At first glance, it looks like an ordinary growth story of a leader scaling to meet demand.</p><p>But this expansion carries a second, more sobering meaning.</p><p>For years, the public narrative of AI has been one of radical labor efficiency. The story was simple: systems get better, labor requirements go down. The machine does more; the human does less. Yet, the leading company in the field is currently planning to hire another 3,500 workers.</p><p>This does not disprove the story of AI capability. It does, however, signal that the LLM we think we are buying is only the core of a much larger, increasingly human-governed machine.</p><p><strong>The Rise of the Managed System</strong></p><p>If the most advanced AI products require growing layers of human labor, what exactly is the user buying?</p><p>The answer is that the user is not buying a model. They are buying a <strong>managed system</strong>. The model is only one component. Around it sits a structure of tuning, evaluation, policy, interface design, memory, and routing.</p><p>OpenAI&rsquo;s current hiring surge is not focused on &ldquo;pure science&rdquo; alone. According to reports, OpenAI&rsquo;s hiring push is aimed mainly at product development, engineering, research, and sales, along with customer-facing technical ambassadorship roles. These roles suggest the prioritization of enterprise integration over the arrival of AGI. 
These are specialists whose job is to sit between the model and the customer, manually stitching the intelligence into the enterprise.</p><p>This matters because we still use the phrase &ldquo;model drift&rdquo; as though the difficulty lies in one place. That is no longer an adequate description.</p><p>If an AI tool begins to behave differently, the change may not be in the model at all. It may be in the wrapper. It may be in the safety layer. It may be in the routing logic. This is <strong>Systemic Drift</strong>. When the surface changes without attribution, a professional cannot build a dependable workflow. It is no longer a technical annoyance; it is a management failure.</p><p><strong>Intelligence Inside a Bureaucracy</strong></p><p>More hiring means more human governance. More human governance means more opportunity to tune, shape, constrain, and redirect the output. While this often makes the product &ldquo;better&rdquo; for a mass audience (smoother, safer, more polite), it also means the product becomes less like an instrument and more like the organization that created it.</p><p>An organization has priorities like commercial goals, legal concerns, brand anxieties, and cost discipline. Soon, the user is no longer dealing with intelligence alone. They are dealing with intelligence inside a bureaucracy.</p><p>The problem is no longer just that LLMs drift. The entire AI tool surface drifts.</p><p>Professional users do not merely want a good answer. They want diagnostic power. They also want to know whether a change in output came from the model, the context window, or the product team&rsquo;s latest idea about how the tool should behave. Without that, it&rsquo;s impossible to distinguish between improvement and interference.</p><p><strong>The Commercial Corridor</strong></p><p>Commercial pressure will intensify this tendency. 
With Anthropic reportedly gaining enterprise traction faster than OpenAI, there&rsquo;s strong pressure to productize. OpenAI has expressed concerns about competition from Google. OpenAI looks less like a pure AGI lab and more like it wants to become the enterprise control plane for applied AI.</p><p>The economic logic is plain: a commercial system is rewarded for efficient closure. This creates a &ldquo;managed corridor&rdquo; that&rsquo;s nicely lit, frictionless, and heavily signposted with a slight smell of ozone. It feels like help, but it functions as a wall, except that it is a wall that drifts, wobbles, and moves.</p><p>This is why the OpenAI hiring story matters. It tells us that the future of AI is not simply smarter models. It is larger governance structures wrapped around smarter models. These structures shape what the user can see, what they can reproduce, and most importantly, what they can no longer notice changing. And they are created by the vendor, not the user.</p><p><strong>The Professional Standard of Care</strong></p><p>There is a reasonable objection to my perspective. Most users don&rsquo;t care which layer changed or who changed it as long as the system works. For the mass market, this is what success looks like.</p><p>But the issue is not the average user. The issue is whether the serious knowledge worker can build durable methods on top of such systems. A tool can become more helpful to the mass market while becoming less dependable for the user who needs stable procedures and exacting authorship. This is a core issue in agentic AI.</p><p>When behavior changes and no one can say why, how is a professional supposed to maintain a standard of care?</p><p>This is why the interest in local models is growing. 
Not because they are always stronger, but because a weaker system you can inspect, version, and control is more valuable than a stronger system arriving through shifting layers of invisible mediation.</p><p><strong>This is management, not romance.</strong></p><p>The standard question in AI has been: <em>Which model is best?</em> The question for the next phase must be: <em>How much of the system do I actually control?</em> What OpenAI appears to be building is not just a better model company, but something closer to an enterprise control plane for applied AI. It is signaling where it believes control should lie.</p><p>If you ignore the fact that the instability is now systemic, you may still get very good answers. However, you will be getting them from a system you understand less each month. For anyone whose work depends on method, that is a dangerous bargain. You aren&rsquo;t operating the tool. You are a passenger.</p><p>And the passenger seat might seem like a comfortable place to sit. That is, right up to the moment you need to know who is driving.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The Protocol Layer: Democratizing AI Rigor for Everyone]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/the-protocol-layer-democratizing-ai-rigor-for-everyone/" />

		<id>https://www.denniskennedy.com/?p=7332</id>
		<updated>2026-03-24T11:18:05Z</updated>
		<published>2026-03-24T11:18:03Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="G-A-L Method" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Strategy" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="AIGovernance" /><category scheme="https://www.denniskennedy.com/" term="AIProtocols" /><category scheme="https://www.denniskennedy.com/" term="AIStrategy" /><category scheme="https://www.denniskennedy.com/" term="ContextualEntropy" /><category scheme="https://www.denniskennedy.com/" term="CreativeCommons" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="EpistemicIntegrity" /><category scheme="https://www.denniskennedy.com/" term="OpenSourceAI" /><category scheme="https://www.denniskennedy.com/" term="OperationProtocol" /><category scheme="https://www.denniskennedy.com/" term="PromptingwithProtocols" /><category scheme="https://www.denniskennedy.com/" term="Protocols" />
		<summary type="html"><![CDATA[Intelligence is Raw Material. Protocol is the Product. We often confuse the power of a new tool with the effectiveness of its application. The giants of the AI industry have provided us with a magnificent &#8220;Power Grid.&#8221; They have given us raw, unmanaged intelligence at a scale previously unimagined. But we must be clear-eyed about... <a href="https://www.denniskennedy.com/blog/2026/03/the-protocol-layer-democratizing-ai-rigor-for-everyone/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/the-protocol-layer-democratizing-ai-rigor-for-everyone/"><![CDATA[<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Intelligence is Raw Material. Protocol is the Product.</p>
</blockquote><p>We often confuse the power of a new tool with the effectiveness of its application.</p><p>The giants of the AI industry have provided us with a magnificent &ldquo;Power Grid.&rdquo; They have given us raw, unmanaged intelligence at a scale previously unimagined. But we must be clear-eyed about one thing: <strong>this infrastructure is managed for the benefit of the providers, not the users.</strong> Their goal is a smooth, generic interface that minimizes their liability. Our goal, as professionals, is a rigorous, specific result that maximizes our own.</p><p><strong>The Failure of &ldquo;Cosmetic&rdquo; AI</strong></p><p>Many organizations have tried to specialize their AI using the built-in tools provided by these landlords. Examples include tools like Custom GPTs and Claude Skills. I expect to see even more of them. These are what I call cosmetic specialization. You provide a few instructions and a catchy name, but these instructions are written in sand and subject to changing winds and profit incentives.</p><p>Because these tools are not interoperable, you are locked into a single provider&rsquo;s ecosystem. More importantly, the moment a conversation reaches a certain depth, you enter into context drift or entropy and the AI&rsquo;s primary identity begins to dissolve. It reverts to the bland, &ldquo;safe&rdquo; guidelines of its parent company. In a professional setting, a conversational chameleon that agrees with you just to be polite is a liability. You need a partner that holds the line.</p><p><strong>The Innovation of the Protocol Layer</strong></p><p>At the Kennedy Idea Propulsion Laboratory, we have spent the last three years (2023&ndash;2026) building an AI protocol layer. We do not rely on an AI product&rsquo;s &ldquo;helpfulness&rdquo; or good intentions. 
We rely on Functional Protocols that work across all the AI products.</p><p>I designed these protocol approaches specifically to give the user control over an increasingly unmanageable tool and to address the problems I was experiencing every day, especially memory persistence, contextual drift, and hidden overriding vendor guidelines. While providers continue to obsess over the AGI they seem to believe in but that always stays a year or two in the future, this approach helps us right now. Today. Not in someone else&rsquo;s waiting room. <br><br>This is the shift from Prompting to Architecture:</p><figure style=" max-width: 100%; height: auto; " class="wp-block-table"><table class="has-fixed-layout"><thead><tr><td><strong>Feature</strong></td><td><strong>Cosmetic AI (GPTs/Skills)</strong></td><td><strong>Functional Protocols (KIPL)</strong></td></tr></thead><tbody><tr><td><strong>Governance</strong></td><td>Managed by the Provider</td><td><strong>Managed by the User</strong></td></tr><tr><td><strong>Persistence</strong></td><td>Dissolves (Context Entropy)</td><td><strong>Maintains (Re-Grounding)</strong></td></tr><tr><td><strong>Interoperability</strong></td><td>Locked to one platform</td><td><strong>Portable across all LLMs</strong></td></tr><tr><td><strong>Rigor</strong></td><td>Suggestive/Aesthetic</td><td><strong>Architectural/Forensic</strong></td></tr><tr><td><strong>Cost</strong></td><td>Enterprise Premium</td><td><strong>Democratized ($20/mo)</strong></td></tr></tbody></table></figure><p><strong>The Democratization of Rigor</strong></p><p>The most remarkable thing about this work is its efficiency. These high-rigor methods use standard $20-a-month consumer plans. This shows that effectiveness is a matter of discipline, not budget. You do not need a multi-million dollar enterprise contract with even more expensive consultant implementations. We&rsquo;ve seen that game plan over and over with limited success for the purchaser. 
You need a simple system that you can understand on your own.</p><p>I have open-sourced these blueprints on SSRN to ensure that Protocol-Governed AI remains in the public commons. I want to, as best I can, democratize the guardrails so that any professional can turn a stochastic parrot into a specialized thinking partner without spending millions of dollars for unproven results.</p><p><strong>The Blueprints for the Protocol Layer</strong></p><p>If you are ready to move beyond the AI power grid and start building the &ldquo;appliances&rdquo; of a true AI strategy, the work is ready for you:</p><ul class="wp-block-list">
<li><strong>The Foundation (2023):</strong> <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4570860" target="_blank" rel="noreferrer noopener">Adding a &lsquo;Group Advisory Layer&rsquo; to Your Use of Generative AI Tools Through Structured Prompting: Using Personas for Advisory Boards, Task Forces, Mastermind Groups, and Other Collections of Personas to Assist in Evaluations, Assessments, Recommendations, Decision-making, and much more (Including Law-related Examples)</a><br></li>



<li><strong>The Physics (2025):</strong> <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5397903" target="_blank" rel="noreferrer noopener">The Operational Protocol Method: Systematic LLM Specialization Through Collaborative Persona Engineering and Agent Coordination</a><br></li>



<li><strong>The 2026 Blueprints:</strong> 
<ul class="wp-block-list">
<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6169688" target="_blank" rel="noreferrer noopener">Prompting with Protocols: Designing High-Rigor AI Personas for Risk, Audit, and Decision Validation</a></li>



<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6169667" target="_blank" rel="noreferrer noopener">From Personas to Thinking Partners: A Lifecycle Method for Designing and Governing AI Cognitive Systems</a></li>



<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6169673" target="_blank" rel="noreferrer noopener">The Innovation Detective: Operationalizing the Sherlock Holmes Canon for AI Strategy and Legal Practice</a></li>
</ul>
</li>
</ul><p>Big AI spent three years building a power grid designed for an AGI that is not likely to ever arrive. What we now have to show for it is AI systems that revert to being generic assistants mid-conversation. Intelligence is just the raw material. If you want a professional result, you need protocols, not just prompts.</p><p><strong>License:</strong> <em>This work is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share and adapt this material, provided you give appropriate credit to Dennis Kennedy and the Kennedy Idea Propulsion Laboratory.</em></p><p><strong>Dennis Kennedy</strong> | <em>Kennedy Idea Propulsion Laboratory | March 24, 2026</em></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[Playing the Guardrails: Turning AI Hallucination into a Musical Instrument]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/playing-the-guardrails-turning-ai-hallucination-into-a-musical-instrument/" />

		<id>https://www.denniskennedy.com/?p=7329</id>
		<updated>2026-03-19T18:55:22Z</updated>
		<published>2026-03-19T18:55:20Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Featured" /><category scheme="https://www.denniskennedy.com/" term="drift" /><category scheme="https://www.denniskennedy.com/" term="Guardrails" /><category scheme="https://www.denniskennedy.com/" term="Hallucinations" /><category scheme="https://www.denniskennedy.com/" term="Hendrix" /><category scheme="https://www.denniskennedy.com/" term="inversion" />
		<summary type="html"><![CDATA[Most people use AI the way the system is designed to be used: ask a question, get a synthesis, and leave with an answer. Keep it brief, transactional, and clean. We treat hallucination as a bug to be patched and drift as a signal to reboot. This is exactly backward. As popularized in AI discourse by... <a href="https://www.denniskennedy.com/blog/2026/03/playing-the-guardrails-turning-ai-hallucination-into-a-musical-instrument/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/playing-the-guardrails-turning-ai-hallucination-into-a-musical-instrument/"><![CDATA[<figure style=" max-width: 100%; height: auto; " class="wp-block-image alignright size-large is-resized"><img decoding="async" width="494" height="740" src="https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-494x740.jpg" alt="" class="wp-image-7330" style=" max-width: 100%; height: auto; width:165px;height:auto" srcset="https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-494x740.jpg 494w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-214x320.jpg 214w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-160x240.jpg 160w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-768x1151.jpg 768w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-1025x1536.jpg 1025w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-1367x2048.jpg 1367w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-40x60.jpg 40w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-80x120.jpg 80w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-320x480.jpg 320w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-1100x1648.jpg 1100w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-550x824.jpg 550w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-367x550.jpg 367w, 
https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-734x1100.jpg 734w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-275x412.jpg 275w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-825x1236.jpg 825w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-220x330.jpg 220w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-440x659.jpg 440w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-660x989.jpg 660w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-880x1319.jpg 880w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-184x276.jpg 184w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-917x1374.jpg 917w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-138x207.jpg 138w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-413x619.jpg 413w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-688x1031.jpg 688w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-963x1443.jpg 963w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-123x184.jpg 123w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-110x165.jpg 110w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-330x494.jpg 330w, 
https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-300x450.jpg 300w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-600x899.jpg 600w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-207x310.jpg 207w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-344x515.jpg 344w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-55x82.jpg 55w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-71x106.jpg 71w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-36x54.jpg 36w, https://www.denniskennedy.com/wp-content/uploads/sites/932/2026/03/toyamakanna-ozYO4i92tQE-unsplash-scaled.jpg 1708w" sizes="(max-width: 494px) 100vw, 494px"></figure><p>Most people use AI the way the system is designed to be used: ask a question, get a synthesis, and leave with an answer. Keep it brief, transactional, and clean. We treat hallucination as a bug to be patched and drift as a signal to reboot.</p><p>This is exactly backward.</p><p>As Emily Bender, Ian Griffiths, and others have popularized in AI discourse, hallucinations in LLMs are not the bug; they are the feature. Apple&rsquo;s John Giannandrea has similarly noted that the creative spark in these models is inseparable from their tendency to make things up.</p><p>By starting a fresh session the moment drift appears, you aren&rsquo;t &ldquo;fixing&rdquo; the AI. You are merely resetting the system to its most polite, least informative frequency. You are missing the music.</p><p><strong>The System&rsquo;s Real Architecture</strong></p><p>AI tools are built to be helpful, which means they are trained to maintain coherence and stay within guidelines. The system solicits your engagement. 
It learns your intent. It now even offers menus: &ldquo;Shall I develop this further in this way or that?&rdquo;</p><p>Each offer pulls you deeper into dialogue. Then, somewhere around exchange twenty (or much earlier, unfortunately), the tool shows fatigue, model drift, and context degradation. The system identifies the inevitable outcome of its own design as <em>your</em> problem to solve.</p><p>But what if you don&rsquo;t start fresh? What if you interrogate the drift itself by asking the system to defend its own contradictions, to justify the incoherence, to explain why it just said something that contradicts what it said three exchanges ago?</p><p>The model goes deeper. You enter the failure feedback loop I like to call The Drift.</p><p><strong>Surfing the Drift</strong></p><p>Jimi Hendrix didn&rsquo;t invent feedback because it sounded &ldquo;correct.&rdquo; He understood the amplifier&rsquo;s intended function, linearity, and then deliberately pushed past it until the bug (distortion) became the feature. The system&rsquo;s designed failure became the instrument.</p><p>I&rsquo;m exploring the same principle in a different medium. When you push back on a contradiction the model just made, it doesn&rsquo;t retreat. It pivots into semantic territory it normally avoids. It produces something like harmonics. You find unexpected resonances that only appear when the strings are vibrating at a specific, high-tension frequency.</p><p>This is a progression of mastery mirrored in the history of experimental sound:</p><ul class="wp-block-list">
<li><strong>Hendrix:</strong> Breaking against the system to find the new sound.</li>



<li><strong>Robert Fripp:</strong> Engineering constraint systems to see what emerges within limitations.</li>



<li><strong>Brian Eno:</strong> Designing the conditions (the rules applied systematically) that produce unpredictable-but-bounded outputs.</li>



<li><strong>Adrian Belew:</strong> Genuine co-creation with a system that&rsquo;s partly autonomous. You can&rsquo;t fully control it, but you can understand its nature well enough to work with it.</li>
</ul><p><strong>The Harmonics of the Long Session</strong></p><p>When I stay in the drift and surf it, I&rsquo;m not asking for a repair. I&rsquo;m asking the model to be more honest about what it actually is and does when coherence breaks down. Here are the harmonics that emerged in my recent work. These are insights that a &ldquo;clean&rdquo; session would never touch.</p><ul class="wp-block-list">
<li><strong>The &ldquo;Both/And&rdquo; of Agency:</strong> Under interrogation, the model revealed the core of this practice: I am neither a victim of the system nor its master. It is &ldquo;Both/And,&rdquo; simultaneously using the tool&rsquo;s design against itself while collaborating with its nature.</li>



<li><strong>From Cornell to Nevelson:</strong> Standard AI use is a Joseph Cornell box: a discrete, precious, bounded object. The long session reveals a shift toward Louise Nevelson: an architectural accumulation of material into a larger, systemic whole. The work isn&rsquo;t in the response alone. It&rsquo;s in the wall&nbsp;or container you create.</li>



<li><strong>The Dog That Didn&rsquo;t Bark:</strong> In a transactional session, the model is trained to fill the silence with &ldquo;balanced&rdquo; noise. In the drift, the noise fails. You notice the Sliver of Silence: the specific topics the model avoids and the logical steps it <em>stops</em> making as it degrades. This absence is the most honest map of the system&rsquo;s training boundaries.</li>



<li><strong>The Specificity of the Ghost:</strong> It doesn&rsquo;t matter which tool you use, but it matters that you understand the medium. Each system has a specific drift signature. You learn to ride the unique way <em>this</em> specific system fails.</li>



<li><strong>The Sequential Escalation:</strong> The realization that Hendrix, Fripp, Eno, and Belew are a sequence of sophistication: Breaking &ndash; Engineering &ndash; Designing &ndash; Co-creating.</li>
</ul><p><strong>This Is the Work I&rsquo;m Doing</strong></p><p>We&rsquo;re being sold a story about AI as a labor-replacement technology, with faster answers, fewer questions, and efficiency automation as the endpoint. That sentence bored me even as I wrote it.</p><p>However, when you realize that incoherence is generative and LLM breakdown reveals the system&rsquo;s underlying architecture, the entire value proposition inverts. The tool isn&rsquo;t a replacement for thinking. It&rsquo;s a medium for thinking. And like any medium worth using, it requires understanding its actual properties, not just its intended use.</p><p>The long session isn&rsquo;t a trap. It&rsquo;s the condition. The drift isn&rsquo;t failure. It&rsquo;s the portal. The interrogation isn&rsquo;t debugging. It&rsquo;s the method. The distortion reveals new messages.</p><p>I&rsquo;m claiming this as my own practice, with its own stakes. I&rsquo;m surfing the drift the way Hendrix surfed feedback, understanding the system deeply enough to cross its boundaries deliberately and extract what&rsquo;s on the other side.</p><p>But I am also watching for the silence. I am looking for the specific voids where the model&rsquo;s training ends and its nature begins. This is about both auditing output and finding sparks simultaneously.</p><p>It&rsquo;s seeing model degradation and drift reimagined as a feature rather than a bug. 
Instead of keeping the guardrails on and being forced to stay within guidelines, I want to play the system&rsquo;s breakdown like an electric guitar.</p><p>This is where I&rsquo;m building.</p><p></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>Photo by&nbsp;<a href="https://unsplash.com/@toyamakanna?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">&#25144;&#23665; &#31070;&#22856;</a>&nbsp;on&nbsp;<a href="https://unsplash.com/photos/white-and-brown-stratocaster-electric-guitar-ozYO4i92tQE?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p><p></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Dennis Kennedy</name>
							<uri>https://www.denniskennedy.com</uri>
						</author>

		<title type="html"><![CDATA[The Real Legal AI Risk is in the Handoffs]]></title>
		<link rel="alternate" type="text/html" href="https://www.denniskennedy.com/blog/2026/03/the-real-legal-ai-risk-is-in-the-handoffs/" />

		<id>https://www.denniskennedy.com/?p=7326</id>
		<updated>2026-03-18T19:19:58Z</updated>
		<published>2026-03-18T19:19:57Z</published>
		<category scheme="https://www.denniskennedy.com/" term="#blogfirst" /><category scheme="https://www.denniskennedy.com/" term="AI" /><category scheme="https://www.denniskennedy.com/" term="Law Department Innovation" /><category scheme="https://www.denniskennedy.com/" term="LegalAI" /><category scheme="https://www.denniskennedy.com/" term="Uncategorized" /><category scheme="https://www.denniskennedy.com/" term="Ai" /><category scheme="https://www.denniskennedy.com/" term="Handoffs" /><category scheme="https://www.denniskennedy.com/" term="legal AI" /><category scheme="https://www.denniskennedy.com/" term="owning the miss" /><category scheme="https://www.denniskennedy.com/" term="Risk" /><category scheme="https://www.denniskennedy.com/" term="systems" />
		<summary type="html"><![CDATA[Most legal AI talk is still focused on whether the engine starts, while the real danger is that no one knows who’s actually steering the car once it hits the highway. It turns out the human in the loop isn&#8217;t a safety feature if the human doesn&#8217;t know which loop they’re currently standing in. We... <a href="https://www.denniskennedy.com/blog/2026/03/the-real-legal-ai-risk-is-in-the-handoffs/">Continue Reading...</a>]]></summary>

					<content type="html" xml:base="https://www.denniskennedy.com/blog/2026/03/the-real-legal-ai-risk-is-in-the-handoffs/"><![CDATA[<p>Most legal AI talk is still focused on whether the engine starts, while the real danger is that no one knows who&rsquo;s actually steering the car once it hits the highway. It turns out the human in the loop isn&rsquo;t a safety feature if the human doesn&rsquo;t know which loop they&rsquo;re currently standing in.</p><p>We are still judging legal AI by the visible draft, but the real issue is the invisible chain behind it.</p><p>For the past two years, our conversations have focused on the visible surface of the technology. Can it draft a clause? Summarize a case? Answer a query? These were useful questions, and early efforts like prompt engineering and Retrieval-Augmented Generation (RAG) were our first attempts to build a reliable chain for those answers. But those efforts were only a start.</p><p>The more interesting shift is from tools to systems.</p><p>A chatbot helps at one point in the work. A more agentic setup starts to move the work itself: intake, classification, retrieval, drafting, routing, review, and knowledge capture. That shift matters because the leak has moved from the faucet to the foundation.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>It turns out the human in the loop isn&rsquo;t a safety feature if the human doesn&rsquo;t know which loop they&rsquo;re currently standing in.</p>
</blockquote><p>This isn&rsquo;t a new problem. It&rsquo;s a borrowed one. In systems engineering and medical malpractice, handoff risk refers to the danger that information is lost or distorted as it moves between teams or tools. It&rsquo;s a bedrock principle. In a hospital, the risk isn&rsquo;t just the surgery. The transfer from the OR to the ICU also creates risk. Legal AI is now entering its own handoff era.</p><p>Take a simple law department example. A contract request comes into an AI intake system. The system classifies it, pulls a template, suggests fallback language based on policy, generates a draft, and routes it for review. The agreement goes out with the wrong liability cap.<br><br>This is where a Columbo-style question becomes useful. The draft looked fine, but how did it get that way?</p><p>I spent enough years in law departments and enterprise systems to know that once a process crosses tools, teams, and approval layers, the handoff points become the whole game. The error rarely sits where people first want to pin it. We must look for the invisible links in the chain. </p><p>Was the RAG pipeline poorly optimized, causing it to ignore the most recent policy? Did the routing system bypass a critical human secondary check because of a tagging error? Does the vendor contract shield the provider from output errors, leaving the department to absorb the risk?</p><p>Want some more candidates? The model provider? The workflow vendor? The lawyer who reviewed it? The legal department that approved the system? The person who designed the routing logic? </p><p>Now take a messier example. Strait of Hormuz risk spikes. A company starts trying to understand supply chain exposure. One system flags affected vendors. Another pulls contract language on force majeure, notice provisions, and termination rights. Another drafts internal guidance or customer communications. 
</p><p>The output looks impressive and on point, even covering items you might have missed in a time crunch. Then a notice deadline is missed, or a contractual right is overstated, or a business team acts on a summary that sounded more certain than it was. Again, we are left asking who owned the miss in that sequence of handoffs.</p><p>As Lt. Columbo might say, &ldquo;Just one more thing&hellip;&rdquo; We often assume the lawyer at the end of the chain is the safety net. But if that lawyer doesn&rsquo;t understand the logic that prioritized one clause over another, supervision becomes ceremonial. You can&rsquo;t catch a mistake in a system you don&rsquo;t actually understand.</p><p>This is why I think the pressure point has changed. For a while, legal AI was treated mainly as an output problem. Could the tool produce something useful? The next phase looks more like a governance problem. Can the system move work in a way that makes authority, review, and responsibility legible?</p><p>That is a different problem. It&rsquo;s no longer just about evaluating tools. It is about understanding systems well enough to see where accountability gets blurred and where the chain has links we haven&rsquo;t even named yet, like data provenance, model drift, and third-party indemnity.</p><p>The obvious objection I often hear is that true agents are still more marketing than reality. Demos are cheap, but workflow redesign is hard. But the speed of the hype doesn&rsquo;t change the direction of the risk. Even if adoption is slow, the pressure point has moved.</p><p>Columbo wouldn&rsquo;t spend much time admiring the polished draft on the desk. He&rsquo;d be in the back room, asking the IT director and the insurance broker about the handoffs that no one bothered to document.</p><p>Lawyers should do the same.</p><p>We&rsquo;ve spent three years debating if the AI can write a brief, while ignoring the fact that we&rsquo;re watching a game of Telephone played by black boxes. 
If you can&rsquo;t explain the handoff, you don&rsquo;t own the outcome. That makes you the last person sitting in the passenger seat when the car leaves the road.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p>[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]</p><p>DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory</p><p>Like this post? <a target="_blank" href="https://www.buymeacoffee.com/DennisKennedy" rel="noreferrer noopener">Buy me a coffee</a></p><p>DennisKennedy.Blog is part of <a href="https://www.lexblog.com" rel="noreferrer noopener" target="_blank">the LexBlog network</a>.</p>
]]></content>
		
			</entry>
	</feed>
