<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>3 Geeks and a Law Blog</title>
	<atom:link href="https://www.geeklawblog.com/feed" rel="self" type="application/rss+xml"/>
	<link>https://www.geeklawblog.com/</link>
	<description>Where legal technology, innovation, and creativity are discussed.</description>
	<lastBuildDate>Tue, 05 May 2026 02:44:11 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.5&amp;lxb_maple_bar_source=lxb_maple_bar_source</generator>

<image>
	<url>https://geeklawblog.lexblogplatform.com/wp-content/uploads/sites/528/2018/02/cropped-geeks-icon-32x32.png</url>
	<title>3 Geeks and a Law Blog</title>
	<link>https://www.geeklawblog.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<itunes:explicit>no</itunes:explicit><copyright>(c) 2025</copyright><itunes:image href="https://is1-ssl.mzstatic.com/image/thumb/Podcasts112/v4/c4/f7/3a/c4f73a4a-8060-6a14-ca34-49f95f10ad04/mza_4553798616761549775.jpg/300x300bb.webp"/><itunes:keywords>3 Geeks, TGIR, Geek in Review, Legal Technology, Legal Tech, Legal AI</itunes:keywords><itunes:summary>Greg Lambert and Marlene Gebauer discuss technology, innovation, and creativity in the legal industry.</itunes:summary><itunes:subtitle>Where Innovation Meets the Legal Industry</itunes:subtitle><itunes:category text="Technology"><itunes:category text="Podcasting"/></itunes:category><itunes:author>Greg Lambert</itunes:author><itunes:owner><itunes:email>xlambert@gmail.com</itunes:email><itunes:name>Greg Lambert</itunes:name></itunes:owner><item>
		<title>Flatiron Law Group’s Lennie Nuara on Talent-First AI, M&amp;A Workflows, and the Future of Legal Practice</title>
		<link>https://www.geeklawblog.com/2026/05/flatiron-law-groups-lennie-nuara-on-talent-first-ai-ma-workflows-and-the-future-of-legal-practice.html</link>
		
		
		<pubDate>Tue, 05 May 2026 02:44:11 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Flatiron Law]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Law firm strategy]]></category>
		<category><![CDATA[legal AI]]></category>
		<category><![CDATA[Legal Innovation]]></category>
		<category><![CDATA[legal technology]]></category>
		<category><![CDATA[M&A workflows]]></category>
		<category><![CDATA[podcast]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19269</guid>

					<description><![CDATA[This week on The Geek in Review, we talk with Lennie Nuara, co-founder of Flatiron Law Group, about what it means to build a talent-first, AI-powered legal practice. Nuara brings a rare mix of lawyer, technologist, operator, and systems thinker to the conversation, drawing from decades of experience using technology to improve legal work, from... <a href="https://www.geeklawblog.com/2026/05/flatiron-law-groups-lennie-nuara-on-talent-first-ai-ma-workflows-and-the-future-of-legal-practice.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>This week on The Geek in Review, we talk with <a href="https://www.linkedin.com/in/leonardnuara/">Lennie Nuara</a>, co-founder of <a href="https://flatironlaw.ai/">Flatiron Law Group</a>, about what it means to build a talent-first, AI-powered legal practice. Nuara brings a rare mix of lawyer, technologist, operator, and systems thinker to the conversation, drawing from decades of experience using technology to improve legal work, from early portable computers and databases to today&rsquo;s generative AI tools.</p><p>Nuara explains why he resists the phrase &ldquo;AI-first&rdquo; in legal practice. For him, legal work begins with talent, judgment, and expertise. AI enters as a force multiplier, not the driver. At Flatiron, the firm&rsquo;s model was already built around flat fees, lean staffing, process discipline, and structured data before generative AI entered the picture. AI now adds more horsepower to a system already designed to reduce waste, repeat touches, and unclear workflows.</p><p>Much of the discussion focuses on M&amp;A due diligence, where Flatiron rethinks the deal life cycle from intake through closing. Instead of throwing documents into a massive repository and hoping AI sorts it out, Nuara describes breaking work into smaller pieces: diligence questions, responses, documents, clauses, topics, closing checklists, and reports. That structure lets lawyers use AI for deduplication, extraction, clause comparison, first-pass drafting, and issue spotting while keeping human judgment between higher-risk steps.</p><p>Nuara also warns against getting seduced by polished AI output. He describes generative AI as persuasive, fluent, and sometimes dangerously average.
The bigger risk, in his view, is less hallucination and more &ldquo;model monoculture,&rdquo; where legal drafting drifts toward sameness because models train from overlapping bodies of public material. In complex private transactions, average language is often the wrong answer. Lawyers still need to understand leverage, client priorities, risk allocation, and where to push beyond market terms.</p><p>The episode closes with a look at pricing, training, and the future structure of law firms. Nuara argues that AI will pressure the billable hour, change junior lawyer training, and force firms to rethink the traditional pyramid. He also raises a practical concern from the early Westlaw and Lexis days: the cost of the tool matters. Flatiron tracks AI usage down to the clause level, treating tokens as part of matter economics. For legal professionals watching AI reshape transactions, this conversation offers a grounded reminder: better tools matter, but better process and better judgment still decide the outcome.</p><p><strong>Listen on mobile platforms:</strong> <a href="https://podcasts.apple.com/us/podcast/the-geek-in-review/id1401505293">Apple Podcasts</a> | <a href="https://open.spotify.com/show/53J6BhUdH594oTMuGLvANo?si=XeoRDGhMTjulSEIEYNtZOw">Spotify</a> | <a href="https://www.youtube.com/@thegeekinreview">YouTube</a> | <a href="https://thegeekinreview.substack.com/">Substack</a></p><p>[Special Thanks to <a href="https://www.legaltechnologyhub.com/">Legal Technology Hub</a> for sponsoring this episode.]</p><p><iframe title="Spotify Embed: Flatiron Law Group&amp;apos;s Lennie Nuara on Talent-First AI, M&amp;A Workflows, and the Future of Legal Practice" style="border-radius: 12px" width="100%" height="152" frameborder="0" allowfullscreen allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy" src="https://open.spotify.com/embed/episode/6NqNVDwVPNpCxfBTP53YGR?si=3b1gjfB3RuuourfjeCsF7g&amp;utm_source=oembed"></iframe></p><p><a href="https://www.youtube.com/watch?v=XoswheQpnpU"><img style=" max-width: 100%; height: auto; " src="https://www.geeklawblog.com/wp-content/uploads/sites/528/embed_thumbs/XoswheQpnpU.png"></a></p><p>Email: geekinreviewpodcast@gmail.com<br>
Music: Jerry David DeCicca</p><h5>Transcript:</h5><p><span id="more-19269"></span></p><p>Greg Lambert (00:00)<br>
Hey everyone, I&rsquo;m Greg Lambert with the Geek in Review and I&rsquo;m here with our friend Nikki Shaver from Legal Technology Hub. And Nikki, you have a new premium content layer out with litigation case management. Do you mind giving us some insight on that?</p><p>Nikki Shaver (00:16)<br>
Yeah,</p><p>absolutely. Hi Greg, hi everyone. So one of the interesting things over the past year is that when generative AI first launched into legal, we saw a massive uptake of solutions in contracts and transaction management, as well as the broad AI legal assistant solutions. It took a little while for litigation to follow suit, but for those who have been listening for a while, you&rsquo;ll know that</p><p>sort of midway through last year, we saw a massive rise, all of a sudden kind of an explosion in litigation solutions. And that&rsquo;s been really exciting. A lot of things coming out that really go beyond what was available pre-gen AI, you know, things that can manage facts and provide you with insights into where there might be inconsistencies in testimony, all kinds of tools that frankly, I wish I&rsquo;d had available to myself when I was a litigator. So we, as many of you know, we publish</p><p>what we call premium categories on Legal Tech Hub. These are collections of really in-depth content that allow buyers to review solutions in a particular category, evaluate them, provide the tools with which to evaluate solutions in that category. And the good news is that in May, we are launching our premium category for litigation case management solutions, including a lot of these newer generative AI driven solutions, as well as</p><p>some of the incumbents that now have gen AI features. So log in to legaltechnologyhub.com and if you&rsquo;re a premium subscriber, you&rsquo;ll be able to access all of that good content around new litigation solutions. And if you are not yet a premium subscriber, you can reach out to us and you know what? You can easily become one and access that content.</p><p>Greg Lambert (02:01)<br>
Well thanks, Nikki. It&rsquo;s good. The litigators always felt a little left out on the AI tools, so this is good news.</p><p>Nikki Shaver (02:06)<br>
Yeah,</p><p>yeah, exactly, now they get to catch up.</p><p>Marlene Gebauer (02:18)<br>
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I&rsquo;m Marlene Gebauer</p><p>Greg Lambert (02:25)<br>
And I&rsquo;m Greg Lambert and today Marlene, we are joined by Lennie Nuara, who&rsquo;s co-founder of Flatiron Law Group and a nationally recognized authority on technology, internet law, cybersecurity, privacy, M&amp;A and complex commercial litigation. I&rsquo;m sure there&rsquo;s a longer list than that.</p><p>Marlene Gebauer (02:43)<br>
There is a longer list. He does it all.</p><p>Lennie brings a rare mix of lawyer, operator, technologist, and problem solver experience from rebuilding a law firm&rsquo;s technology infrastructure after 9-11 to helping launch regulated trading platforms and now building an AI first model for legal practice. So Lennie, welcome to the Geek in Review.</p><p>Lennie Nuara (03:05)<br>
Thank you. Thank you both very much for having me on.</p><p>Greg Lambert (03:08)<br>
Well, I remember when we had Conrad Everhard, the co-founder, on, he would constantly refer to Lennie did this, Lennie did that. So it&rsquo;s great to see the infamous Lennie on the show here. So, but when Conrad joined us before, we talked a lot about Flatiron&rsquo;s flat-fee M&amp;A model and how it was challenging the traditional Big Law pyramid.</p><p>With Flatiron.ai, it feels like the model has evolved from just this alternative fee structure plus technology into something closer to what you had referred to, before we started recording, as a talent-first AI-operated system for legal work. So Lennie, I want to ask you, when you say talent-first AI-operated, what does that actually mean</p><p>for the practice?</p><p>Lennie Nuara (04:05)<br>
Yeah,</p><p>so yeah, because a lot of people say, you know, AI native, and I don&rsquo;t like the emphasis on the AI first. The practice of law is all about talent. That&rsquo;s the number one thing that anyone hires. You have a problem, you want to find the best that you can afford to handle that issue. You don&rsquo;t hire AI, and I&rsquo;m not trying to malign anybody with regard to the AI first label. That&rsquo;s not the point, but the point is, it&rsquo;s talent, and it&rsquo;s talent that has to drive the AI. So as a firm we started</p><p>over nine years ago now, and we were essentially leveraging technology significantly before the advent of AI. But still it was talent driven, and then I started building systems. I have a degree in computer science, and I&rsquo;ve been playing with these tools for a long, long time, since the creation of that machine behind me. And the idea is to take a tool like that one, or AI today, which is a long way forward, and</p><p>leverage that tool in the practice every single way you can. And I&rsquo;ve been doing that in my practice for the 42-some-odd years I&rsquo;ve been doing it: where I can use tech to enhance the practice, I use it. It started with word processing, later on using databases to find embezzlers and get the money back, and building flat fee litigation support.</p><p>Nationwide we did flat fee litigation in asbestos, didn&rsquo;t really matter where, but I tried to leverage the use of the technology, but it&rsquo;s always driven by talent. What we do as counsel has to be not just in the loop. I just don&rsquo;t like that phrase. We&rsquo;re not just in the loop. We should be driving it. So the AI in the practice, I&rsquo;ll call it talent-first AI-powered, that&rsquo;s another phrase that I use:</p><p>using AI to support the lawyers, which you guys know how that works, okay? But it&rsquo;s in the nature of the architecture where the AI comes up. 
So, transactions, which we do the most of, complex transactions. M&amp;A is our first area, but we do other complex transactions, software development, commercial agreements. And we always say, okay, well, what needs to happen in this deal? There&rsquo;s a repetitive aspect, and we&rsquo;ll talk about that later as we dig into this.</p><p>But there&rsquo;s a repetitive aspect of things. Well, you know, we don&rsquo;t have to type anymore the same way we used to. We can generate things. We can essentially break things into pieces and use AI to analyze that. There&rsquo;s a variety of things that are available, and the AI is an absolutely fantastic tool. It&rsquo;s also an incredibly dangerous tool in the wrong hands, like a Lamborghini in a 16-year-old&rsquo;s hands. It&rsquo;s a great car. It&rsquo;s not the car&rsquo;s fault</p><p>that it wrapped around a tree. It&rsquo;s the kid who drove it into the tree.</p><p>Marlene Gebauer (06:53)<br>
you</p><p>Greg Lambert (06:55)<br>
I know some 57-year-olds that probably shouldn&rsquo;t be driving.</p><p>Marlene Gebauer (06:57)<br>
hahahaha</p><p>Lennie Nuara (06:58)<br>
So</p><p>it&rsquo;s a tool, it&rsquo;s a great tool. And if you use it, so within our firm, we&rsquo;re taking that tool, which we started with paralegals nine years ago. I was using paralegals that were not really paralegals. I called them that, but they were honestly stay-at-home moms that did things remotely for me and built out my database on every transaction we did. I would extract data</p><p>from every agreement or every document, every invoice, whatever, with people typing and people reviewing. And they cost $35 to $50 an hour versus what it would have been within a firm for a client. And we just rolled that into our flat fee. And that was an example of architecture and structure and workflow that didn&rsquo;t even have AI. And then as AI came into being, and by the way, it was done manually, then it was</p><p>done with databases. We use a lot of different tools that are commercially available. And now AI just feeds that whole process with, we call it, higher horsepower, the Lamborghini level, which is great.</p><p>Greg Lambert (07:59)<br>
And just for the people that are just listening to this, the machine behind Lennie is an old Osborne, I think they call it a portable computer, right?</p><p>Marlene Gebauer (08:08)<br>
Yes.</p><p>Lennie Nuara (08:08)<br>
Yes, yes, it was 25 pounds. It is.</p><p>It still is 25 pounds, and I have the 37-inch sleeves to prove the fact that I carried it through law school. OK, I honestly do. My sleeves are 37. It&rsquo;s just an example of a tool that was used in my practice early on, and actually in my school, and AI is another tool, and it&rsquo;s a wonderful, wonderful addition to our portfolio.</p><p>Marlene Gebauer (08:11)<br>
Portable.</p><p>Greg Lambert (08:13)<br>
Yeah.</p><p>Hahaha</p><p>Marlene Gebauer (08:17)<br>
Hahaha.</p><p>So Lennie, let&rsquo;s talk a little bit about workflows. When you talk to people at firms, it seems that they&rsquo;re trying to bolt AI onto an existing process. So it would help me do my diligence memo. It helped me do a contract summary. It&rsquo;s helping me with drafting. But Flatiron, as you&rsquo;ve noted, is starting from a different place. You&rsquo;re redesigning the M&amp;A workflow</p><p>around the assumption that AI is part of the matter from intake through closing. So what parts of the deal life cycle have had to be rethought? And where did you discover, like, the old workflow simply didn&rsquo;t make any sense anymore?</p><p>Lennie Nuara (09:20)<br>
All right, so that&rsquo;s a great question. So let me break it apart a little bit.</p><p>Marlene Gebauer (09:24)<br>
It&rsquo;s a hard question</p><p>because I think people are having a hard time wrapping their heads around that.</p><p>Lennie Nuara (09:30)<br>
So let&rsquo;s do some level setting. First thing, we didn&rsquo;t rethink, like, deal team strategy or negotiation calls, or what the read is on a counterparty, or, you know, what the calibration of risk is.</p><p>Greg Lambert (09:43)<br>
You don&rsquo;t have their</p><p>agent talk to your agent and then just come up with it.</p><p>Lennie Nuara (09:48)<br>
Yeah, no, I do not. In fact,</p><p>I wouldn&rsquo;t even want to try to create another agent of what is Conrad. That would be, you know, like&hellip; Yes, so you wouldn&rsquo;t want another one. I always said, if I had an agent, I&rsquo;d still be working and my agent would be on the beach. OK, so it&rsquo;s like it doesn&rsquo;t always work. But so, first, it&rsquo;s an AI-powered practice.</p><p>Marlene Gebauer (09:54)<br>
If anybody knows Conrad, that&rsquo;s really funny.</p><p>Greg Lambert (10:05)<br>
Yep.</p><p>Lennie Nuara (10:11)<br>
We&rsquo;re not taking any of those away, and this is true in litigation, in pure transactional work, regulatory work, whatever. There&rsquo;s a core that doesn&rsquo;t change. But in terms of the practice and the flow, what we&rsquo;ve done is we re-architected, or I&rsquo;d call it re-engineered, the practice of the way we do deals. And so from the very, very beginning, in our workflow we break things into smaller pieces. So for example, due diligence.</p><p>Most of the products that you see out there focus on analyzing the diligence, what&rsquo;s in. We start even before that. We get the diligence on behalf of the clients, buy side, sell side. We put all the questions up on the platform. People respond to the questions on the platform and they&rsquo;re tracked on the platform from the very beginning of the deal, the start. And so immediately we have all this data with regard to what&rsquo;s done, what&rsquo;s not done, who&rsquo;s done it, what documents relate to what questions.</p><p>And then we take those documents and we burst them and we extract data from every single document. No different than what I did with paralegals nine years ago. We&rsquo;re doing that now with AI and HI, but it&rsquo;s human driven. The lawyers pick what they&rsquo;re worried about, what the issues are, how to extract, and they always view everything that happens in pieces. One of the things that we don&rsquo;t do, and we&rsquo;ll talk about it probably a couple times, is we don&rsquo;t boil the ocean. We don&rsquo;t throw all the documents into a database</p><p>and say, OK, search this, search that. We break things down from the very beginning. There are areas, categories, subcategories, topics, the questions, the documents that relate to that stream. And then we break those down into the individual clauses or elements that are part of that request and those documents and so on. 
So it&rsquo;s small, small, really small, tiny, microscopic, all the way down so that</p><p>you&rsquo;re not at the word level, but you may be at the sentence or the phrase level, and now you have data points on all that. And now you can start to build essentially answers to things like requests, reps and warranties, closing checklists, and so on. So yes, we redrafted, or recreated, what we do, when we do it, and how we do it from beginning to end. So it starts with the diligence,</p><p>not the diligence documents, the actual creation of the response to the diligence, through to looking at that diligence to generate reports, helping the client essentially produce a report that might respond to which contracts can be assigned or can&rsquo;t be assigned or need consent, to other touch points that may be, these are very valuable customers, and so on. All that used to be done by humans and it still will be, but on a, we&rsquo;ll call it, sliding scale.</p><p>But the more volume there is, you may use more AI to extract all that stuff. If it&rsquo;s a small case, you may not use a lot of AI, just a little bit, because there&rsquo;s not a lot of documents. But on average for us, it&rsquo;s 500 to 1,000 diligence questions per deal. And that used to be handled by a lot of human talent. We don&rsquo;t need as much of that anymore. That&rsquo;s not a bad thing. It&rsquo;s just a different thing.</p><p>No different than the mechanization of any manufacturing line. They used to build cars one at a time. They used to build houses one at a time. Now they&rsquo;re being automated with regard to construction of homes, construction of buildings. You bring in equipment, you can do it faster and better. Some of the early class might be displaced, but ultimately they&rsquo;ll be pushed off to do other things and other things well. That&rsquo;s just an example.</p><p>Greg Lambert (13:38)<br>
It sounds like</p><p>one of the things that you&rsquo;re doing, that I know Marlene and I talk a lot about, that firms still struggle with, is you&rsquo;ve got to start with a solid set of data to work with. If you start with messy processes and data, you&rsquo;re going to amplify that messy process and that messy data rather than clean it up.</p><p>Lennie Nuara (14:02)<br>
Yes.</p><p>Greg Lambert (14:04)<br>
That&rsquo;s, you know, you&rsquo;re doing M&amp;A, you&rsquo;re doing due diligence, and you&rsquo;re breaking it down into pieces, and you can apply that to pretty much any practice. And that&rsquo;s something that I think people need to understand, you know, kind of wrap their heads around: how to break that into those pieces and clean that up.</p><p>Lennie Nuara (14:20)<br>
Yeah, it flows, absolutely, it flows specifically from the talent, right? So I did work for a friend of mine, he&rsquo;s in the financial services sector, not legal at all. And he saw what I was doing years ago with regard to databases and extracting data from documents, I called it turning documents into data. And he hired me on the side in a non-legal capacity to help build a database for them where we were extracting the data</p><p>and all the reports, the analytics that they were generating within the firm. And it gave them a view of something that they never had before. And it just took some time. I needed their help. I said, well, what&rsquo;s important to you? What are you looking for? We&rsquo;re trying to find this report we did, and they&rsquo;re all buried in 30 years of writing reports in the financial services sector. And it&rsquo;s the same thing that we can do as counsel. There is data in all that we do, data in terms of the steps,</p><p>but data in terms of the things that we&rsquo;re moving around, the words, the paragraphs, the documents. That data can be captured and used in a variety of ways to improve the practice of law and the accuracy of what we&rsquo;re doing on a go-forward basis down that transaction scheme or through litigation or other.</p><p>So, one last piece of that. So it required taking a step back, and the step back happened in the following manner, right? Conrad was doing deals for his whole career. I was always riding on the outside of deals. We were at different firms many years ago, but we were friends and we joined forces when we formed Flatiron. He&rsquo;s like, hey, Lennie, can you help me run these deals? And I&rsquo;m like, okay, I got my practice, but I&rsquo;ll help you do the M&amp;A stuff. You know, he said, yeah, we&rsquo;re going to flat fees. I&rsquo;m like, okay. And then I look</p><p>Greg Lambert (15:37)<br>
Cheers.</p><p>Lennie Nuara (16:03)<br>
at the way he did it. And it was incredibly inefficient, because they would touch the documents once and then touch the documents again and touch the documents again. So you know, during the pre-LOI phase and the diligence, then during reps and warranties, we look at the documents again, then we look at them again for closing checklists and then post-closing integration. I&rsquo;m like, this is nice. You&rsquo;re going to go look for those same documents four times. So I started reengineering the process out of it. You know, basically the desire to be more efficient, and then allowing us to really hone</p><p>those flat fee quotes. And that just required, you know, a closer analysis of what&rsquo;s important and what&rsquo;s not. And as I said earlier, that can happen in any domain, right? It can be transactional work, it could be litigation, could be regulatory. It&rsquo;s just that if it&rsquo;s driven by the right driver, somebody that has the intellect of what happens, and they take a step back and say, hey, is there another way to do this that&rsquo;s more efficient? Then it should be applied. And the tools&hellip;</p><p>have gotten significantly better where you can do that. Now, I did it initially with paralegals and databases. Now I&rsquo;m doing it with AI and databases, but still databases, because everything we do as lawyers, and lawyers don&rsquo;t want to hear that, but everything we do as lawyers is data driven. It really is. We&rsquo;re not significantly different than the Street. I used to work on Wall Street, but we&rsquo;re not significantly different.</p><p>Greg Lambert (17:21)<br>
Well, I mean, so far in all of this, I mean, we&rsquo;ve touched a little bit on the periphery about what you&rsquo;re doing with the AI. But really, a lot of your foundational work here was, again, cleaning up the data, getting your processes right, understanding how many times you need to touch a document, and reducing that overall.</p><p>So as you develop this M&amp;A tool around that style of model, now we want to understand how do you bring in the AI part of it? Are you looking at&hellip;</p><p>kind of compacting the different steps, or how are you throwing the AI at the process that you already seem to have made very efficient?</p><p>Lennie Nuara (18:10)<br>
Okay, so in the first instance, there are different ways to use the AI. So I wouldn&rsquo;t want to say it&rsquo;s all AI; it&rsquo;s AI in this style here, in that style there, and another style someplace else. So for example, you can use AI to de-duplicate, for example, all the due diligence questions. In the 500 to 1,000 questions, I guarantee you there&rsquo;s like at least a 10 and sometimes a 30% overlap,</p><p>which is nuts. And I&rsquo;ve had clients literally just collapse under the weight of that. And so just deduping things. And actually I have a scale, and the scale runs from exact match to similar to, you know, not exact, but still on the same topic and so on. And I can present that. And then I push it back into the due diligence layout that we have so that the buy side sees,</p><p>by the way, this is the same question you&rsquo;ve asked now four times, but it refers back to this question. And so our answer is going to be different. But then, you know, everybody can see that and they see the numbers. So that&rsquo;s one way. If you&rsquo;re drafting documents, it&rsquo;s another use of AI. Much higher risk profile than finding duplicates, right? If you miss a duplicate, OK, someone says, damn, I got to answer the same question twice. OK, OK. Or if you point to something as a duplicate and it&rsquo;s not,</p><p>Greg Lambert (19:08)<br>
you</p><p>Lennie Nuara (19:33)<br>
somebody comes in and says, hey, no, they&rsquo;re different questions. So please answer them both. Nobody so far has said, please answer them both even though they&rsquo;re the same. Usually we just point them to the other answer. But if you&rsquo;re drafting, or if you&rsquo;re doing contract lifecycle management, you&rsquo;re doing review, or you&rsquo;re trying to go against a model like a playbook or something like that, the risk is significantly different. And you have to know that you&rsquo;re using the tool differently for different things.</p><p>So in the first instance, we apply AI wherever we can. It&rsquo;s the very simplest thing of organizing the information during the due diligence process of collecting and analyzing. But then as you go further downstream, you have a higher and higher risk value that you would place on a potential mistake or the misuse of the AI. And so I look at that as still very valuable, but how do I use that?</p><p>We use that tool, AI, for, let&rsquo;s say, comparing clauses, or using the tool to help draft a version of a document, and we evaluate it so the lawyer that&rsquo;s working on it can look. So for example, a closing checklist has 30 different documents, 50 different documents that have to be generated, from the assignment consent letters to FIRPTA letters, these letters that have to go out to confirm certain regulatory compliance, a</p><p>whole litany of things. Some of those letters are standard fare. They really are. And if you can extract the to and the from and the section of the agreement and so on and so forth, which then can be eyeballed by a partner or an associate to confirm that they&rsquo;re correct and they can go out, that&rsquo;s a great use of the tool. 
In another realm, if you&rsquo;re actually drafting the master agreement, that&rsquo;s a much more difficult ask of the AI.</p><p>And maybe you will, but you&rsquo;re going to give it a lot of feedstock, a lot of documents that will essentially frame up what you&rsquo;re looking for. And then you&rsquo;re going to have to have a really serious analysis of the quality of it. I found, and this is two years ago, let&rsquo;s say, you could look at a document that&rsquo;s generated by AI, and I swear, it looks fabulous. It reads incredibly well. And as counsel,</p><p>we get sucked into that. It&rsquo;s like, oh my God, this is done. We&rsquo;re done. Then you take a step back and say, wait a minute, it didn&rsquo;t address this, it didn&rsquo;t address this, it did it backwards. It&rsquo;s like, oh my God. Oh yeah, it said something really well. It&rsquo;s the articulate con man. And I used the phrase last year at Legal Innovators: everyone&rsquo;s hot in the back room getting high on gen AI. It&rsquo;s like, oh, this is awesome.</p><p>Greg Lambert (21:55)<br>
Didn&rsquo;t actually say anything. But it said it well. But it said something well. I&rsquo;m not sure what that something is.</p><p>Marlene Gebauer (21:58)<br>
It said something, but it didn&rsquo;t say everything you needed to know. Yeah, yeah, yeah.</p><p>Greg Lambert (22:14)<br>
Ha ha</p><p>ha!</p><p>Lennie Nuara (22:15)<br>
Holy cow. Wow. It&rsquo;s like, wait a minute, wait a minute, you know, like, I&rsquo;m not experimenting with drugs, I really know what I&rsquo;m doing. No, no, you&rsquo;re not. So AI can have that tendency. So as you move down the spectrum of its use, you recognize how it&rsquo;s going to be used, and then you take a moment. One of the things that we do is break things into pieces. I&rsquo;ve mentioned it before: we parse and break things down. The smaller the pieces that you give an AI,</p><p>Greg Lambert (22:24)<br>
Ha ha ha ha.</p><p>Lennie Nuara (22:44)<br>
the higher the likelihood of success you will have with it; the bigger the task that you give it, the worse it will be. And the reality is, that&rsquo;s the same with humans. If I give an associate 10 things to do, some of them are going to come back wrong. Okay. If I say, just do this, just do that, and so on. And I&rsquo;ve been known to be a micromanager, as you can probably already tell, and I get a lot of grief for that. But the reality is that when you micromanage,</p><p>it takes more of my time, but ultimately the product, and the training to the student or the mentee or the associate, is incredibly more valuable. Yes, they can wander in the forest on their own for hours or weeks at a time and produce materials that then I would edit and so on. But I find if you do things in pieces, it works out much better with humans, and it works out much better with AI. I&rsquo;ve thought about this for a long time, and I</p><p>used to run hiring at some of the firms I was at, and so on. So it&rsquo;s a big thing to me to help bring up the younger ones and bring them through. And you&rsquo;ve got to deal with AI the same way. And so they&rsquo;re parallel.</p><p>Greg Lambert (23:50)<br>
How do you, Lennie, as</p><p>a self-proclaimed micromanager, how do you know when to stop? It&rsquo;s almost like doing research. When do you know, okay, I think that, at least for right now, this is where we need to stop, because otherwise we&rsquo;re getting diminishing returns?</p><p>Lennie Nuara (24:11)<br>
I</p><p>can&rsquo;t give you a quantum. That&rsquo;s a quality issue that the HI, the human intelligence factor, is critical on. I will tell you that a first, second, third, or fourth year will say, OK, it&rsquo;s done. Then a fourth, fifth, sixth, or seventh year will say, no, no, it needs to be fixed. And the partner says, you&rsquo;re both wrong, it&rsquo;s still not done. And that comes from wisdom. And it&rsquo;s the same thing. I would treat AI the same way you treat young associates,</p><p>but also break it into smaller pieces. That way you can essentially trust but verify, right? You can trust them to do something, but if you do it in small pieces, you&rsquo;ll find the mistake in that one place. So maybe it&rsquo;s finding the mistake in the hallucinated citation or in the logic that someone missed. I have a whole other big speech and article that I&rsquo;m working on about how it&rsquo;s not so much hallucinations that are the problem; it&rsquo;s a race to the mean with regard to the use of AI.</p><p>Maybe you&rsquo;ll ask me a question about that later. I&rsquo;ll let that pop out later.</p><p>Greg Lambert (25:14)<br>
Ha</p><p>Marlene Gebauer (25:16)<br>
I like how you were describing this. Like, if it&rsquo;s more complex, you have more chance of problems using AI. But two things. I think it&rsquo;s sometimes challenging for attorneys, or to explain to attorneys, that certain types of things are more complex than other types of things, and that you&rsquo;re going to have more of a chance of</p><p>not getting the results that you want doing one thing versus doing another thing. And so I&rsquo;m curious how you make that determination, or how you explain that to people that are working with you and working with the tool. And also, what about agentic workflows? Is that tackling some of this, because you are able to take something that is more complex and break it into steps?</p><p>Lennie Nuara (26:08)<br>
Yeah, so I&rsquo;ll do the second half first. So breaking things into steps is what I was talking about, right? And you can do that with agentic components. The key, from my perspective, is to stop and have HI, human intelligence, in between the steps. Many people are building agentic workflows. Great, OK, but there&rsquo;s no verification opportunity between the agents. That&rsquo;s no different than saying you&rsquo;re not going to look at each of the steps from the first associate,</p><p>the senior-most associate, the junior partner. That&rsquo;s just a recipe for disaster. You break things into pieces and then you can verify that. It&rsquo;s hard to judge all the steps upfront. But if you can reward the senior talent, the partner, with the ultimate goal of more efficiency later, they&rsquo;re going to have to invest more time</p><p>early to break down their process into smaller pieces. They know all the steps. They know where the issues or the problems will erupt. And if they take the time to look at their process with a critical eye and break it into pieces, that&rsquo;s an efficiency hit on them. They&rsquo;re not efficient, but hopefully it will be a multiplier for later when they invest the time now.</p><p>It&rsquo;s no different than investing time in an associate. You invest the time now and build the process in small increments along the way. Then you can build out the technology again with verified steps in between, to build it out and create reliability over time. But it is a time sink, and that&rsquo;s something I&rsquo;ve spent an inordinate amount of time on the past couple of years. So, you know, building Deal Driver, and working with Megan Ma at Stanford on Deal Mentor,</p><p>which is a negotiation simulation that&rsquo;s agentic-based and so on. And the amount of time that we&rsquo;ve devoted to those, and I&rsquo;m not bragging, it&rsquo;s just, if you want it to be right, you must spend the time.
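</p><p>The pattern Lennie describes, small agentic steps with human intelligence checkpoints in between rather than a fully automated chain, can be sketched roughly as follows. The step functions and the `run_with_checkpoints` helper are hypothetical placeholders, not any product&rsquo;s API.</p>

```python
from typing import Callable, List

Step = Callable[[str], str]

def run_with_checkpoints(steps: List[Step],
                         verify: Callable[[str, str], bool],
                         data: str) -> str:
    """Run each small step, then pause for HI (human intelligence) review.
    If a reviewer rejects a step's output, stop there instead of letting
    the error compound through the downstream agents."""
    for step in steps:
        out = step(data)
        if not verify(step.__name__, out):
            raise ValueError(f"reviewer rejected output of {step.__name__}")
        data = out  # only verified output moves to the next agent
    return data

# Illustrative stand-ins for real agentic components
def extract_clauses(text: str) -> str:
    return text + " | clauses extracted"

def summarize_risks(text: str) -> str:
    return text + " | risks summarized"

result = run_with_checkpoints(
    [extract_clauses, summarize_risks],
    verify=lambda name, out: True,  # in practice, a lawyer signs off here
    data="draft agreement",
)
```

The design point is the `verify` gate: a chain without it is the "no verification opportunity between the agents" problem, the equivalent of never reviewing the first associate&rsquo;s work before the next one builds on it.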
And I have the flexibility because I don&rsquo;t have the labor stack that existed when I was a partner at Greenberg Traurig or Thacher Proffitt or any of the other firms I worked at. I can do that. If I was at a firm, they&rsquo;d either say, okay, go for this, we&rsquo;re gonna switch your role, your numbers are gonna be different, and so on and so forth,</p><p>and hopefully we&rsquo;ll generate efficiency from it later. But we just did that investment, into our firm and into Deal Mentor. It&rsquo;s a hard thing to swallow for many firms. I don&rsquo;t begrudge them at all. It&rsquo;s hard if you&rsquo;re at a big firm and you&rsquo;re grinding through and you&rsquo;re making your numbers and you&rsquo;re doing well. Why switch? You&rsquo;ve got to be kidding me. I&rsquo;m not going to switch that. I&rsquo;m not going to change my comp. I don&rsquo;t want to. I&rsquo;ll help a little bit. I mean, I remember I was laughing at a</p><p>conference I went to, and the firm, which remains nameless, was bragging about the fact that, you know, they give whatever, 50 hours a year or 100 hours a year, to the associates to think operationally about how to make things better, when I can do that in a month. I could spend an extra 200 hours in a month. Sounds insane, but I will. And I&rsquo;ll do that because it&rsquo;s giving us tremendous operational efficiency later. But I can do that.</p><p>Firms need to look at that, and they can do that with their ops group, but at some point the real talent needs to spend that time. And that&rsquo;s hard to get; it&rsquo;s understandable.</p><p>Marlene Gebauer (29:30)<br>
So you were mentioning these sort of judgment calls in some of these AI-enhanced workflows. So the HI, why it matters, what&rsquo;s the next-step layer that sits above what the AI is actually doing. As you&rsquo;re designing the M&amp;A tool and a broader type of AI-powered workflow,</p><p>I know you say that seasoned practitioners know what the steps are, but you&rsquo;re also saying that sometimes it&rsquo;s hard to figure out the steps. And I have experienced that too, because it&rsquo;s in your head, but when you have to document each step, that is a little more challenging. And then at that point, when do you decide AI can take the first pass, and when does human judgment have to step in? Like, where are you going to draw the line</p><p>between the AI-assisted execution and where the business judgment comes in?</p><p>Lennie Nuara (30:27)<br>
I think, again, it&rsquo;s a great question, and I keep going back to: it&rsquo;s in parts, right? And by the way, a trick can be, if you spend time with a lawyer, and I&rsquo;ll throw myself back into the old days, have them just dictate out the flow of a deal from beginning to end. See it written down once. Just dictate it, or tell it to someone so they can take notes. OK, do it a second time.</p><p>Let them fill in more. Do it a third time. Okay, now put that into AI and say, write out this process and show me the steps in the process. You put that into the AI and it&rsquo;ll give you a 10-page or a 20-page list of the tasks that have to be accomplished. And you keep feeding that. That&rsquo;s not a risk event. Someone now can look at that flow and say, but they forgot this, they forgot that. Lawyers are really good at finding a problem with your stuff,</p><p>not telling you in advance what the problem will be, but reacting to something on paper. So that&rsquo;s just a little trick that I use. I force myself, saying, I know what I want to do. Bang, bang, bang, bang, bang. I&rsquo;ll get five things and then I&rsquo;ll put it in the AI. And then I get 30 back and I&rsquo;m like, but you missed this, this, and this, you idiot. You&rsquo;re useless, Mr. AI. And then I put more in, and all of a sudden the AI, because they&rsquo;re sycophantic, will come back and say, good catch, let me add those ideas now,</p><p>which is a wonderful experience, but they suck up to us so much.</p><p>Greg Lambert (31:51)<br>
done. You&rsquo;re right. You&rsquo;re right. There are 10 Rs in strawberry. Well,</p><p>Marlene Gebauer (31:56)<br>
ha.</p><p>Greg Lambert (31:56)<br>
it</p><p>sounds a lot like what we&rsquo;ve heard from Wendy Jepson from Let&rsquo;s Think On, where she talks about</p><p>taking advantage of having the partner walk through, talk through that process multiple times without realizing that they&rsquo;re talking through the process multiple times. It&rsquo;s an art, I think.</p><p>Lennie Nuara (32:18)<br>
It is, it&rsquo;s an art. It&rsquo;s not significantly different than your art, okay, when interviewing me. You guys spent some time in advance on things you wanted to cover. You wrote it down, you created your outline, and now you can ask me the questions. Or preparing for a deposition: they know what they have to get to, they&rsquo;ve got to nail things. Okay, I remember an early deposition I took. I didn&rsquo;t even go at all into the concept of damages. All I did was focus on liability, and the partner said, well, okay, and so what were their damages? I&rsquo;m like, oh,</p><p>I didn&rsquo;t get into that. You know, I was a second-year associate taking one of my early deps, and a big lesson to learn. Okay, you can cover it all, right? But you&rsquo;ve got to think in advance. So you&rsquo;ll do that, break things down, but then applying the judgment of what you use it for comes down to, well, what&rsquo;s the risk factor? As I said earlier, right? The risk factor for deduping something is significantly different than the risk factor for drafting the indemnification provision with regard to an M&amp;A deal,</p><p>or with regard to, let&rsquo;s say, the risk profile, and that will have an impact on the reps and warranties and the disclosure statements and so on and so forth. The drafting of those things is critical. And yeah, you might use it for a first pass. It&rsquo;s very easy for some partner to react to the first pass, great, or even an associate can react to the first pass, and so on. How do you pick where those issues are? It will vary significantly. I can&rsquo;t give you a magic wand and say it works, and I know the AI doesn&rsquo;t.</p><p>I know it&rsquo;s a good first pass on many, many things, even a second or third pass. You know, I joke with my wife, I can say Claude is like the greatest associate I&rsquo;ve ever had.
Never complains, never late, always on time, delivers things in minutes, not days, and so on and so forth. But it&rsquo;s imperfect, and I accept that. That&rsquo;s okay. And it will not make the judgment call. It will not. It will make an offer to me, but I just treat it as an associate. But</p><p>the deeper risk when it comes to drafting isn&rsquo;t hallucinations. I don&rsquo;t care about hallucinations; that will ultimately clean up over time. It&rsquo;s conformity. It&rsquo;s this constant thing where all the models are converging at the lowest level of commonality. So you&rsquo;ll get what everyone else did. That&rsquo;s not what you need in your deal, as I said in the very beginning.</p><p>Greg Lambert (34:33)<br>
It will own the</p><p>mediocre.</p><p>Lennie Nuara (34:37)<br>
Yes, and</p><p>that&rsquo;s a much bigger risk. OK, now you&rsquo;re essentially abdicating your responsibility as counsel if you constantly give your client what everybody else gave. You have leverage, and both sides have leverage. The buyer wants to buy and the seller wants to sell. But there are points that are different for each one of them. One of them might say, I don&rsquo;t care about the indemnity. I know my cap table is clean. I don&rsquo;t care. I&rsquo;ll indemnify up and down, left and right.</p><p>And yeah, we had a cybersecurity breach, but I know what the breach was and I&rsquo;ll indemnify for that. I just had that, and it&rsquo;s happened to me numerous times with clients, and they say, that&rsquo;s fine. And all of a sudden the buyer&rsquo;s like, the full value? Yeah, sure, the full value of the transaction, I&rsquo;ll indemnify. Okay. So that is a judgment call, right? But you can do that on an item-by-item basis later on in the transaction, because the risk gets higher and higher and higher all the way through.</p><p>Marlene Gebauer (35:23)<br>
you</p><p>Lennie Nuara (35:34)<br>
Yeah, an early problem, you know, in the use of AI, it happened. But when it comes to drafting agreements, okay, and producing output, you can&rsquo;t expect the machine, AI, to leverage your client&rsquo;s position. You might say to it, hey, we are going to take a hard stance on X or Y or Z, but you have to drive that. Again, talent first, okay, and then AI second.</p><p>Essentially, you&rsquo;re never going to get the tall blade of grass out of an AI. You&rsquo;re going to get the nice, smooth, Augusta-level golf course. There&rsquo;s not a single blade of grass too high or too low. They&rsquo;re all the same. Marvelous. That&rsquo;s pretty, but that doesn&rsquo;t help your client. You&rsquo;ll give up on certain points; you&rsquo;ll have to get others. Some clients will walk away. I have people that have walked away from deals. And that intelligence, that wisdom, really changes</p><p>the use of your AI, if you recognize it. An associate won&rsquo;t, maybe some, but most won&rsquo;t. And partners that are time-pressured might not see it instantly. When they take a step back, take a breath, and then they read the output, they&rsquo;ll be like, man, this is slop. This is not helpful. And my role is often to say, okay, you&rsquo;re right.</p><p>Greg Lambert (36:46)<br>
You</p><p>Lennie Nuara (36:54)<br>
It&rsquo;s slop right now. Let&rsquo;s go back and ask for more pointed answers on X, Y, or Z, or pointed drafting on X, Y, and Z. And the slop then becomes better. The point is that the device, AI, writes really well. But it just doesn&rsquo;t know what to write. So if you say, write it for me, it will give you the standard. If you recognize the standard is not what you want, then you have to drive it to give you what you need.</p><p>Nice prose, nice prose, but you have to tell it what you want to drive for.</p><p>Greg Lambert (37:27)<br>
Are you giving it like playbooks to help it get a little better at the mediocrity?</p><p>Lennie Nuara (37:34)<br>
Well,</p><p>I&rsquo;m not a big fan of playbooks because, again, that&rsquo;s another version of mediocrity at some level. And two, we&rsquo;re not a corporate, you know, we&rsquo;re not an enterprise. OK, if I was representing the same client all the time, I might revert to that, but that&rsquo;s just not what our practice is. The deals are somewhat unique all the time. But I will give it a stack of things that we&rsquo;ve done that push the envelope in a certain direction. Let&rsquo;s say, you know,</p><p>it&rsquo;s not the calculation that&rsquo;s the matter, but there are certain calculations that have to happen and you want them to break a certain way. So the working capital calculation is really what it is. It has to break a certain way.</p><p>Greg Lambert (38:12)<br>
Are</p><p>you finding the AI is getting better or worse when you give it things to do?</p><p>Lennie Nuara (38:19)<br>
Right now it&rsquo;s about</p><p>the same no matter how many times, and sometimes I play one against the other. Our platform lets you use four, and if you want to put another AI on there, we can just put in an API and off we go. And you can play them against one another. But from what I&rsquo;m reading from the various sources, they&rsquo;re all being trained on the same corpus. Much of it is the same corpus, and particularly with regard to private transactions as opposed to public deals,</p><p>it&rsquo;s very troublesome; there&rsquo;s not a lot of data on private transactions. So your own experience base is really valuable. So there we do it the old-fashioned way. We pull our old deals and we put them in, and it&rsquo;ll create a first pass, but it is still literally just a first pass. That may change over time in larger firms than mine. You know, the mega firms, Big Law, have a corpus that they can point to that is significantly larger than what we have.</p><p>And they might be able to, you know, through RAG, right? Retrieval-augmented generation. They can push that data in for drafting purposes. That would be great.</p><p>Greg Lambert (39:18)<br>
So with Flatiron, I mean, since almost day one, you&rsquo;ve looked at putting pressure on both the billable hour and the staffing pyramid style that you see</p><p>in Big Law. And it&rsquo;s kind of interesting, because right now everyone in Big Law is talking about how they&rsquo;re anticipating a change in the model, but at the same time they&rsquo;re now recruiting 1Ls before they even take their first-semester exams. It&rsquo;s almost like they&rsquo;re doubling down on the existing model while knowing that there&rsquo;s a change on the horizon.</p><p>So I guess with the M&amp;A practice specifically, is there like a new apprenticeship model? How are you seeing the industry bringing along not just new talent but existing talent as the models seem to, I think they&rsquo;re gonna change, they&rsquo;re gonna have to, I think.</p><p>Lennie Nuara (40:17)<br>
I think they will. In the first instance, we are extremely lean as a firm, right? We don&rsquo;t have associates. We will bring them in. There&rsquo;s lots of talent that the big firms have trained for us, sitting out there, that don&rsquo;t want to be in the big firms for a variety of reasons. Usually we don&rsquo;t bring in very young associates, but lately I&rsquo;ve been building, at least internally, a model where we will start with,</p><p>basically, the idea is, you know, what is it, a barbell. You&rsquo;ll have a ton of senior talent here, you have a ton of very young lawyers here, and in the middle you may have a thinner path. The young side, in one sense, let&rsquo;s say zero years, in other words, they&rsquo;re not in law school at all, or with one to three years. So you have talent that you can utilize, smart people who didn&rsquo;t want to, you know, drop $100,000 or $200,000 for law school.</p><p>But they&rsquo;re very smart, OK? And you can call them paralegals or whatever. And I think we can build a model with that. I&rsquo;ve done it on paper, and I&rsquo;ve had these people working for me, the paralegals that I&rsquo;ve used, and it worked out quite well before AI. Where you spend the time letting them essentially trudge through the mud, doing an amount of work that will give them an understanding of</p><p>the legal issues or the things that they need to spot. And they spend time not on AI to do those things. So they have to walk through the mud and get dirty and sweat it out. And some of them will progress into the practice and so on in later years. The middle years, three to 10 or whatever it is now, at the bigger firms, I think unless they adapt quickly,</p><p>are more challenged, because a lot of what their work was was overseeing the, you know, the years below them.
So I think if we spend more time at the front end training, we have our Deal Mentor school that we did with Megan, and other methodologies, where we train the very earliest ones. But you train them, I&rsquo;m sorry, the old-fashioned way, where they&rsquo;re not dependent upon the AI, so that when they are using it, they</p><p>can learn from it, take some of their hard-earned knowledge and wisdom, and then apply it and use it to oversee the AI that they&rsquo;re using. Now, you can apply what I just said to fourth, fifth, sixth, seventh, and eighth-year lawyers. The difficulty is to break them out of the mold that they&rsquo;re in now, which is the traditional pyramid. And that&rsquo;s a risk issue that firms have to face that they don&rsquo;t want to.</p><p>Look, firms don&rsquo;t like flat fees because there&rsquo;s risk, but we take it all the time. I don&rsquo;t care. And so, you know, I don&rsquo;t make as much money. I make this versus this, but I still made this. OK, so it&rsquo;s just a question of how much and how we value our time and so on. So flat fees at big firms are harder to do, because no one wants to stick their neck out and take the risk that the thousand hours that they bill across seven people really was replaced by X.</p><p>And now what are they going to do with those bodies? And how do they handle: hey, you don&rsquo;t have the thousand hours, but you finished the deal, so the hours weren&rsquo;t billed; am I going to get compensated? That should drive them towards a different fee structure. But right now it&rsquo;s very, very difficult. It&rsquo;s very difficult to, you know, do a U-turn in the Queen Elizabeth 2. OK, you can&rsquo;t just</p><p>turn around, you know; the steamship takes a while to turn. And I think that&rsquo;s a problem. But again, it&rsquo;s people, it&rsquo;s hired talent.</p><p>Greg Lambert (43:47)<br>
I got an unrelated, well,</p><p>I got a semi-related question to this. And I think it&rsquo;s one of the things I&rsquo;ve been talking with other firms about. When it comes to AI, I think the last couple of years there hasn&rsquo;t been too much worry about the amount of money that we&rsquo;re spending on the AI tools. But I think you are probably seeing it as well that the token</p><p>costs are mounting, especially as you may be throwing more agentic processes in there. I mean, there are some software companies where people are spending as much as their salaries just in token costs. So are you somewhat afraid that you&rsquo;re just shifting the cost of an associate over to the cost of the tokens?</p><p>Is that something you&rsquo;re thinking about?</p><p>Lennie Nuara (44:43)<br>
Well, yes, I definitely think about it. If you were to go on Deal Driver, there&rsquo;s actually a tracking of the cost per clause, per model, that you&rsquo;re spending. So you can call up that you looked at document X, and you can see that you ran 14 different agentic flows and what the cost of each run is for the clause in that one document. And then there&rsquo;s an aggregate of what your spend is across</p><p>the full document, and then across the entire screen of all the documents. And the reason being is that, again, I come from a long time ago, when legal research was incredibly expensive. On a weekly basis, maybe, when I first started practicing in &rsquo;84, actually no, &rsquo;86, because the first firm I was at didn&rsquo;t have Westlaw or Lexis, but I got to a firm in &rsquo;86 and they did. And for a couple of years, there&rsquo;d be a partner running down the hall screaming,</p><p>that four-page memo cost me $9,000 because of you! What did you do? Because they didn&rsquo;t know how to do Boolean searching and so on and so forth. And I&rsquo;ve never forgotten that. The cost of a tool is part of the economics of the transaction and how you quote. So we track it literally to the clause, and you can use any model you want. You can use three different models on the same clause to compare.</p><p>Marlene Gebauer (45:43)<br>
Ha ha ha ha ha ha.</p><p>Greg Lambert (45:46)<br>
Ha ha ha.</p><p>you</p><p>Lennie Nuara (46:07)<br>
And you can track all that. And the reason is exactly what you just said, because I see the change. Now, my experience has been, at least the past two years, the token cost is going down, but the number of agentic events that you have is going up. So I wanted to track that, and it&rsquo;s been fine so far. It&rsquo;s not out of line. And I do believe overall it will be significantly more efficient than the people</p><p>doing that work, whatever that work is that we assign. Significantly so, like by an enormous factor, like, you know, a hundred to one. But you still have to recognize that that&rsquo;s a cost, and many vendors do not expose that. They just give you a bill, and it&rsquo;s, you know, yeah, you used our platform and your AI upcharge is this, and you have no way of knowing: was it Marlene? Was it Greg? Or was it Lennie or someone else that did that?</p><p>So.</p><p>Greg Lambert (47:01)<br>
It was Marlene.</p><p>Marlene Gebauer (47:02)<br>
I&rsquo;m listening to him and I&rsquo;m getting flashbacks of the many conversations I&rsquo;ve had with people about, like, yeah, you spent this much money to do this. It&rsquo;s like, because you didn&rsquo;t know what you were doing.</p><p>Greg Lambert (47:08)<br>
your Westlaw bill was $50,000.</p><p>Lennie Nuara (47:13)<br>
Right.</p><p>And it matters. The idea is to give people all the tools necessary to be more efficient, not just, you know, this blanket: throw everything in here, get an output there, and not know where we&rsquo;re going. It&rsquo;s going to change. It&rsquo;s changing every quarter, let alone sometimes every other week. But I mean, honestly, it&rsquo;s ramping and it&rsquo;s great. I love tech, but let&rsquo;s recognize what it is. It&rsquo;s a tool, and let&rsquo;s manage that tool and our talent and then produce better results.</p><p>Marlene Gebauer (47:41)<br>
So Lennie, you mentioned model monoculture a little bit earlier. Can you expand upon that a little bit for us?</p><p>Lennie Nuara (47:49)<br>
Yeah, and I think I touched on it a little bit. Model monoculture is basically saying that, you know, all the models are pulling a significant amount of their content from the same sources. For example, they&rsquo;re all looking at the EDGAR database from the SEC for contract clauses. Yes, they can have others, but most of them don&rsquo;t. And over time, if you&rsquo;re drafting based upon those, it&rsquo;s a drive to mediocrity. And if you don&rsquo;t recognize that issue, it&rsquo;ll</p><p>bite you in the ass at the end. The problem is that you think, this is a good example, this is a good document. As I said, it&rsquo;s written well, but that&rsquo;s market. Okay, but you don&rsquo;t want to be market. You want to use your leverage in your deal. It&rsquo;s a way of thinking that we get paid for. Get me the best deal you can get based upon my circumstances. Don&rsquo;t get me what&rsquo;s standard. Now,</p><p>you know, in the VC world, for example, there&rsquo;s the NVCA, the National Venture Capital Association, that has standard agreements. But if you look to see what&rsquo;s actually done on those deals, they&rsquo;re all tweaked. They start with that, but then they all tweak them. And they&rsquo;re not all filed, by the way. Some are, but they&rsquo;re not all filed. And so this concept of, you know, all the models giving the same answers: yes, on our platform you can actually run</p><p>different models and different versions from the producers. So various versions of Anthropic, various versions of OpenAI, various versions of Gemini. I usually pick the best one and just spend the money, but you can pick and choose what you want to do and see what results you get. But you want to see some crossover, and you want to see differences. And if people aren&rsquo;t cognizant of that, I think it&rsquo;s a greater risk to our wisdom.
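</p><p>Playing models against one another, as described here, can be sketched as a simple cross-check that flags near-identical drafts, one rough signal of monoculture. The model callables, the similarity measure, and the threshold below are illustrative assumptions, not any vendor&rsquo;s real API.</p>

```python
from difflib import SequenceMatcher
from itertools import combinations
from typing import Callable, Dict, List, Tuple

def cross_check(models: Dict[str, Callable[[str], str]],
                prompt: str,
                threshold: float = 0.9) -> Tuple[Dict[str, str], List[Tuple[str, str]]]:
    """Send the same clause prompt to every model back-end, then flag
    pairs whose drafts are nearly identical (a rough convergence signal)."""
    drafts = {name: fn(prompt) for name, fn in models.items()}
    converged = [
        (a, b)
        for a, b in combinations(drafts, 2)
        if SequenceMatcher(None, drafts[a], drafts[b]).ratio() >= threshold
    ]
    return drafts, converged

# Stand-in "models": two happen to produce identical boilerplate
models = {
    "model_a": lambda p: "Seller shall indemnify Buyer for all losses.",
    "model_b": lambda p: "Seller shall indemnify Buyer for all losses.",
    "model_c": lambda p: "Indemnity is capped at the escrow amount.",
}
drafts, converged = cross_check(models, "Draft the indemnification clause.")
```

If every pair converges, you are likely looking at the "market" answer; the divergent draft is where the leverage discussion starts.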
Our biggest issue, in our practice, is that our wisdom is our value.</p><p>Our talent is our value. If it&rsquo;s just cookie-cutter stuff, well then, fine, you don&rsquo;t need, you know, super-elite lawyers. If it&rsquo;s truly cookie-cutter, you don&rsquo;t want to be working on that work anyway. I think all of us are at firms that are doing more difficult work than many other firms. It doesn&rsquo;t mean that the other firms are unimportant. They aren&rsquo;t, but they don&rsquo;t need to be spending at that level. There&rsquo;s lots of other issues. But if now all those big firms are reliant on, you know,</p><p>LLM models that are all pulling from the same base of agreements, and you&rsquo;re expecting them to give you the spin that you need, don&rsquo;t. Expect them to give you what is flat, you know, mediocrity. And mediocrity might be okay for certain clauses, but not for most. It might be fine for, you know, getting an assignment letter out to the landlord to get their consent on something. Okay, there&rsquo;s no risk there. Does it ask for consent? Does the person sign it? Yeah, you might want to&hellip;</p><p>look at a couple of the clauses; you don&rsquo;t want to miss that issue. This monoculture thing, and people call it model monoculture, I&rsquo;m not sure I really love the word, but I adopted it because it&rsquo;s out there. There&rsquo;s research that&rsquo;s been done that says that people are missing this completely. There&rsquo;s research at Stanford and other places. There are two labs that I get a lot of information from: Dr. Megan Ma&rsquo;s lab,</p><p>which is the LIFT Lab at Stanford, and then Reg, I think it&rsquo;s RegLab, I forget. But Stanford&rsquo;s got some great stuff. Some of the other schools have it as well. You&rsquo;ve got to be looking to see where the models are, because that&rsquo;s basically the associate pool that you&rsquo;re pulling from.
You hire associates because there&rsquo;s a spark or something in them that you really wanted. You don&rsquo;t just say, give me a first year, give me a second year. That&rsquo;s not how you hire. You hire somebody for</p><p>that spark that you see, that they&rsquo;re really going to be good. They wrote something creative. They write really well. Okay, well, the model can do that, but they wrote something that I didn&rsquo;t expect. I used to look for the fire in the belly of the associates that I was interviewing. I wanted to see that fire. If I didn&rsquo;t see it, I&rsquo;m like, great kid, great statistics, you know, great grades. They&rsquo;re not going to do it. They&rsquo;re not going to cut it.</p><p>Marlene Gebauer (51:51)<br>
So Lennie, you actually got a little ahead of my next question, but I know you&rsquo;re a huge reader, you&rsquo;re a huge ingester of information, you&rsquo;re a lawyer technologist. So you mentioned a couple things that you go to for staying ahead of the business of law, but what are maybe some other go-to resources that you use?</p><p>Lennie Nuara (51:50)<br>
You</p><p>Greg Lambert (51:56)<br>
Hahaha</p><p>Lennie Nuara (52:15)<br>
One of my favorite things to read is The Information, which is a newsletter from Silicon Valley. They track all the technology companies, all the latest, and not just the startups, which they do. They&rsquo;re not legal tech. They&rsquo;re just tech. I spend most of my time reading about tech, &rsquo;cause I buy tools, right? And I use tools. I don&rsquo;t care if they&rsquo;re legal tech tools. They&rsquo;re just tools. I like databases. You know, most lawyers don&rsquo;t care to read about databases. I did. I was an early adopter of&hellip;</p><p>and then after that, Airtable. I thought Airtable was marvelous. It&rsquo;s a relational database. You say that to most people and they&rsquo;ll just fall asleep. They don&rsquo;t care. They don&rsquo;t want to know. But I track, so I track the traditional technology sector very, very closely. Between that and LinkedIn and my feed on LinkedIn, those are two go-to sources. But I also track, like, Bloomberg and Wall Street Journal, their coverage on the tech industry and trading. Why? Because trading</p><p>essentially funds the tech industry, which then funds the innovation. So I look for that, I don&rsquo;t want to say virtuous loop, but that relationship really matters. Obviously I mentioned, you know, Megan, who&rsquo;s a great friend and our partner on Deal Mentor, but also the work that she&rsquo;s doing out of the LIFT Lab is fabulous to see what&rsquo;s coming and essentially frontier work. It&rsquo;s about frontier, you know, models.</p><p>Marlene Gebauer (53:17)<br>
relationship.</p><p>Lennie Nuara (53:38)<br>
But her work is truly on the frontier, which is just so much fun. I see her a couple of times, six times a year, when I&rsquo;m out there working with her on Deal Mentor and other stuff. Other things: tech industry publications like PC Magazine all the way through to Info Week, and the cybersecurity journals that are out there. And then the more mundane for the legal profession: I look at the sanctions, the ethics litigation</p><p>where someone hallucinated the citations, you know, on law.com and others. You know, what happened to the unnamed top five law firm recently with their filing with hallucinations in it, and the other ones that were down in Alabama and so on. Those are informative. I find it laughable, though, that they&rsquo;re complaining so much about hallucinations. Do you know that early on, sending an email was a violation of the ethics opinions? Which I was just laughing at. I was like,</p><p>Greg Lambert (54:32)<br>
Thank</p><p>Lennie Nuara (54:36)<br>
When I saw that, I&rsquo;m like, my God, you people are so, look, I come from a world where there were these, remember these things, okay? This is what fed the machine behind me. Of course I did, because it&rsquo;s right behind me. But you know, it&rsquo;s like email, my God, Lennie, you&rsquo;re sending an email, that&rsquo;s a violation of our ethics, I&rsquo;m like, no, it&rsquo;s not, I don&rsquo;t care, I&rsquo;ll take that fight, nobody ever sued me. So I&rsquo;ll take it, but you know, so, but you gotta track that stuff.</p><p>Greg Lambert (54:41)<br>
Good. A 5.25-inch floppy disk. Into your Osborne.</p><p>Marlene Gebauer (54:44)<br>
I like that you have them handy, because you need them for that.</p><p>Greg Lambert (54:58)<br>
All right,</p><p>we peeked into the past with your 5.25-inch floppy disk in your Osborne portable computer. So now let&rsquo;s peek into the future with your crystal ball. So what do you think over the next short period of time that the legal industry needs to be prepared for? What&rsquo;s your take?</p><p>Lennie Nuara (55:09)<br>
Hahaha</p><p>goodness.</p><p>Well, I already mentioned the model monoculture; that is critical, and it depends on where you are in using the stack, right? So if you&rsquo;re using it for the pedestrian stuff, you&rsquo;re not going to be worried about it. If you&rsquo;re using it for CLM, you know, contract lifecycle management, contract review, drafting, you have to be worried about that. You have to be looking much more carefully at that. And if you&rsquo;re not, you&rsquo;re toast. I mean, you&rsquo;re really in trouble. You&rsquo;re missing a big issue, bigger in my mind than hallucinations.</p><p>And so we&rsquo;ll see. It will probably present itself when you see more of, it&rsquo;s not slop, but more mediocrity in the output that you&rsquo;re getting from the machine. So that&rsquo;s item one. Simulation training, that&rsquo;s not just a plug for Deal Mentor, although it&rsquo;s a great product, but we need more simulation training to create a more engaging environment for younger lawyers to learn. And AI can do that. We created dialogue, you know, it&rsquo;s,</p><p>it&rsquo;s a simulator, but it&rsquo;s not based upon any prewritten dialogue at all. Okay. The dialogue comes from a language model and it&rsquo;s innovative beyond words. And we should see more of that. Hopefully everybody will just buy Deal Mentor, but I&rsquo;m not here to sell that. The point is that you need that kind of inspiration to get people to really learn better, because otherwise we&rsquo;re going to be skipping a chunk of years and leaving them behind. And we don&rsquo;t want that.</p><p>Private equity is gonna drive the spend at law firms, because they see this innovation. They&rsquo;ll push harder than the enterprises will, although enterprises will push, but private equity will push even harder. They&rsquo;re much more expense focused, and they&rsquo;re gonna say, are we spending X, Y, or Z on counsel for whatever it is?
Any kind of operational expense of lawyers, when they know they&rsquo;re using it to write, you know,</p><p>essentially memos on multi-hundred-million-dollar acquisitions. They can do that. Why aren&rsquo;t our counsel using that? And so that will drive a spend cycle that&rsquo;s going to compress the big firm ability to bill. And then maybe they&rsquo;ll switch to flat fees or otherwise. And then, you know, I don&rsquo;t want to say the pyramid breaks, but the pyramid&hellip;</p><p>You may use those bricks in not a pyramid style. You may have silos, or as I said before, barbells or whatever. I think there has to be a change there. It&rsquo;s going to change. AI is going to absorb a lot of junior level work. The only other thing that might happen that will ameliorate that is just the nature of the practice of law. When I started practicing 40-some-odd years ago, in &rsquo;84, there was nowhere near the amount of regulatory practice there is today.</p><p>The regulatory practices have exploded as a percentage of what a law firm does, and so has the complexity on every transaction around the regulatory sector. No one really anticipated that, but we kind of grew into it. The existence of AI will help with regard to that. But the point is that everybody thought, well, you know, the practice of law will get more and more efficient and so on and so forth. It didn&rsquo;t really; it created new sectors of things to look at.</p><p>And so smart firms will figure out that maybe we have to shift people around and train them differently and so on. I don&rsquo;t think they&rsquo;re going to break, but what people do and how they do it is definitely going to change. And the smart firms will figure out how to repurpose people in different ways, change the dynamics and so on. I think if they continue to do what they&rsquo;ve always done, which is put talent first,</p><p>smart minds will figure out how to deploy that talent with the right tech.
I&rsquo;m not trying to undermine or speak poorly of tech, because obviously I love tech, right? Okay, but that&rsquo;s where it will go. Ultimately, if they&rsquo;re smart, they&rsquo;ll adopt it and use it properly and successfully. And if they don&rsquo;t, the dinosaurs will die. I mean, although the dinosaurs did rule for, you know,</p><p>a couple of hundred million years, and we&rsquo;ve only been here for a few. So anyway.</p><p>Greg Lambert (59:26)<br>
They had a good run. All right.</p><p>Well, Lennie Nuara from Flatiron Legal, I want to thank you for coming in and nerding out with us today and showing us some of the old tech and then peeking into the future with us.</p><p>Lennie Nuara (59:45)<br>
Thank you so much. I really appreciate it, Greg and Marlene. It was really a fun time. You let me blather on about things. I appreciate it, and I look forward to seeing you guys more on this great podcast.</p><p>Marlene Gebauer (59:45)<br>
Thanks, Lennie.</p><p>Yeah, well, thank you, Lennie. And thanks to all of you, our listeners, for taking the time to listen to The Geek in Review podcast. If you enjoyed our dive into this data, please share it with a colleague.</p><p>Greg Lambert (1:00:07)<br>
And Lennie, if people want to learn more about you or Flatiron, where&rsquo;s the best place for them to go?</p><p>Lennie Nuara (1:00:13)<br>
Our</p><p>website is now flatironlaw.ai and my email address and contact details are there. So it&rsquo;s flatironlaw.ai.</p><p>Greg Lambert (1:00:22)<br>
catch.</p><p>Marlene Gebauer (1:00:23)<br>
And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry. And bye, everybody.</p><p>&nbsp;</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>Shadow UX and the Upcoming Fight over Legal Research</title>
		<link>https://www.geeklawblog.com/2026/04/shadow-ux-and-the-upcoming-fight-over-legal-research.html</link>
		
		
		<pubDate>Wed, 29 Apr 2026 11:44:23 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[legal research]]></category>
		<category><![CDATA[Shadow UX]]></category>
		<category><![CDATA[SUX]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[UX]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19265</guid>

					<description><![CDATA[I have a prediction that I want to share with you. This is something that I envision happening just a few short weeks from now. I imagine seeing an associate at a law firm doing something that will make every product manager at Thomson Reuters and LexisNexis choke on their morning coffee. She has a... <a href="https://www.geeklawblog.com/2026/04/shadow-ux-and-the-upcoming-fight-over-legal-research.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>I have a prediction that I want to share with you. This is something that I envision happening just a few short weeks from now. I imagine seeing an associate at a law firm doing something that will make every product manager at Thomson Reuters and LexisNexis choke on their morning coffee. She has a contract dispute question. A real one. There will be a partner waiting. And the clock is ticking.</p><p>She won&rsquo;t open Westlaw. She won&rsquo;t open Lexis. She won&rsquo;t open her browser at all.</p><p>She types her question into a work-approved AI chat window. Twenty-four minutes later she has a memo, citations included, sent off to the partner. She entered 0.4 hours in her time entry system. And she is done.</p><p>The KeyCite red flag that Thomson Reuters spent generations building? Never saw it. The Shepard&rsquo;s signal? Didn&rsquo;t see that either. The annotated treatise hierarchy that some editor in Eagan, Minnesota agonized over? Came through as a flat blob of text in a JSON response that the model summarized into a single sentence.</p><p>Don&rsquo;t get me wrong. Westlaw and Lexis were in the research process. She just didn&rsquo;t notice they were there. And that, friends, is the near-future I want to talk about.</p><p>I&rsquo;ve been privately calling this &ldquo;<em>Shadow UX.</em>&rdquo; Think of it as the user-experience cousin of Shadow IT. We all know the effects of Shadow IT, right? That was when the marketing team started using Dropbox without telling the IT department, and three years later IT realized the entire company&rsquo;s roadmap was sitting on someone&rsquo;s personal account. Shadow UX is the same thing at the interface layer.
An unauthorized layer sitting between the user and the vendor&rsquo;s product, and the vendor doesn&rsquo;t control it, doesn&rsquo;t design it, and increasingly doesn&rsquo;t even know it exists.</p><p>For legal information vendors, the Shadow UX layer is mostly an LLM with a few tool calls bolted on. There are other versions out there too: browser extensions that re-skin search results, paralegals building Notion dashboards off APIs, scraping wrappers feeding firm intranets. The AI agent is the one eating everyone&rsquo;s lunch though.</p><p>Here&rsquo;s why I&rsquo;ve been thinking about Shadow UX so much lately.</p><p>For thirty years vendors competed on the browser/dashboard. The fancy charts. The little visual icons. The hover states. Pixel-perfect interfaces designed for a human eye scanning a screen. In 2026, the user is increasingly something else. It&rsquo;s a model reading a JSON schema at inference time. If you&rsquo;ve optimized your product for an audience that&rsquo;s becoming the minority of your traffic, you&rsquo;re going to find out the hard way.</p><p>OK so why now? Three things had to happen at the same time, and they all did within about eighteen months.</p><p>First, the models actually got good. The 2024 models couldn&rsquo;t handle jurisdictional nuance. The 2026 models draft memos that pass partner review. Not every time, sure, though often enough that associates are using them anyway.</p><p>Next, the billable hour math became impossible to ignore. We bill in six-minute increments. Any tool that turns a ninety-minute task into nine minutes is going to get used, with or without IT&rsquo;s blessing. (Sound familiar? Hello again, Shadow IT.)</p><p>And finally, the Model Context Protocol showed up. MCP is the part of this story that doesn&rsquo;t get enough attention. Imagine if every database, every research platform, every internal wiki spoke a common language to AI agents. That&rsquo;s MCP. Companies like NetDocuments and Midpage adopted it.
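</p><p>To make the &ldquo;common language&rdquo; idea concrete, here is a minimal sketch in Python. The tool name, fields, and values are invented for illustration; a real MCP server speaks JSON-RPC and exposes richer schemas. The point is that the citator signal travels as typed data the model reads at inference time, not as a red icon on a page:</p>

```python
import json
from dataclasses import dataclass, asdict
from enum import Enum

# Hypothetical citator statuses: a typed value instead of a red icon.
class CitatorStatus(Enum):
    GOOD_LAW = "good_law"
    CAUTION = "caution"
    NEGATIVE = "negative"  # the "red flag," now machine-readable

@dataclass
class CaseHit:
    citation: str
    status: CitatorStatus
    confidence: float  # how sure the editorial pipeline is
    flagged_by: str    # authority that triggered the signal

# Sketch of an MCP-style tool surface: a name, a description the
# model reads at inference time, and a JSON schema for its input.
SEARCH_TOOL = {
    "name": "search_cases",
    "description": "Search case law; every hit carries a citator status.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def to_payload(hit: CaseHit) -> str:
    """Serialize a hit so the signal survives as data, not decoration."""
    record = asdict(hit)
    record["status"] = hit.status.value
    return json.dumps(record)

hit = CaseHit("123 F.3d 456", CitatorStatus.NEGATIVE, 0.97,
              "Later decision overruling in part")
print(to_payload(hit))
```

<p>Whether any given agent actually promotes that status field into its summary is another question entirely, but a schema like this at least makes the signal impossible for the vendor to lose in transit. 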
Specialized vendors are rolling out MCP servers for everything from patent search to legislative tracking. Once the protocol got standardized, the vendor&rsquo;s UI stopped being a moat and started being a speed bump.</p><p>Now here&rsquo;s the part that should worry legal information providers. The editorial work that built these companies, the headnotes, the Key Number system, KeyCite, Shepard&rsquo;s, all of that gets flattened.</p><p>In a portal, a KeyCite red flag is loud. It&rsquo;s red. It&rsquo;s literally a flag. You see it before you see anything else on the page. In the Shadow UX layer, it&rsquo;s a token in a JSON field. If the model&rsquo;s summarization logic doesn&rsquo;t promote it, the user never sees it. The signal is technically still there. It&rsquo;s just invisible.</p><p>The headnote tree is worse. Editors spent generations nesting these things to show legal relationships. Models hate hierarchies. They flatten them into bullet lists, or worse, into prose. The categorical context disappears.</p><p>And then there&rsquo;s the provenance problem, which is the one that actually worries me. When an agent synthesizes ten cases into one paragraph, the user gets a confident narrative. They don&rsquo;t see that eight came from KeyCite-validated sources and two came from a sketchy public database the model decided to trust. The vendor&rsquo;s brand was always the proxy for &ldquo;this is reliable.&rdquo; When the brand is invisible, the proxy is gone.</p><p>I&rsquo;ll put it bluntly. If you&rsquo;re a research vendor, your brand value is currently being laundered through someone else&rsquo;s chat interface, and you&rsquo;re not getting credit for it.</p><p>The pricing model is the other shoe about to drop.</p><p>Seat-based pricing is the deal we&rsquo;ve all lived with since the 90s. You pay per lawyer. The lawyer logs in. Everybody understands. Now&hellip; an AI agent doesn&rsquo;t log in. It doesn&rsquo;t have a seat. 
It can do the work of fifteen associates in an afternoon though. So vendors are watching seat counts flatten while their compute costs spike. The infrastructure bill goes up while the revenue line goes sideways. That&rsquo;s not a sustainable shape.</p><p>The industry is wobbling toward usage-based and outcome-based pricing. Pay per query. Pay per resolved research task. Pay per drafted clause. Salesforce and Zendesk are already doing this in their own categories. The math makes sense for vendors. The problem is that law firms hate metered bills. CIOs cite cost forecasting as the number one headache with consumption pricing. Nobody wants their Westlaw bill to look like an AWS invoice.</p><p>Here&rsquo;s where the real fight is going to happen, and I haven&rsquo;t seen anybody talk about it openly yet.</p><p>Put yourself in the chair of a Westlaw or Lexis sales VP. You&rsquo;re watching seat utilization drop. Associates are logging in less. Partners barely log in at all. The minutes-per-seat metric you&rsquo;ve been using internally to justify renewals is collapsing. Meanwhile your compute costs are spiking because the firm&rsquo;s MCP-connected agents are hammering your APIs and MCPs at three in the morning to draft research memos.</p><p>What do you do?</p><p>I&rsquo;ll tell you what you do. You add an AI agent access fee on top of the seat license. Premium tier. &ldquo;Enterprise agentic access.&rdquo; Whatever the marketing team lands on. And you keep raising the per-seat price every renewal cycle. Because if each seat is getting cheaper for the firm to actually use, your only path to flat or growing revenue is to charge more for each one. Double dip. Seats plus agents. Stack them.</p><p>Now flip the chair. You&rsquo;re a firm CIO or a law firm library director. Your usage data shows seat logins dropping. Your associates are no longer going directly to Westlaw or Lexis. 
The vendor calls to renew, the price per seat is up 10%, and now there&rsquo;s a separate line item for &ldquo;agent access&rdquo; that wasn&rsquo;t on last year&rsquo;s quote. You ask why you&rsquo;re paying more for less. The vendor explains, with a straight face, that the value sits in the data, the agent extracts more value per query, and the bill reflects that. You disagree.</p><p>That&rsquo;s the battle.<span id="more-19265"></span></p><p>Firms have leverage they haven&rsquo;t quite figured out how to use yet. If a general-purpose model with a good MCP integration can produce a defensible memo using free public data plus a couple of specialized vendor pipes, the firm doesn&rsquo;t need a full Westlaw or Lexis subscription anymore. They need a few targeted pipes. Maybe federal cases, maybe a particular state&rsquo;s regulatory feed, maybe a specialized treatise. The bundled subscription that&rsquo;s been the vendors&rsquo; moat for thirty years is exactly the thing the agentic ecosystem can unbundle.</p><p>The firm-side response is going to come in three flavors. First, the headcount-only firms: &ldquo;we&rsquo;ll keep paying for seats at last year&rsquo;s rate, take it or leave it, and we&rsquo;re not paying a separate agent fee on top.&rdquo; Second, the audit-and-cut firms: &ldquo;show us actual usage data, justify the renewal price against actual logins, or we cut the seat count to match.&rdquo; Third, the route-around firms: &ldquo;we&rsquo;ll keep a small premium subscription for the editorial signals we can&rsquo;t get anywhere else, and we&rsquo;ll point our agents at public data plus a few targeted MCP feeds for everything else.&rdquo; Each of those is a different kind of headache for the vendor, and each one has a different ceiling on what the vendor can actually charge.</p><p>The vendors who win this fight will be the ones who can credibly argue their MCP server delivers something the agent can&rsquo;t get anywhere else. 
KeyCite citator data, validated public-records overlays, proprietary treatises, expert witness analytics, the stuff that took fifty years of editorial labor to assemble. That&rsquo;s the moat that survives. The vendors who try to hold the line on seat prices while gating their best data behind a separate agent fee will find their customers routing around them, because at that point the firm just buys the agent-access tier and treats the seats as a courtesy login for partners who still like the old interface.</p><p>My prediction: the first major firm to publicly announce they&rsquo;re cutting Westlaw or Lexis seat count by 40% while keeping their MCP-tier subscription will set off an industry panic. Somebody is going to do this. Watch for it.</p><p>Now let&rsquo;s talk about the verification tax, because this is where the AI evangelists get quiet.</p><p>The better the models get, the harder they are to audit. Sounds backwards, although it&rsquo;s true. When errors are common, you spot them. When errors are rare, you stop looking, and that&rsquo;s exactly when one slips through and ends up in a brief. There are actual statistics on this. Researchers have shown that the cost of estimating calibration error grows as models improve. For a ten-step agent loop, the verification cost can be a thousand times the cost of a single-step model. Lovely.</p><p>Then there&rsquo;s what MIT called the Confidence Paradox. Models use more confident language when they&rsquo;re hallucinating than when they&rsquo;re stating facts. Thirty-four percent more, according to their 2025 work. So the smoothest, most reassuring chunk of your AI memo? Statistically, that&rsquo;s the part most likely to be wrong.</p><p>Friends, this is a malpractice waiting room.</p><p>The worst version of the Verification Tax happens when associates trust the agent because it sounded confident, partners trust the associate because the memo looks clean, and the bar trusts the firm because nothing got flagged.
Mariana Trench of false confidence. Somebody is going to get sanctioned, and the case is going to read like a horror story.</p><p>So what should the vendors do? I get asked this a lot lately, and my answer probably annoys them.</p><p>Throw out the human-first (or at least human-only) design playbook. The audience is the model now.</p><p>That sounds heretical to a UX designer, I know. Every editorial signal needs to be a structured field in the response payload. KeyCite/Shepard status should be a typed enum with a confidence score and a direct citation to the underlying authority. The model can then promote that signal in the summary, because it&rsquo;s data instead of decoration.</p><p>Ship real MCP servers. Press releases about &ldquo;AI partnerships&rdquo; don&rsquo;t count. Actual production grade tool surfaces with rate limits, auth, and schemas the model can read at inference. If you&rsquo;re not in the agent&rsquo;s toolbox, you&rsquo;re not in the workflow.</p><p>Build provenance into every response. Hash-pinned citations. Quote spans with character offsets. Per-claim source attribution. Make hallucination expensive for the model to generate and easy for the human to detect. This turns the Verification Tax from a tax on the user into a feature for the vendor.</p><p>And reprice. Just reprice. Seat pricing is over. The metering infrastructure you&rsquo;ll have to build is annoying, although it&rsquo;s the only way the math closes.</p><p>The Legalweek 2026 lineup told us where the incumbents have landed. Thomson Reuters rebuilt CoCounsel on the Claude Agent SDK and is trying to &ldquo;own the shadow&rdquo; by being the agent itself. LexisNexis is leaning the other way, embedding Cowork into Prot&eacute;g&eacute; and treating Lexis as the &ldquo;primary connection point&rdquo; for content. 
Two strategies, same underlying bet, which is that lawyers will choose a curated agentic environment over a general-purpose model with specialized pipes.</p><p>I&rsquo;m not sure they will. Lawyers use what works. If the general purpose model with a good MCP integration produces a better memo in less time, the walled garden becomes a walled relic.</p><p>If you don&rsquo;t create solid MCP integration, your users will create workarounds that will get them what they need. I haven&rsquo;t even scratched the surface of things like Codex Computer Use or Perplexity Computer in this article. But, trust me, tools like that will make it very easy for creative lawyers to just have the AI interact with the legal information. It&rsquo;s just too much to try to cover here, but at least I&rsquo;ll mention it for those vendors who think they control all the access points to their product.</p><p>For the lawyers reading this, here&rsquo;s my unsolicited advice. Audit the grounding. When the AI summarizes a case, ask explicitly whether it checked subsequent treatment, and demand it surface the KeyCite or Shepard&rsquo;s signal verbatim. Verify the pipes. Know which sources your agent is actually calling, because a &ldquo;research result&rdquo; from a web-search plugin and a research result from an MCP-connected professional database are very different animals. And keep the judgment. The application of law to fact and the strategic counsel you give a client cannot be delegated. That&rsquo;s the part of the workflow that has to stay outside the shadow.</p><p>By 2030 the dashboard is dead. In fact, the browser may be dead. The most important UX hire at a major legal information vendor won&rsquo;t be drawing pixels. She&rsquo;ll be writing tool descriptions and tuning system prompts so an agent representing a lawyer she&rsquo;ll never meet can find, validate, and cite the right authority on the first call. The portal becomes the back office. 
The pipe becomes the product.</p><p>Some vendors will accept this and build the best possible pipes. Those vendors will keep the editorial moat and figure out how to charge for it in a usage-based world. The ones who refuse will end up as the &ldquo;Intel Inside&rdquo; of legal research. Real. Important. Invisible. Priced like a commodity.</p><p>So here&rsquo;s where I land. Shadow UX isn&rsquo;t coming. It&rsquo;s already in your firm, right now, and it&rsquo;s growing. The interface your customers actually use is one you didn&rsquo;t build, and the experience you spent decades polishing is being rendered, badly, through somebody else&rsquo;s chat window.</p><p>You can&rsquo;t fight it. The remaining choice is whether you&rsquo;d rather be a great pipe or an irrelevant portal.</p><p>What are you going to do about it?</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>Orbital CTO Andrew Thompson on Practice Area AI, Real Estate Law, and the Future of Legal Work</title>
		<link>https://www.geeklawblog.com/2026/04/orbital-cto-andrew-thompson-on-practice-area-ai-real-estate-law-and-the-future-of-legal-work.html</link>
		
		
		<pubDate>Mon, 27 Apr 2026 10:00:09 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[Andrew Thompson]]></category>
		<category><![CDATA[legal AI]]></category>
		<category><![CDATA[legal technology]]></category>
		<category><![CDATA[Orbital]]></category>
		<category><![CDATA[podcast]]></category>
		<category><![CDATA[property transactions]]></category>
		<category><![CDATA[real estate law]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19260</guid>

					<description><![CDATA[This week on The Geek in Review, we talk with Andrew Thompson, CTO of Orbital, about why legal AI built for a specific practice area has a strong claim in a market crowded by general-purpose models. Thompson explains how Orbital focuses on real estate law, using AI, spatial intelligence, and legal workflow design to support... <a href="https://www.geeklawblog.com/2026/04/orbital-cto-andrew-thompson-on-practice-area-ai-real-estate-law-and-the-future-of-legal-work.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<div class="relative basis-auto flex-col -mb-(--composer-overlap-px) pb-(--composer-overlap-px) [--composer-overlap-px:28px] grow flex">
<div class="flex flex-col text-sm">
<div class="" data-turn-id-container="request-WEB:274fa81f-197e-4bc4-9259-d6fc3f7e1d4c-3" data-is-intersecting="true">
<section class="text-token-text-primary w-full focus:outline-none [--shadow-height:45px] has-data-writing-block:pointer-events-none has-data-writing-block:-mt-(--shadow-height) has-data-writing-block:pt-(--shadow-height) [&amp;:has([data-writing-block])&gt;*]:pointer-events-auto R6Vx5W_threadScrollVars scroll-mb-[calc(var(--scroll-root-safe-area-inset-bottom,0px)+var(--thread-response-height))] scroll-mt-[calc(var(--header-height)+min(200px,max(70px,20svh)))]" dir="auto" data-turn-id="request-WEB:274fa81f-197e-4bc4-9259-d6fc3f7e1d4c-3" data-testid="conversation-turn-2" data-scroll-anchor="false" data-turn="assistant">
<div class="text-base my-auto mx-auto pb-10 [--thread-content-margin:var(--thread-content-margin-xs,calc(var(--spacing)*4))] @w-sm/main:[--thread-content-margin:var(--thread-content-margin-sm,calc(var(--spacing)*6))] @w-lg/main:[--thread-content-margin:var(--thread-content-margin-lg,calc(var(--spacing)*16))] px-(--thread-content-margin)">
<div class="[--thread-content-max-width:40rem] @w-lg/main:[--thread-content-max-width:48rem] mx-auto max-w-(--thread-content-max-width) flex-1 group/turn-messages focus-visible:outline-hidden relative flex w-full min-w-0 flex-col agent-turn">
<div class="flex max-w-full flex-col gap-4 grow">
<div class="min-h-8 text-message relative flex w-full flex-col items-end gap-2 text-start break-words whitespace-normal outline-none keyboard-focused:focus-ring [.text-message+&amp;]:mt-1" dir="auto" data-message-author-role="assistant" data-message-id="86c3c321-37cd-4ae7-8c62-a2d3a070a7b2" data-turn-start-message="true" data-message-model-slug="gpt-5-5-thinking">
<div class="flex w-full flex-col gap-1 empty:hidden">
<div class="markdown prose dark:prose-invert w-full wrap-break-word dark markdown-new-styling">
<p>This week on The Geek in Review, we talk with <a href="https://www.linkedin.com/in/andrewmatthewthompson/">Andrew Thompson</a>, CTO of <a href="https://www.orbital.tech/">Orbital</a>, about why legal AI built for a specific practice area has a strong claim in a market crowded by general-purpose models. Thompson explains how Orbital focuses on real estate law, using AI, spatial intelligence, and legal workflow design to support transactions involving property portfolios, title review, survey analysis, and complex documentation. With more than 200,000 property transactions processed and a major $60 million Series B investment fueling its U.S. expansion, Orbital sits at the center of the debate over whether the future of legal AI belongs to broad model platforms or tools built for the messy details of actual legal work.</p>
<p data-start="870" data-end="1425">Thompson&rsquo;s path into legal technology brings a practical operator&rsquo;s mindset to the conversation. Before Orbital, he worked across software, fintech, proptech, and real estate marketplaces, where speed, accuracy, and operational friction shaped business outcomes. That background informs his view that successful legal AI starts with the work itself rather than the model alone. For Orbital, the key is teaching AI to think like a real estate lawyer at the right level of abstraction, then pairing the model with domain-specific tools, data, and workflows.</p>
<p data-start="1427" data-end="2057">The conversation gets especially interesting when Thompson walks through Orbital&rsquo;s use of spatial intelligence. Real estate law often turns written legal descriptions, old maps, title documents, surveys, and boundaries into high-stakes decisions about physical land. Thompson explains the challenge of moving from words on a page to points, lines, curves, and property boundaries on a map. This leads to a broader discussion of large language models, visual language models, OCR, and classical machine learning, with Thompson making clear that the best current systems still require a toolbox rather than blind faith in one model.</p>
<p data-start="2059" data-end="2601">We also explore Thompson&rsquo;s concept of the &ldquo;prompt tax,&rdquo; the hidden maintenance burden created when model behavior changes faster than product teams expect. Thompson describes Orbital&rsquo;s mantra of &ldquo;betting on the model,&rdquo; which means building for where AI capabilities are heading while still delivering value today. He separates durable domain expertise from brittle prompt tricks, arguing that legal AI companies need reusable legal knowledge, strong evaluation habits, and a willingness to rebuild assumptions as models improve.</p>
<p data-start="2603" data-end="3238" data-is-last-node="" data-is-only-node="">Looking ahead, Thompson sees the impact of AI arriving faster than the standard three-to-five-year forecast. He points to software engineering as an early signal for what legal work might experience next, with professionals increasingly orchestrating humans and AI agents together. The billable hour, client value, accountability, empathy, and judgment all come under pressure as AI handles more cognitive labor. For real estate lawyers and legal technologists, Thompson&rsquo;s message is direct: the winners will be those who understand the work deeply, build with technical humility, and know when the map matters as much as the document.</p>
</div>
</div>
</div>
</div>
<div class="z-0 flex min-h-[46px] justify-start">
<p data-start="1979" data-end="2573"><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>Listen on mobile platforms:&nbsp;&nbsp;</strong></span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://podcasts.apple.com/us/podcast/the-geek-in-review/id1401505293" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Apple Podcasts&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span>&#8288;</a><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>&nbsp;|&nbsp;&nbsp;</strong></span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://open.spotify.com/show/53J6BhUdH594oTMuGLvANo?si=XeoRDGhMTjulSEIEYNtZOw" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span>&#8288;</a><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&nbsp;|&nbsp;</span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://www.youtube.com/@thegeekinreview" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span></a>&nbsp;|&nbsp;<a 
href="https://thegeekinreview.substack.com/">Substack</a></p>
<p><span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">[Special Thanks to&nbsp;</span></span><a class="Link-sc-k8gsk-0 feDGbw e-9652-text-link sc-jWfcXB gQGioO" href="https://www.legaltechnologyhub.com/" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">Legal Technology Hub</span></span>&#8288;</a><span data-slate-node="text" data-slate-fragment="JTVCJTdCJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmNoaWxkcmVuJTIyJTNBJTVCJTdCJTIydGV4dCUyMiUzQSUyMiU1QlNwZWNpYWwlMjBUaGFua3MlMjB0byUyMCUyMiU3RCUyQyU3QiUyMnR5cGUlMjIlM0ElMjJsaW5rJTIyJTJDJTIydXJsJTIyJTNBJTIyaHR0cHMlM0ElMkYlMkZ3d3cubGVnYWx0ZWNobm9sb2d5aHViLmNvbSUyMiUyQyUyMnRhcmdldCUyMiUzQSUyMl9ibGFuayUyMiUyQyUyMnJlbCUyMiUzQSUyMnVnYyUyMG5vb3BlbmVyJTIwbm9yZWZlcnJlciUyMiUyQyUyMmNoaWxkcmVuJTIyJTNBJTVCJTdCJTIydGV4dCUyMiUzQSUyMkxlZ2FsJTIwVGVjaG5vbG9neSUyMEh1YiUyMiU3RCU1RCU3RCUyQyU3QiUyMnRleHQlMjIlM0ElMjIlMjBmb3IlMjB0aGVpciUyMHNwb25zb3JpbmclMjB0aGlzJTIwZXBpc29kZS4lNUQlMjIlN0QlNUQlN0QlNUQ="><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">&nbsp;for their sponsoring this episode.]</span></span></p>
</div>
</div>
</div>
<div class="text-center">
<p><iframe title="Spotify Embed: Orbital CTO Andrew Thompson on Practice Area AI, Real Estate Law, and the Future of Legal Work" style="border-radius: 12px" width="100%" height="152" frameborder="0" allowfullscreen allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy" src="https://open.spotify.com/embed/episode/0OzfkTweWTpmv70jRtm4Z6?si=0ZPjubZXSO6kDDmT9jzQnQ&amp;utm_source=oembed"></iframe></p>
<p><a href="https://www.youtube.com/watch?v=VGKJPefS_S4"><img style=" max-width: 100%; height: auto; " src="https://www.geeklawblog.com/wp-content/uploads/sites/528/embed_thumbs/VGKJPefS_S4.png"></a></p>
<p>Email: geekinreviewpodcast@gmail.com<br>
Music: Jerry David DeCicca</p>
<h5>Transcript:</h5>
</div>
</section>
</div>
</div>
</div><p><span id="more-19260"></span></p><p>Nikki Shaver (00:00)<br>
Hi Greg and Marlene, I&rsquo;m coming to you live from the English countryside. I really want to let all of your viewers and listeners hear about something that I think will be really interesting for them. One of the best ways, of course, to stay on top of technology in this crowded market is to see demos of new products and new product features. All of us think about new products, but of course the products that are out there, partly because it&rsquo;s easier to build with AI, are moving faster, evolving faster than ever before, and dropping new features all the time. So it&rsquo;s also important to stay on top of that.</p><p>But of course, it takes time to individually reach out to vendors. And you may not necessarily know what&rsquo;s new and worth looking at, or perhaps you don&rsquo;t want to commit to a relationship with a vendor right now by reaching out and opening up that dialogue, but you&rsquo;d still like a sneak peek at the solution.</p><p>Why not let us at Legaltech Hub take care of that for you? We are organizing what we call demo dozens. These are sessions where we get 12 vendors to come in and show us an update on their product, or the first demo of their product at their early stage. You get to register for free, come along, and you get the schedule ahead of time so you can pop in for as many or as few as you&rsquo;d like to see. You get a recording afterwards. It&rsquo;s a great way to stay on top of the market.</p><p>Go to legaltechnologyhub.com, look for the events dropdown on the top menu, go to LTH events, and you&rsquo;ll see the link to register for the next demo dozen, which is coming up on May 19th. And stay tuned, we&rsquo;ll be doing these on a regular basis.</p><p>Marlene Gebauer (01:42)<br>
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I&rsquo;m Marlene Gebauer.</p><p>Greg Lambert (01:49)<br>
And I&rsquo;m Greg Lambert. And Marlene, we&rsquo;ve seen over the past few months where the AI market has felt like it&rsquo;s just gone through all of these shifts. Even when foundational model companies like Anthropic, you know, create a legal plug-in, it causes a huge disruption in the market. And yet we&rsquo;re still seeing specialist vertical startups that are raising massive rounds to prove that these foundational models can&rsquo;t necessarily solve it all.</p><p>Marlene Gebauer (02:18)<br>
Yeah, that&rsquo;s right. You know, we&rsquo;re currently in a high-stakes debate over whether the future of law belongs to the one-model-for-everything giants or the specialized scalpels designed for specific practice areas. And our guest today is right at the epicenter of this debate.</p><p>Greg Lambert (02:36)<br>
We&rsquo;re thrilled to welcome Andrew Thompson, the CTO of Orbital. Andrew is a lifelong geek (we love that) who&rsquo;s an engineer and has over two decades of experience in a career that spans from fintech to prop tech. And now he&rsquo;s diving into legal AI.</p><p>Marlene Gebauer (02:54)<br>
Today he leads the technical vision for Orbital, a real estate legal AI platform that has processed over 200,000 transactions and recently closed a massive $60 million Series B round to fuel its expansion into the US market. Andrew, welcome to The Geek in Review.</p><p>Andrew Thompson (03:13)<br>
Thank you very much for having me, Marlene and Greg.</p><p>Greg Lambert (03:15)<br>
All right, Andrew, before we jump into our questions and talk about your work there at Orbital, would you mind giving our listeners a bit of an elevator talk on what Orbital does and who your main clients are and how they use your product?</p><p>Andrew Thompson (03:30)<br>
Yeah, absolutely. So, Orbital is an AI built specifically for real estate. That specialism is what sets us apart from generalist legal tech and foundation model plugins. We go really deep on that domain. Our team combines real estate lawyers and technical experts who have first-hand experience with real estate legal work and property transactions. And we play in this space with real estate that is worth about 140 billion.</p><p>And some of our clients include Clifford Chance, Seyfarth Shaw, Grosvenor, Vinson &amp; Elkins, and many more. We did about 200,000 property transactions last year in the US and the UK.</p><p>Greg Lambert (04:10)<br>
I&rsquo;ve talked to a number of real estate lawyers who saw your demo on Legaltech Hub.</p><p>They were really impressed by what they saw there. So Andrew, let&rsquo;s talk about you for a second. Your path to becoming CTO at a major legal tech player hasn&rsquo;t been exactly a straight path, which is not unusual in this industry. So how did the operational discipline and the focus on the unsexy parts of efficiency in the service industry help shape your</p><p>Andrew Thompson (04:18)<br>
Thank you.</p><p>Greg Lambert (04:45)<br>
approach to building this agentic system for real estate lawyers here at Orbital.</p><p>Andrew Thompson (04:51)<br>
Okay, great question Greg. So let me start with sort of the first part and then I&rsquo;ll go into the second part of sort of the agentic bit. So yeah, my career is a little bit sort of nonlinear. I&rsquo;ve been a software engineer, but always sort of really early on in my career was always well aware that if you point software engineers at the wrong problem, that doesn&rsquo;t turn out well. So I&rsquo;ve always sort of been an engineer at heart.</p><p>Greg Lambert (05:13)<br>
They create really cool stuff that has no practical effect. Yes, I know I&rsquo;ve been there.</p><p>Andrew Thompson (05:16)<br>
Look, yeah, yeah, that happens a lot. So</p><p>Marlene Gebauer (05:17)<br>
Hahaha.</p><p>Andrew Thompson (05:19)<br>
I was always being like, I&rsquo;m going to be technical, but like, how do we make sure that we actually build a product customers want? And that&rsquo;s been sort of one of the key threads throughout my career. I&rsquo;ve been involved, interestingly, in Canada in a prop tech business and then here in London. And prior to coming to Orbital, I was at a business called Appear Here. And it was essentially an Airbnb for sort of Main Street properties.</p><p>So we would go to landlords in Paris, New York, London, Amsterdam, Milan, Chicago, find spaces that we can put on a platform and sell or rent out on the short term to brands like Chanel, Dior, Nike, Adidas. And so, sort of, I had engineers, I had product people, but I also had real estate brokers on my team, which is very different for a CTO. And the reason we were doing that is we would have brands that come to us and sort of tell us what exact spaces they want in the best cities in the world, what shop frontage, how much square foot, what they were potentially willing to pay, and I would send brokers out into Main Street to go and find those spaces. Once we found them, then we needed to transact them really quickly. And as you can imagine, that sort of obviously became a bit of a painful process. Those spaces&hellip;</p><p>Greg Lambert (06:24)<br>
Yeah, I see the problem in that process, the quickly part.</p><p>Andrew Thompson (06:28)<br>
Exactly. Yeah.</p><p>And so, you know, the quicker we could get the leasing done on that and get the space on our platform once we found it, the better; it was obviously a competitive market for some of the best spaces. And so I was dealing with real estate lawyers day in and day out there as well, in order to provide value to our business. So coming into Orbital, I understood the unique challenge that that had and maybe what was required. This is prior to the whole current AI revolution; you know, we were building kind of machine learning or deep learning when I first started with the business over five and a half years ago. And so that&rsquo;s my non-linear path: I&rsquo;ve always been at this intersection between the technology and trying to get to the heart of really the customer, where often the physical environment plays a key part.</p><p>So, the second part of your question, how does that actually fit into building these agentic systems? You know, most people have this experience: ChatGPT comes out, you have this sort of, wow, this is amazing moment. How do I actually use this fundamental new technology to build product at our company? And so maybe I&rsquo;ll mention we have mantras inside our organization, the sort of principles that you focus on. Again, whenever something new comes out, and I&rsquo;ve seen going from on-premise software to the cloud, from desktop to mobile, there&rsquo;s always these intuitions that are very right going from one paradigm shift to the next.</p><p>But there&rsquo;s always some intuitions that are very, very wrong. Like if you just imagine mobile: I know, let&rsquo;s take a giant website and shrink it down to the tiny bit of your screen. That&rsquo;ll work for customers. And clearly that didn&rsquo;t. You needed to reimagine the UX.
And so in this sort of agentic paradigm, you sort of have to operate differently. I&rsquo;ll tell you, remember when, you know, GPT-3.5 blew everybody&rsquo;s socks off, and if you squinted, you could see it doing a little bit of legal reasoning, but if you pushed it any further than one or two prompts, you realized it would sort of fall over. But it was that sort of insight. One of the mantras we have is betting on the model. So this idea of, regardless of where it is today, if you looked in your crystal ball, where can you imagine that these models are gonna go cost-wise, intelligence-wise, capability-wise, speed-wise, and then try to build a product for that?</p><p>You know, every company wants to build a product that customers love, and you want to get out there and be first to market. But when a brand new technology comes out, like some of these AI models that are, you know, at least in the early days, very rough around the edges, you kind of need to figure out what is the shape of the product that customers actually want, knowing that as it gets better over the next six, 12, 18 months, like, you know, we&rsquo;ve all seen that the product will build into that. I think, you know, it obviously hasn&rsquo;t been a massively long time since ChatGPT and AI have been out, but there&rsquo;s already been lots of
That&rsquo;s a lot of what software businesses are trying to grapple with.</p><p>Marlene Gebauer (09:34)<br>
So I want to set the stage for the next question, Andrew. In early 2026, we saw this massive market panic after Anthropic launched a legal plugin, and investors were fearing that, you know, middlemen in legal tech were going to be dismantled overnight. You&rsquo;ve been quite vocal that this one-model-for-everything narrative is really missing something fundamental about the legal market. So why do you believe that practice area AI is ultimately going to win out over generalist tools, and what does a specialized platform like Orbital provide that a general-purpose model like Claude or ChatGPT simply can&rsquo;t?</p><p>Andrew Thompson (10:15)<br>
Yeah, it&rsquo;s a great question, but it&rsquo;s also pretty much the question I think most people are grappling with at the moment. Yeah. You obviously highlighted legal, but I see this everywhere. This idea that, you know, AI labs are building better and better general intelligence. What was bleeding edge two years ago, 18 months ago, even a year ago or six months ago, is now kind of table stakes; everybody can do it. And it&rsquo;s sort of&hellip;</p><p>Marlene Gebauer (10:21)<br>
Ha ha.</p><p>Greg Lambert (10:22)<br>
Tell us what the secret sauce is.</p><p>Marlene Gebauer (10:24)<br>
What is it? We want to know.</p><p>Andrew Thompson (10:43)<br>
Ultimately, is everything going to be consumed, or is there a way to build software where you&rsquo;re constantly a level above what the AI labs are churning out with their models? And so with that in mind, as you&rsquo;ve articulated there, I think about it as sort of a barbell approach. It&rsquo;s not an either-or; I think the hollow middle is not being opinionated, trying to leverage a little bit of the generic but only going so far into solving the problem deeply for your customers. You kind of want both, on either side. And so maybe, instead of just purely talking about it, let&rsquo;s take a step back and think about it without thinking about AI. If you have such clever people, and again, I&rsquo;m gonna anthropomorphize slightly here, so take it with a pinch of salt. But there&rsquo;s a lot of smart people, let&rsquo;s just take for example a bunch of PhDs, and they&rsquo;re not necessarily great at real estate law right out of the gate.</p><p>There&rsquo;s a whole bunch of training and tools that they need, and on-the-job experience. That intelligence as a precursor is obviously really good; the more intelligent lawyers often are really good at what they do, but there&rsquo;s a whole category of stuff that comes after that.</p><p>Using that as a bit of an analogy, separating the generalized intelligence from the vertical ones: you can start with something general and build something light on top. And yeah, you&rsquo;re a little bit better than the general, but you&rsquo;re almost at threat, as each time they get better and better, you very quickly get eclipsed. And we&rsquo;ve obviously, you know, very much seen this recently over the last few months.
And so I think the way that AI changes this is that, you know, from this analogy with humans, AI already has all of this legal and real estate knowledge built in. And so that&rsquo;s why it&rsquo;s a little different than humans, right? Humans do take years to train, and sort of years, sometimes decades, of on-the-job experience to gain all that experience. A lot of that, not everything, but a lot of that is built inside the model. And so it&rsquo;s sort of, how do you leverage that model? How do you prompt it? How do you instruct it to think like, in our case, a real estate lawyer, and then what specific tools do you need to give it in order to be vastly better than the general tools? I think that&rsquo;s ultimately the answer to the question.</p><p>That&rsquo;s ultimately, I think, the rough playbook that a lot of companies like us and others are thinking about from a software engineering perspective. But then, you know, let&rsquo;s take a step back from AI and software engineering and products. If you just look at what our customers, and other people&rsquo;s customers, are doing on the ground, everybody&rsquo;s enamored with AI. They get a lot of value out of it. And oftentimes you can get 80% there. I&rsquo;m just cherry picking here, but 80% of the solution.</p><p>But that last 20% is really hard. It requires not just prompting the model better; it requires proprietary knowledge, or some of the tools that you&rsquo;re giving it need to be hyper-specialized for the real estate practice area. And so for us, we will continue to leverage the AI models, and we&rsquo;re expecting them to get, you know,
But so long as the tools and the data on top of those models, will leverage to solve our customers&rsquo; problems in a truly deep way that these generalized systems aren&rsquo;t able to do.</p><p>Marlene Gebauer (14:11)<br>
I have a sort of out-of-left-field question. So I&rsquo;ll just throw it out there and see what happens. Do you think this kind of debate, in terms of point solutions versus the general solutions, is going to have any impact on things like the energy consumption that we are all thinking about in terms of using GenAI models?</p><p>Andrew Thompson (14:14)<br>
Go for it.</p><p>So you&rsquo;re saying it&rsquo;s like, will the energy consumption be different if you&rsquo;re using a general model or a&hellip;</p><p>Marlene Gebauer (14:44)<br>
Yeah, will it have a positive impact?</p><p>Andrew Thompson (14:46)<br>
I don&rsquo;t think I&rsquo;m educated enough to know categorically the answer on that, actually. Like, I was just thinking, ultimately, if we go from first principles, what really matters here? The kind of unit of measurement here is tokens. The more tokens you use, the more GPUs are crunching them, the more energy you&rsquo;re using. And so ultimately I think, from an application developer&rsquo;s perspective, the fewer tokens you use, the less power you use.</p><p>I think you could probably argue that you&rsquo;d be slightly more efficient if you really understood the domain, and with real estate there are sometimes a huge amount of documents. We&rsquo;re talking a portfolio of a thousand properties, and each property has 10 documents, and each of those documents is 50 to 100 pages. You ultimately do need to read every single word, or quote unquote token, on those pages, and so you are still sort of burning some of the power there.</p><p>I think ultimately where the efficiency comes from more is a layer below the application stack. Obviously some application developers can be wasteful, but most of the time we are not wasteful, because we have to spend the money on tokens. And so I think, you know, we&rsquo;ve already seen cost reductions in tokens. And I think partly that comes from Nvidia&rsquo;s GPUs getting better and more energy efficient, as well as the AI labs producing better algorithms that are vastly more efficient. I think that&rsquo;s ultimately where the gains are mostly gonna come from. That would be my sort of thinking-on-the-spot response to that.</p><p>Greg Lambert (16:11)<br>
I like that question though. Maybe one of the PhDs that&rsquo;s listening to this can write a paper on it, Marlene. Andrew, I want to talk about real estate itself. We talk about real estate being the largest asset class in the world.</p><p>Marlene Gebauer (16:12)<br>
And we, go ahead.</p><p>Andrew Thompson (16:14)<br>
Hmm.</p><p>Greg Lambert (16:33)<br>
And yet the work, and I know Marlene and I have both seen this from our real estate lawyers, is still very manual, it&rsquo;s fragmented, there&rsquo;s all kinds of nuance in it. People may actually say that it really hasn&rsquo;t changed that much in the last couple hundred years. So you focused Orbital on the spatial visualization and connecting deeds to the physical reality of the land. And I&rsquo;ve actually seen demos of this, and it&rsquo;s really, really interesting how you do the overlays on it and you&rsquo;re working with the land itself on those maps. So can you explain the technical challenges of teaching the AI to kind of read the property boundaries and the historical maps, and how the spatial intelligence actually speeds up the property deal?</p><p>Andrew Thompson (17:21)<br>
Mmm.</p>
<p>Yeah, absolutely. So, maybe I&rsquo;ll touch on a few of the challenges; there&rsquo;s quite a lot in real estate. A lot of people are used to it being document heavy, a lot of text, and then reasoning over that, and now there&rsquo;s this sort of visual and spatial layer. So I think, first and foremost, there&rsquo;s a reasoning challenge in turning what is typically a textual legal description of the property into a collection of essentially points and lines and curves, and you then need to create the property boundary from that. So you can imagine, and I&rsquo;m just going to cherry pick something that&rsquo;s often slightly humorous, there&rsquo;ll be written in the technical description: go to the tree on the corner, walk 20 paces, turn 30 degrees, and you&rsquo;re mapping this all out. That is written sometimes in a beautifully photocopied document, sometimes not so much so. So you first have to extract that, you then have to interpret it and make sure that the lines and points are correct, and then actually plot that in a more classical way without using AI. You&rsquo;ve done the extraction and now you&rsquo;re plotting it on a map.</p><p>I think there&rsquo;s visual challenges off the back of this. We all know the word LLMs, large language models, and people have talked about VLMs, visual language models. They are kind of quite a few steps behind where we are intelligence-wise with large language models. You know, we can all take a picture of our fridge and ask, what do I want for dinner tonight? But if you take a picture of a very complex real estate survey or plan, you know, it just sort of falls over. And so you need a combination of these VLMs analyzing, but then you also need to drop down into more classical machine learning or computer vision techniques.
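</p>
<p>The geometric core of what Thompson describes here, turning a chain of distance-and-bearing calls into boundary vertices, can be sketched in a few lines. This is a minimal illustration under assumed conventions (planar coordinates, bearings measured clockwise from north), not Orbital&rsquo;s actual pipeline; the OCR and model-driven extraction layers he mentions are omitted:</p>

```python
import math

def trace_boundary(start, calls):
    """Turn a metes-and-bounds style walk into boundary vertices.

    start: (x, y) planar coordinates of the point of beginning.
    calls: list of (distance, bearing_degrees) legs, with bearings
           measured clockwise from north, as on a survey plat.
    """
    x, y = start
    vertices = [start]
    for distance, bearing in calls:
        rad = math.radians(bearing)
        x += distance * math.sin(rad)  # east component of the leg
        y += distance * math.cos(rad)  # north component of the leg
        vertices.append((round(x, 6), round(y, 6)))
    return vertices

# A square plot described as four 20-unit calls: east, south, west, north.
# A valid description closes back on the point of beginning.
square = trace_boundary((0.0, 0.0), [(20, 90), (20, 180), (20, 270), (20, 0)])
```

<p>A real description rarely closes perfectly, so a production system would also compute the closure error and flag walks that fail to return to the point of beginning within tolerance.</p>
<p>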
And this is all orchestrated by AI. So much like a human can come in and, you know, look at the legal description on the, let&rsquo;s just say, the title commitment, and then go and plot that manually on a map, you can now get AI to do all of that automatically, piecing together all of those pieces in a way that typically would take hours for a human to do manually.</p><p>Greg Lambert (19:35)<br>
Yeah, I am so glad you mentioned VLMs, because I&rsquo;ve been aching to talk to somebody about this since I played around with some a couple of weeks ago. And just to kind of talk about the difference between the large language models and the visual language models, I actually did a test a couple of weeks ago where I threw a scanned document at both, and Claude actually gave me a really good explanation. I think I used the</p><p>Andrew Thompson (19:56)<br>
Mm-hmm.</p><p>Greg Lambert (20:02)<br>
VLM model to do the visual.</p><p>And I asked it, you know, kind of what the difference was, and the response that I got back from Claude, and I wanted to kind of verify this with you, Andrew, is that the large language models can look at a document, can look at an image, and can kind of give you the explanation of what&rsquo;s there. It can kind of tell you, you know, this is your refrigerator, I see some milk, I see some, you know, some apples or whatever.</p><p>The visual language model can actually explain everything that&rsquo;s on the image. Here&rsquo;s the text, here&rsquo;s the handwritten text, here&rsquo;s the image of the apple and the milk. But it can&rsquo;t necessarily give you the explanation of how it all gets put together. So how do you use the combination of the two?</p><p>Andrew Thompson (20:55)<br>
Hmm.</p><p>Marlene Gebauer (20:59)<br>
I&rsquo;m also curious about the difference between how they&rsquo;re trained. You know, is one trained on sort of text, but it understands visuals, and the other trained on visuals? Like, how does, what is the difference?</p><p>Andrew Thompson (21:11)<br>
Yeah, let&rsquo;s start with that question. I obviously don&rsquo;t, we don&rsquo;t train our own vision VLM. But I believe, you know, all of these systems need training data. When you look at the LLMs, they&rsquo;re fed on the internet, they&rsquo;re fed on books. There&rsquo;s, you know, multi-billion pound companies producing labeled data with doctors and lawyers and software engineers, and you&rsquo;re ingesting code bases. And so that&rsquo;s its training data. And then you need to layer reinforcement data on top to steer it in the right direction.</p><p>I believe the approach is very similar for VLMs. And so you need images, and then you need textual explanations on top of that. That&rsquo;s either historic data that&rsquo;s just always been there, which the AI labs can scoop up and include in training data much like they do the internet. Or, I don&rsquo;t know this for sure, but I imagine they&rsquo;re trying to get their hands on lots of images and then getting, quote unquote, experts to look over them and produce a bit of a write-up. And then you can kind of understand what&rsquo;s happening in the image, what does it have? You can understand some of the more domain-specific elements of that image. Like again, just back to this trivial example of a fridge, you might have a chef with a picture of a fridge then go, okay, well you can make spaghetti bolognese tonight, or maybe you can make a salad based on those ingredients, but not much else. And so I think&hellip;</p><p>All of that is getting sucked in from a training perspective to make these better and better. The more data you have, the better the VLMs get, and the better the quality of that data, the better the VLMs get. I think the reason they are behind is that we just don&rsquo;t have a lot of that data in the world, relative to all the text.
You think about newspapers, you think about even this recording getting turned into a transcript, or you think about code</p><p>that is open source sitting in GitHub; all of that is rich. Then take our domain: very detailed, real-estate-specific legal images that are buried in 100-page PDFs that are private information. The model labs don&rsquo;t have access to that, and so they just can&rsquo;t reason well over those images yet. To your question, Greg, just remind me what that was again.</p><p>Greg Lambert (23:16)<br>
When you&rsquo;re combining the use of it, how are you bridging the VLM information and the LLM information so that, instead of getting one plus one equals two, you&rsquo;re getting a sum greater than the individual parts?</p><p>Andrew Thompson (23:31)<br>
Yeah.</p><p>Great idea. I think there&rsquo;s been a wholesale shift. Again, let&rsquo;s just take the textual side. Prior to ChatGPT, most people in LegalTech 1.0 were using classical machine learning, and, with a few players excepted, most have just wholesale moved over to using LLMs. I think we&rsquo;re still in a world where, in order to maximize accuracy and trust and minimize hallucinations,</p><p>we have to be doing both of those at the same time. I don&rsquo;t know of anybody, I&rsquo;m sure there are some, but from our experience and our domain, you can&rsquo;t just plug these real estate legal images into a VLM at the moment. You can get some information, but you still need to use classical machine learning techniques. These documents still need to be OCR&rsquo;d. And so it&rsquo;s a combination of the two, to almost sense-check each other, much like you go to a doctor and then get a second recommendation.</p><p>VLMs have their strengths over classical machine learning, and classical machine learning has its strengths over VLMs, and the two together are great. Long term, you can imagine the same thing that happened with LLMs will happen with VLMs, but we&rsquo;re not quite there today.</p><p>Greg Lambert (24:40)<br>
Thanks.</p><p>Marlene Gebauer (24:40)<br>
dynamic duo.</p><p>Andrew Thompson (24:42)<br>
Yeah, back to that thing.</p><p>It&rsquo;s like everything&rsquo;s different now. Three, six, nine, twelve months from now, where is a VLM going to be? Part of our job is trying to squint and figure out: I think it&rsquo;s going to be better, or it&rsquo;s stagnating. And I guess that&rsquo;s part of the challenge, but also the fun, of building software in this industry at the moment: you&rsquo;re not entirely sure how fast the world is going to move or in what trajectory.</p><p>Greg Lambert (25:06)<br>
Yeah, well even so, Andrew, in that conversation, you were still talking about the need for OCR. You were still talking about the need for machine learning. So it&rsquo;s almost like we haven&rsquo;t really gotten to where one tool runs it all. You still need to know this whole toolbox of different tools and which one to use at the right time.</p><p>Marlene Gebauer (25:28)<br>
Well, talking about moving quickly: at the 2025 AI Engineer World&rsquo;s Fair, and now I know that there&rsquo;s an AI Engineer World&rsquo;s Fair, you introduced a concept called the prompt tax, which is the hidden cost of staying on the bleeding edge while the foundational models are constantly upgrading and breaking existing workflows. So.</p><p>Andrew Thompson (25:40)<br>
Yeah.</p><p>Marlene Gebauer (25:57)<br>
Your mantra of betting on the model, you know, building for where the tech is going, not where it is today. How do you practically manage, say, a prompt library that might become obsolete every six months without turning your product into, I think the quote was, a fragile mesh mess. I can&rsquo;t talk.</p><p>Greg Lambert (26:15)<br>
Six weeks.</p><p>Andrew Thompson (26:17)<br>
Yeah.</p><p>Great point. That AI Engineer World&rsquo;s Fair is an absolute gold mine for folks in our industry. It&rsquo;s people at the bleeding edge all staring at similar problems from different directions, going: how do we grapple with what&rsquo;s happening? And you learn so much from people right at the bleeding edge, so I&rsquo;d recommend it for your viewers if they&rsquo;re not familiar. But to your question, let me separate two things. I think there&rsquo;s teaching an AI system</p><p>how to be a real estate lawyer. And then there&rsquo;s all these other things people have called prompt engineering or prompt tax, overfitting your AI to a given model that happens to have come out maybe six months ago or three months ago. So let&rsquo;s talk about the first one, teaching an AI to think like a real estate lawyer. What we&rsquo;ve discovered over time is that if you get that wrong, if you specify it too specifically, then with each new model you have to change a huge amount,</p><p>versus if you find the right abstraction level. Again, I&rsquo;m going to anthropomorphize for a second, back to what we said earlier: you&rsquo;ve got a really intelligent human, they go to university, and they learn some of the theory and the fundamentals. It&rsquo;s that level of abstraction when we talk about how we&rsquo;re writing prompts to teach an AI system how to be a real estate lawyer. That&rsquo;s our proprietary knowledge. We have real estate lawyers on our team in the US and in the UK</p><p>adding incremental value, or sweeping amounts of value, on how to do various workflows. Things like, we talked earlier about legal descriptions: where exactly do you find those in the real estate documents? What happens when you have two legal descriptions that don&rsquo;t match related to the same property? How do you extract those? 
How do you plot them? What do you do when the boundary doesn&rsquo;t match up? And so all of that information isn&rsquo;t lost. And I think what&hellip;</p><p>is also helpful is that these models have gotten better and better. When that talk came out last year, and the world moves pretty fast, there was a little bit of a worry that as our prompt library got bigger and bigger, it would just take more and more work to prune it every time a new model came out. But as models are coming out, they&rsquo;re getting better at handling all of these things. And so because we&rsquo;ve captured what a real estate lawyer is at a sufficient abstraction level, that</p><p>stays around for all time, and it continues to compound and be valuable for our product. The thing that I think has very quickly become irrelevant, no longer needed, the models don&rsquo;t even care about it anymore, is what people call prompt engineering or prompt tax. There were all these papers coming out saying that if you told the model it was super important to your career and it couldn&rsquo;t get this wrong, the hallucination rate would go down by a few percentage points.</p><p>We tried all of that, we saw the results, and it was like, that&rsquo;s great. A lot of that stuff is just melting away into nothing, and you no longer have to do it. I think a lot of that knowledge is now baked into the AI models; we keep talking about them getting better and better. That was a short-term fix for a problem that has now disappeared. Back when I created that presentation, I was a little worried: what happens when our prompt library is double the size, 10 times as big? I&rsquo;m actually no longer worried,</p><p>partly for those reasons, but also partly because we can take our prompt library, feed it to these models, and just iteratively say: this is what we&rsquo;re finding from one model to the next. 
Help us update all of these prompt libraries. And a lot of this work, even internally for us, is getting automated.</p><p>Greg Lambert (29:42)<br>
So you&rsquo;re telling me I can stop lying to my AI tool, telling it I&rsquo;m going to give it a $150 tip if it gets it right. Good.</p><p>Marlene Gebauer (29:46)<br>
you</p><p>Andrew Thompson (29:50)<br>
I would need to see the data, but probably</p><p>Marlene Gebauer (29:53)<br>
Hahaha</p><p>Andrew Thompson (29:53)<br>
so. Or at least, the days of needing to do that are quite numbered.</p><p>Greg Lambert (29:58)<br>
Yeah, I&rsquo;m always worried that its memory will remember that I&rsquo;ve offered to give it money and all of a sudden it has access to my account. Yeah. So Orbital recently opened up some US offices. I know you&rsquo;ve got New York and I know you&rsquo;re looking at&hellip;</p><p>Marlene Gebauer (30:03)<br>
Follows up, yeah.</p><p>Andrew Thompson (30:04)<br>
Exactly, it&rsquo;ll extort you later.</p><p>Marlene Gebauer (30:08)<br>
All of a sudden a guy comes to visit you, you know.</p><p>Greg Lambert (30:18)<br>
Chicago and Austin as possibilities, following the $60 million Series B round. And you&rsquo;ve noted that the U.S. real estate legal services market is a $140 billion opportunity that, and again, we&rsquo;ve seen it, is still dramatically under-automated in its processes. So as you move into the U.S. market, what are the biggest differences you&rsquo;re seeing</p><p>in how, say, the AmLaw 100 firms approach AI compared to what you were seeing in the Magic Circle firms back in the UK?</p><p>Andrew Thompson (30:57)<br>
Yeah, absolutely. I think that&rsquo;s a great question. Just to reframe that slightly: we&rsquo;ve definitely fully moved in. We&rsquo;re in the US now. We&rsquo;ve got the majority of the top 20 real estate practices using Orbital. I&rsquo;ve mentioned some of the names before, like Seyfarth Shaw, BCLP, Vinson &amp; Elkins, Goodwin, Polsinelli. You know, I&rsquo;ve been in a lot of businesses where you take your product to a different country, especially in a regulated industry like real estate legal,</p><p>and you&rsquo;re like, how much of my product needs to be completely rebuilt from the ground up versus how much of it is applicable? And we found that, not everywhere, but most of the differences haven&rsquo;t fundamentally changed our core value proposition to real estate lawyers. They want speed of response to clients. They want the quality to be as high as it could possibly be. They want a second pair of eyes on everything. And they want to ultimately reduce the risk</p><p>that they have. And so those fundamentals have been the same, and the product is built around them. We started in the UK, we&rsquo;ve now gone to the US, which is our biggest market, and those differences are slight. But if I had to tease out a few differences, one of them is InfoSec: ISO 27001 is much more prevalent here in the UK versus SOC 2,</p><p>and data residency. Obviously, if you have a service that&rsquo;s built in the UK, it stores data there and there was GDPR; now in the US, there are different requirements. So infosec requirements need to be different. The billable hour: a lot more prevalent in the US than in the UK. I think the UK has pushed really hard on this. 
There&rsquo;s a lot of fixed-fee work, especially in real estate. And so we did see a little bit of a difference there.</p><p>And then US law firms were initially a little bit more conservative. Obviously, the ABA ruling that came out on AI a while back meant law firms had to react to that in the US, so UK law firms were a little bit quicker off the blocks to adopt and use the product.</p><p>But I think a lot of that has been ironed out now. A lot of large US law firms have committees to handle that. So I think that&rsquo;s becoming less of a difference between the two regions. And then I think the big one is, I mentioned it earlier, title and survey review. That&rsquo;s a very US-centric piece. You get a title commitment policy. There are often hundreds of exceptions and linked documents that you need to download and review. And then part of this</p><p>geospatial piece of visualizing the property is heavily tied into how real estate lawyers work with title companies. And so we&rsquo;ve built out that offering more and more, and it&rsquo;s become a leading product in the US for what it does. So I&rsquo;d say that&rsquo;s probably one of the biggest differences.</p><p>Marlene Gebauer (33:46)<br>
So one of the things we talk about a lot on the podcast is how AI is going to affect up-and-coming people in the workforce. We enjoyed reading about your experience teaching a class of thirty 11-year-olds about AI at your son&rsquo;s school. And you mentioned that they had some surprisingly deep questions, like whether someone is actually coding you.</p><p>So what did that experience teach you about the future of work and creativity? And how should the geeks currently leading the industry prepare for the next generation of the AI-native workforce?</p><p>Andrew Thompson (34:23)<br>
Yeah, I love this question. That was two years ago, when my son was 11. I literally just celebrated his birthday a week early this weekend, and he&rsquo;s now 13. And two years in this AI world just feels like a lifetime ago. I went back and quickly looked at that presentation and was like, wow, okay, things have changed. So I&rsquo;ve had to update a few of my priors, but I&rsquo;ll pull out maybe three things from this.</p><p>Greg Lambert (34:28)<br>
God.</p><p>Andrew Thompson (34:49)<br>
As per usual, children often mimic their parents, especially at that young age, and I think their parents&rsquo; attitude to AI is often channeled through them. You can kind of pick that up, both when I was in that class and even now, when I&rsquo;ve done subsequent presentations or chatted to my son&rsquo;s friends. And maybe the piece to think about, for parents: we all sometimes fear the unknown.</p><p>There&rsquo;s this big thing that&rsquo;s happening. It&rsquo;s really exciting. There&rsquo;s lots of opportunity, but there&rsquo;s also lots of change afoot. And when I&rsquo;m chatting to my son&rsquo;s friends, you can almost see which ones have parents who are a little bit more on the paranoid side versus which ones see it more as an opportunity. And this dovetails into the second piece: adults have a huge amount of baggage when it comes to work having to be done a certain way.</p><p>You know, I&rsquo;ve got a lot of my career and reputation built on doing something in a bit more of a manual way. Children don&rsquo;t have any of that, right? They don&rsquo;t have that baggage. And so they get to come of age in a world where all of that just doesn&rsquo;t matter. Kind of like the internet. I grew up prior to the internet and then after, and I look at my son, and he knows nothing different. I think it almost puts them at a starting point ahead. Imagine you&rsquo;re running a race:</p><p>my son&rsquo;s 100 meters ahead of me immediately coming out of the gate. He can code up things now that I wasn&rsquo;t even doing in university. And he was doing that when he was 11. And so the starting point, it&rsquo;s just so great to see where they are at. For his birthday, he had one of his mates over to play around. 
And he&rsquo;s coded up a game. They play football in the attic, flicking these little football players and knocking balls around. And he&rsquo;s created a little app</p><p>that mimics commentary: it goes off to ElevenLabs with some text and comes back with, the rain is thundering down. I walk up there and have to tell the boys to go to bed, you know, it&rsquo;s 10 PM at night, and they are having the time of their lives. And again, that&rsquo;s play, and there&rsquo;s work. But it&rsquo;s really interesting to see. I get the luxury of having a foot in what&rsquo;s happening with software engineering, and that whole industry is changing. I get to see what&rsquo;s happening in real estate legal, and that whole industry is changing.</p><p>And then I get to see what my son is doing, and he&rsquo;s just loving it. He does not seem held back at all. There&rsquo;s nothing that he feels he can&rsquo;t do. He&rsquo;s constantly thinking about entrepreneurialism. And then one last piece to end with: he is constantly running out of tokens. You know, we no longer talk about this, but there was the AI bubble talk just a few months back.</p><p>Greg Lambert (37:19)<br>
I was gonna ask. It sounds expensive.</p><p>Marlene Gebauer (37:19)<br>
Hahaha.</p><p>Andrew Thompson (37:27)<br>
But when I look at my son, I pay, what is it, $20 a month for his plan. I feel like the $200 plan is probably a little bit too much for a 12-year-old. But he&rsquo;s constantly running out. And I thought, geez, he&rsquo;s probably in the top 1% of kids who are doing this, obviously because I&rsquo;m in AI. What happens if 100% of kids are using it? We just don&rsquo;t have enough tokens to go around. And I guess to me, it&rsquo;s really inspiring to see&hellip;</p><p>kids using it. And you can tell that by the time he goes to university and gets a job, he&rsquo;ll just be so AI-native. He&rsquo;ll understand the world in a way that most of us are still grappling with, whereas he doesn&rsquo;t have any of that.</p><p>Greg Lambert (38:07)<br>
Andrew, you talked a lot about, and I&rsquo;ll frame it as the Wayne Gretzky quote, skating to where the puck&rsquo;s going to be rather than where it is. When you talk about&hellip;</p><p>Andrew Thompson (38:15)<br>
Yes.</p><p>Greg Lambert (38:20)<br>
As you&rsquo;re building, you&rsquo;re looking at where the foundational models and the other advancements are going to be in six months rather than where they are right now. And that really takes a lot of knowing the industry, knowing what&rsquo;s coming. So I wanted to ask: what do you do personally to keep up with things? Are there one or two must-read or must-listen resources</p><p>that help you predict where things are going to be in six months?</p><p>Andrew Thompson (38:52)<br>
Yeah, absolutely. I feel like I just have a fire hose connected to my brain, and I have to filter what&rsquo;s important and what&rsquo;s not. That&rsquo;s obviously important to my job, but it does feel like&hellip;</p><p>Greg Lambert (39:04)<br>
But just</p><p>get the AI to summarize everything and then inject it straight into your brain. There we go.</p><p>Marlene Gebauer (39:09)<br>
All</p><p>Andrew Thompson (39:10)<br>
You know what, there&rsquo;s actually some truth to that, to be fair. My latest thing that I love: getting up on Saturday morning, grabbing a coffee, and listening to Harry Stebbings of 20VC, here in the UK but very globally focused, and obviously a huge amount of the development in AI is happening there. He interviews lots of one-off guests, but there&rsquo;s a recurring podcast between him, Rory O&rsquo;Driscoll, who&rsquo;s a VC, and Jason Lemkin, who&rsquo;s an ex-founder and VC.</p><p>And I just love the three of them. They&rsquo;re all coming from very different vantage points, and there&rsquo;s that open, healthy debate. One person says, this is brilliant, or this is the end of the world, or this is great. They&rsquo;re constantly chiming in, but it&rsquo;s less political. They actually have a rule of, we don&rsquo;t talk politics, let&rsquo;s just talk tech and what this means for the average developer, the average investor. That I find both entertaining and really good weekly insight into what&rsquo;s happening. And they discuss a lot of&hellip;</p><p>you know, even the topic that&rsquo;s the theme of this chat. I think the other piece is X, which is a fantastic place where a lot of people right at the forefront of AI are putting out opinionated takes, people are commenting, and there&rsquo;s a lot of back and forth. The first bit of the fire hose seems to be on X. And, at least for this audience, Aaron Levie, the CEO of Box.</p><p>He takes some incredible insights and distills them into easy-to-read paragraphs that really tell you what&rsquo;s happening in the world, how software engineering is changing, what CIOs are looking at. And again, for this audience, if you want to see where the bleeding edge is, look at software engineering. 
Not just because software engineers like to adopt things, but because of the training data: there&rsquo;s so much code in the world that has been ingested, and you can see how far ahead it is as a result. And then the last person that really fits into this,</p><p>Boris Cherny, who created Claude Code; he&rsquo;s recently got onto X. And if there was ever somebody whose head to get inside, to understand what&rsquo;s been built, what&rsquo;s happening, where the world is going next, he&rsquo;s a fantastic resource who posts on X on a daily or weekly basis. And I read everything he&rsquo;s got, to get a bit of a sniff test as to where the world is going.</p><p>Greg Lambert (41:22)<br>
Thanks.</p><p>Marlene, you&rsquo;re muted.</p><p>Marlene Gebauer (41:24)<br>
Sorry about that. So we have come to the time in the podcast where we&rsquo;re going to do the crystal ball question, Andrew. So, looking ahead three to five years, when every real estate lawyer has a bunch of proactive AI agents at their disposal, what&rsquo;s the single biggest shift you see coming for the traditional billable hour and the way property professionals actually derive value from their labor?</p><p>Andrew Thompson (41:51)<br>
Hmm. Another one of these questions. This is the question a lot of people are asking. I was at Legal Week a month or so ago, and this was one of the big questions, around the billable hour and things. But maybe to start with the time frame: I think three to five years is, like, it&rsquo;s coming sooner. This idea of humans being able to&hellip;</p><p>orchestrate a huge number of AI agents to do far more work than was possible. I&rsquo;m already seeing it with engineers on my team, back to this point around, look at what software engineers are doing, and probably that&rsquo;s coming for lots of other industries. And so the way I think about this: raw cognitive ability, so intelligence, and I know AI is more intelligent than us in certain things, less in other things, but average raw cognitive ability;</p><p>knowledge of the market, and this is particularly interesting in legal, depending on where the data comes from; and then just the speed at which you can perform work, whether it&rsquo;s writing code or reviewing a 200-page lease. All of those things are getting democratized at an incredible clip. METR, M-E-T-R, is a research group that tracks the exponential increase in how much work</p><p>an equivalent AI system can do that took a human N hours. When it started out, it was minutes, then half an hour, and it&rsquo;s now, I believe, somewhere between more than half a day and more than a day&rsquo;s worth of work, and Opus 4.7 has just come out. And so if we look at a world where all those things get democratized, I think lawyers have always played in a world where they&rsquo;re competing with other clever, knowledgeable,</p><p>and faster humans or teams to give their clients the best possible service. 
And I think now we&rsquo;ve just layered in this new thing that takes some of the things humans can do and is much better at them, but is also not as good at other things. So I don&rsquo;t have one silver-bullet answer here, but from the things I have heard, talking to our customers, being at Legal Week: human empathy seems to come up more and more. AI can mimic empathy,</p><p>but it doesn&rsquo;t actually have true empathy. Being accountable, again, really important in a regulated domain where there&rsquo;s insurance involved. As of yet, most AI systems can&rsquo;t be regulated, they&rsquo;re not insured. And even if they are, let&rsquo;s take planes, for example. I originally wanted to be a pilot. Boeing 747s can fly themselves, if you program them. But even though I know it logically, I don&rsquo;t think I want to put myself and my family on one of those planes</p><p>without a human pilot in the seat. Maybe that will change in the future, but I think having humans be accountable is important. And then this third one, this idea of instructing or orchestrating AI. Previously, if I&rsquo;m a manager, I orchestrate humans, and my value, what I&rsquo;m adding, is how effectively I orchestrate humans. Now I have an extra tool in my toolbox: I&rsquo;m orchestrating humans, but I&rsquo;m also orchestrating tokens.</p><p>And there&rsquo;s this really interesting interplay between the two. And so, again, I don&rsquo;t know what&rsquo;s going to happen with the billable hour. There was so much debate at Legal Week. Why did it exist to begin with? Clients wanted it; that&rsquo;s why we brought it in. Now clients are asking to take it away. 
And it&rsquo;s been incredibly persistent, year after year after year of people predicting that the billable hour is going away. There is an inherent conflict with AI, because it is making the work more efficient. But I think, you know,</p><p>humans will be valuable, but in what way? I think we&rsquo;re all figuring that out, and I&rsquo;ve given a couple of ideas as to maybe where that is. But this is an ongoing debate that we&rsquo;re all having, and I think we&rsquo;ll figure it out. It&rsquo;s a work in progress as we go.</p><p>Greg Lambert (45:38)<br>
All right, well, Andrew Thompson, CTO at Orbital, thank you very much for coming in, and I really appreciate you taking the side trip to talk VLMs and LLMs with me.</p><p>Andrew Thompson (45:49)<br>
It&rsquo;s a pleasure.</p><p>Marlene Gebauer (45:51)<br>
Yeah, thank you, Andrew. And thanks to you, our listeners, for listening to the Geek in Review podcast. If you enjoyed the show, please share it with a colleague. We&rsquo;d love to hear from you on LinkedIn and Substack.</p><p>Greg Lambert (46:02)<br>
And Andrew, for our listeners who want to follow your engineering mantras or learn more about Orbital, where&rsquo;s the best place for them to connect with you?</p><p>Andrew Thompson (46:12)<br>
Yeah, great. Head over to orbital.tech. That&rsquo;s our website and the tech blog is linked from there. We&rsquo;ve got lots of content on the mantras and everything else and some of the sort of interesting things we&rsquo;ve been working on over the years and some of the really interesting things that are probably going to drop fairly soon.</p><p>Marlene Gebauer (46:28)<br>
And as always, the music you hear is from Jerry David DeSica. Thank you, Jerry, and goodbye, everybody.</p><p>&nbsp;</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>Spoiler Alert: Legal Marketing’s Next Evolution is Agentic and Product-Led</title>
		<link>https://www.geeklawblog.com/2026/04/spoiler-alert-legal-marketings-next-evolution-is-agentic-and-product-led.html</link>
		
		
		<pubDate>Fri, 24 Apr 2026 17:11:43 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[legal marketing]]></category>
		<category><![CDATA[LMA]]></category>
		<category><![CDATA[Product Marketing]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19255</guid>

					<description><![CDATA[Earlier this week, I attended the 2026 Legal Marketing Association Annual Conference  in New Orleans. By all accounts, it was a success—great energy, strong attendance, and a clear signal that legal marketing is in the middle of a real transformation. The sessions reflected it: legal operations, client intelligence, AI, change management, video. The conversation in... <a href="https://www.geeklawblog.com/2026/04/spoiler-alert-legal-marketings-next-evolution-is-agentic-and-product-led.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>Earlier this week, I attended the <a href="https://www.legalmarketing.org/">2026 Legal Marketing Association Annual Conference&nbsp; </a><a href="https://www.legalmarketing.org/">in New Orleans</a>. By all accounts, it was a success&mdash;great energy, strong attendance, and a clear signal that legal marketing is in the middle of a real transformation.<img style=" max-width: 100%; height: auto; " fetchpriority="high" decoding="async" class="alignright size-medium wp-image-19256" src="https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-320x182.jpg" alt="" width="320" height="182" srcset="https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-320x182.jpg 320w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-240x136.jpg 240w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-40x23.jpg 40w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-80x45.jpg 80w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-160x91.jpg 160w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-550x313.jpg 550w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-367x209.jpg 367w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-275x156.jpg 275w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-220x125.jpg 220w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-440x250.jpg 440w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-184x105.jpg 184w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-138x78.jpg 138w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-413x235.jpg 413w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-123x70.jpg 123w, 
https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-110x63.jpg 110w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-330x188.jpg 330w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-300x170.jpg 300w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-600x341.jpg 600w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-207x118.jpg 207w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-344x196.jpg 344w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-55x31.jpg 55w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-71x40.jpg 71w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026-95x54.jpg 95w, https://www.geeklawblog.com/wp-content/uploads/sites/528/2026/04/LMA-2026.jpg 644w" sizes="(max-width: 320px) 100vw, 320px"></p><p>The sessions reflected it: legal operations, client intelligence, AI, change management, video. The conversation in every room was heightened and exciting. <a href="https://conference.legalmarketing.org/About/Annual-Conference-Advisory-Committee">I was even on the ACAC (shout out the best Co-Chairs, President, Staff and committee!).</a></p><p>And yet, something still felt missing.</p><p>Not more tools. Not more tactics.</p><p>What&rsquo;s missing is a shift in <strong>operating model</strong>.</p><p><strong>Legal Marketing is Advancing&mdash;But Still Disconnected</strong></p><p>Legal marketing teams are doing more than ever&mdash;supporting sophisticated BD efforts, producing targeted campaigns, scaling thought leadership, and experimenting with AI.</p><p>But much of it is still fragmented:</p><ul>
<li>Campaigns disconnected from long-term positioning</li>
<li>Messaging that shifts by practice or partner</li>
<li>Client intelligence that isn&rsquo;t operationalized</li>
<li>AI used tactically, not systemically</li>
</ul><p>We&rsquo;re moving faster&mdash;but not always more coherently.</p><p><strong>The Missing Layer: Product Marketing Discipline</strong></p><p>What&rsquo;s coming next&mdash;likely accelerating into 2027&mdash;is a shift toward a <strong>product marketing ethos for legal marketing</strong>.</p><p>Not because law firms become product companies&mdash;but because the <strong>problems product marketing solves are now legal marketing&rsquo;s problems</strong>.</p><p>At its core, product marketing brings structure to go-to-market:</p><ul>
<li>Market intelligence and validation</li>
<li>Clear value propositions</li>
<li>Consistent messaging and positioning</li>
<li>Enablement of front-line teams (a.k.a. lawyers)</li>
</ul><p>Legal marketing already touches all of these&mdash;but rarely as a <strong>cohesive, repeatable system</strong>.</p><p>This isn&rsquo;t about more content. It&rsquo;s about <strong>clarity, consistency, and scalability</strong> in how firms go to market.</p><p><strong>Why This Matters Now</strong></p><p>The traditional model&mdash;relationships, reputation, responsiveness&mdash;is under pressure.</p><p>Buyers now expect:</p><ul>
<li>Clear articulation of value&mdash;especially in the #AIEra</li>
<li>Industry-specific insight</li>
<li>Differentiation beyond credentials</li>
<li>Faster, more tailored engagement</li>
</ul><p>At the same time, firms are expanding into <strong>repeatable offerings</strong>&mdash;managed services, alternative delivery models, and more structured solutions.</p><p>That combination demands something new:</p><p>A disciplined, scalable approach to how firms define and deliver value to the market.</p><p><strong>From AI Tools to Agentic Workflows</strong></p><p>Most AI adoption in legal marketing today is still tool-based&mdash;drafting, summarizing, automating tasks.</p><p>Helpful, but incremental.</p><p>The real shift is toward <strong>agentic AI workflows</strong>&mdash;systems that can:</p><ul>
<li>Continuously monitor client industries and trigger insights</li>
<li>Adapt messaging dynamically</li>
<li>Assemble pitches grounded in validated value propositions</li>
<li>Enable lawyers with real-time, tailored talking points</li>
<li>Learn from outcomes and improve over time</li>
</ul><p>But these systems only work with structure and reliable, clean data.</p><p>Without clear positioning, messaging, and audience definition, AI just scales inconsistency.</p><p>With it, AI becomes a strategic execution layer.</p><p><strong>The Convergence That Changes the Model</strong></p><p>This is why the next evolution of legal marketing isn&rsquo;t just AI adoption&mdash;it&rsquo;s the convergence of:</p><p><strong>Product marketing discipline + agentic AI execution</strong></p><p>Together, they shift legal marketing from:</p><ul>
<li>Campaigns &rarr; Systems</li>
<li>Content &rarr; Intelligence</li>
<li>Support &rarr; Enablement</li>
<li>Reactive &rarr; Proactive</li>
</ul><p>Marketing doesn&rsquo;t just support growth&mdash;it helps <strong>systematically create it</strong>.</p><p><strong>What This Looks Like in Practice</strong></p><p>In the near future, leading firms will operate with:</p><ul>
<li>Continuous client and market intelligence feeding BD efforts&mdash;I have been trying to get the industry here for years. Today&rsquo;s tech makes my last 15 years of effort a wash.</li>
<li>Messaging that is consistent but dynamically applied</li>
<li>Pitches and proposals built from validated value frameworks</li>
<li>Lawyers equipped with tailored insights before every interaction</li>
<li>Thought leadership driven by real client pain, not just editorial calendars</li>
</ul><p>The building blocks already exist.</p><p>What&rsquo;s missing is the integration.</p><p><strong>This Isn&rsquo;t About Productizing Law</strong></p><p>There will be pushback.</p><p>&ldquo;We&rsquo;re not a product company.&rdquo;<br>
&ldquo;Our work is bespoke.&rdquo;<br>
&ldquo;Our partners won&rsquo;t adopt this.&rdquo;</p><p>But this isn&rsquo;t about productizing legal work. That&rsquo;s already happening thanks to AI and process automation.</p><p>It&rsquo;s about <strong>productizing how firms go to market</strong>&mdash;how they define value, communicate it, and deliver it consistently, especially as the needs of buyers are shifting under pricing pressure and AI engagement.</p><p>A move to agentic PMM doesn&rsquo;t remove nuance. It scales it.</p><p>Better go-to-market doesn&rsquo;t replace relationships. It strengthens them.</p><p><strong>The Competitive Reality</strong></p><p>The conversations at LMA made one thing clear: legal marketing is ready for its next phase.</p><p>But that phase won&rsquo;t be defined by who uses the most AI tools.</p><p>It will be defined by who builds the <strong>most effective go-to-market systems</strong>.</p><p>As commercial models in the legal industry continue to evolve&mdash;toward more structured offerings, pricing innovation, and increased competition from alternative providers&mdash;firms will need more than strong relationships and good marketing.</p><p>They&rsquo;ll need <strong>repeatable, intelligent, and scalable ways to compete</strong>.</p><p>That&rsquo;s why the shift to an <strong>agentic, product-led legal marketing model</strong> matters.</p><p>Because in the next phase of legal marketing, it isn&rsquo;t about being more relevant.</p><p>It&rsquo;s about staying <strong>competitive</strong> to effectively win more client work in a transformational market.</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>Greg Mazares Sr. on AI, E-Discovery, and the Future of Human-Led Legal Services</title>
		<link>https://www.geeklawblog.com/2026/04/greg-mazares-sr-on-ai-e-discovery-and-the-future-of-human-led-legal-services.html</link>
		
		
		<pubDate>Mon, 20 Apr 2026 02:01:46 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[billable hour]]></category>
		<category><![CDATA[data security]]></category>
		<category><![CDATA[document review]]></category>
		<category><![CDATA[e-discovery]]></category>
		<category><![CDATA[engineered intelligence]]></category>
		<category><![CDATA[legal AI]]></category>
		<category><![CDATA[Legal Innovation]]></category>
		<category><![CDATA[legal operations]]></category>
		<category><![CDATA[litigation support]]></category>
		<category><![CDATA[podcast]]></category>
		<category><![CDATA[purpose legal]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19251</guid>

					<description><![CDATA[This week on The Geek in Review, we talk with Greg Mazares Sr., CEO of Purpose Legal, about what it takes to lead through one of the most important transition periods in legal services. Drawing on decades of experience across business, litigation support, and e-discovery, Mazares brings a steady, practical view to a market flooded... <a href="https://www.geeklawblog.com/2026/04/greg-mazares-sr-on-ai-e-discovery-and-the-future-of-human-led-legal-services.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>This week on The Geek in Review, we talk with <a href="https://www.purposelegal.io/leadership/greg-mazares/">Greg Mazares Sr.,</a> CEO of <a href="https://www.purposelegal.io/">Purpose Legal,</a> about what it takes to lead through one of the most important transition periods in legal services. Drawing on decades of experience across business, litigation support, and e-discovery, Mazares brings a steady, practical view to a market flooded with AI claims and rapid change. His message is clear from the start. The legal industry has faced major shifts before, from paper banker boxes to digital workflows, and this moment is another chapter in that longer story. Rather than treating AI as a threat, he sees it as a tool for adaptation, growth, and smarter client service.</p><p>A central theme in the conversation is Mazares&rsquo; belief that AI works best when paired with people and disciplined process. He argues that the future does not belong to technology alone, but to organizations that know how to combine tools, talent, and operational rigor. That philosophy sits behind Purpose Legal&rsquo;s acquisition of Hire Counsel and its broader push to reunite technology and staffing under one roof. In Mazares&rsquo; view, clients do not simply want software. They want experienced professionals who know how to apply AI in defensible, repeatable ways that improve outcomes without sacrificing judgment.</p><p>The discussion also highlights Purpose Legal&rsquo;s new offerings, including Purpose Xi and Case Optics, which aim to deliver early case insights in days rather than weeks. What makes Mazares&rsquo; framing stand out is his insistence that speed alone is not the point. Faster results matter only when paired with expert validation, tested workflows, and credible guardrails. 
He describes a legal market where clients once assumed AI would let them bring everything in-house, but now increasingly value outside experts who bring both technological fluency and hard-earned experience. That shift, he suggests, is elevating service providers from operational support teams to strategic partners embedded more deeply in legal work.</p><p>Greg and Marlene also press Mazares on data security, client trust, and the cultural pressures that come with rapid growth. Here again, his answers return to discipline and execution. He points to major investments in cloud security, around-the-clock protection teams, and tighter controls over on-site review environments. He also argues that many of the greatest risks still come from human behavior, which makes vetting, supervision, and protocol design as important as any technical control. On culture, Mazares emphasizes recognition, communication, and adaptability as the backbone of a company that wants to grow without losing its identity. For him, scaling a business is not only about revenue. It is about building a place where people feel seen, trusted, and prepared for change.</p><p>The episode closes on a thoughtful look at the next few years for litigation, junior associates, and the billable hour. Mazares predicts that junior lawyers will not disappear, but their role will shift toward becoming guides in the use of AI, both inside firms and in conversations with clients. As routine work becomes more compressed, he expects associates to provide higher-value service in fewer hours, with stronger technical fluency and a more consultative posture. It is a fitting end to an episode grounded in realism rather than hype. Mazares does not present AI as magic, and he does not dismiss its significance either. 
Instead, he offers a view of the future shaped by adaptability, experience, and the belief that in legal services, the winning formula still comes down to people, process, and sound judgment.</p><p><strong>Listen on mobile platforms:&nbsp;</strong><a href="https://podcasts.apple.com/us/podcast/the-geek-in-review/id1401505293">Apple Podcasts</a>&nbsp;|&nbsp;<a href="https://open.spotify.com/show/53J6BhUdH594oTMuGLvANo?si=XeoRDGhMTjulSEIEYNtZOw">Spotify</a>&nbsp;|&nbsp;<a href="https://www.youtube.com/@thegeekinreview">YouTube</a>&nbsp;|&nbsp;<a href="https://thegeekinreview.substack.com/">Substack</a></p><p>[Special Thanks to <a href="https://www.legaltechnologyhub.com/">Legal Technology Hub</a> for sponsoring this episode.]</p><p><iframe title="Spotify Embed: Greg Mazares Sr. on AI, E-Discovery, and the Future of Human-Led Legal Services" style="border-radius: 12px" width="100%" height="152" frameborder="0" allowfullscreen allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy" src="https://open.spotify.com/embed/episode/5TpRcpcR4G3t6TCsdSgdXN?si=9iMiulH4ShqNCKIcFHKyCQ&amp;utm_source=oembed"></iframe></p><p><a href="https://www.youtube.com/watch?v=jINNI_zhQDE"><img style=" max-width: 100%; height: auto; " src="https://www.geeklawblog.com/wp-content/uploads/sites/528/embed_thumbs/jINNI_zhQDE.png"></a></p><p>Email: geekinreviewpodcast@gmail.com<br>
Music: Jerry David DeCicca</p><h5>Transcript:</h5><p><span id="more-19251"></span></p><p>Marlene Gebauer (00:00)<br>
Hey everyone, I&rsquo;m Marlene from The Geek in Review and I have Sam Moore here from Legal Tech Hub. And Sam, I think you&rsquo;re going to tell us about AI enablement advisory projects. All right, let&rsquo;s hear it.</p><p>Sam Moore (00:11)<br>
Mm-hmm. Yep, that&rsquo;s correct.</p><p>Thank you, Marlene. So one of the requests that we&rsquo;ve been seeing quite commonly recently on the advisory side of Legal Tech Hub is what we&rsquo;re calling AI enablement. This means we&rsquo;re assessing a law firm or law department&rsquo;s AI positioning, their readiness, existing use, if any. And we&rsquo;re seeing this come up in a couple of interesting different ways. Firstly, when a firm is ready to make a commitment and investment into an AI tool or platform,</p><p>and they want to make sure they have the best possible experience and see early ROI if they can. They can engage us to run sprints with various teams in the firm to find those early use cases, help refine some workflows, and come up with a roadmap of where to start and where they should be trying to get to. And sometimes those sprints reveal really interesting differences between management perception and attorneys&rsquo; actual readiness, which can be really valuable. And the second way we&rsquo;re seeing this come up</p><p>is firms who were early adopters of AI. So maybe they bought into a tool a year ago, 18 months ago, and they&rsquo;re coming up on renewal. And they&rsquo;re looking really carefully at the cost and the benefits. And they&rsquo;ll engage us to run a similar kind of sprint format with a different discovery focus, and we&rsquo;ll help them surface where that existing tool is and isn&rsquo;t generating value for the business, help the firm understand why that might be,</p><p>and then provide them with the kind of objective understanding needed to really inform that go, no-go decision on renewal. And those subscriptions can be quite big numbers. So being confident in the renewal decision is really important. 
And we&rsquo;re finding that those AI enablement projects often lead into further work around AI procurement, which I think is a really encouraging sign that the industry is moving beyond the FOMO and the push to buy something now and</p><p>figure out how to use it later. And instead, I think we&rsquo;re now taking a more nuanced and measured approach that goes beyond the hype, and that could only be a good thing.</p><p>Marlene Gebauer (02:14)<br>
It sounds like a really important advisory service that you&rsquo;re offering because it&rsquo;s really important to kind of have a third party give an impartial review and an opinion on this. So, sounds great.</p><p>Sam Moore (02:29)<br>
Fantastic. Thank you.</p><p>Marlene Gebauer (02:37)<br>
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I&rsquo;m Marlene Gebauer.</p><p>Greg Lambert (02:44)<br>
I&rsquo;m Greg Lambert. Marlene, you know, we&rsquo;ve brought in a bunch of the new kids on the block recently with all the legal tech, but the new kids are not old enough to know the New Kids. So, today we&rsquo;re going to bring on a veteran who has successfully navigated the industry in the era all the way back when we were</p><p>Marlene Gebauer (02:53)<br>
I hope the new kids actually know that reference.</p><p>That&rsquo;s all right.</p><p>Greg Lambert (03:08)<br>
pulling up all those paper banker boxes all the way to the current generative AI phase.</p><p>Marlene Gebauer (03:11)<br>
Mm-hmm.</p><p>We&rsquo;re thrilled to welcome Greg Mazares, Sr., the CEO of Purpose Legal.</p><p>Greg is a Harvard MBA and has over 45 years of executive experience building and scaling companies in consumer products, financial services, and litigation support.</p><p>Greg Lambert (03:32)<br>
And Greg&rsquo;s an executive who likes to describe himself as a student of the game who has applied classic strategic frameworks to the evolution of e-discovery. So Greg, welcome to The Geek in Review.</p><p>Greg Mazares (03:44)<br>
Thank you. It&rsquo;s great to be with you both.</p><p>Marlene Gebauer (03:47)<br>
So Greg, you have a unique daily routine. We&rsquo;ve heard that you start every morning with an hour of research. So, reading everything from legal tech news to McKinsey white papers while keeping Michael Porter&rsquo;s Five Forces top of mind. So in an industry that often feels like it&rsquo;s reinventing itself every six months,</p><p>Greg Lambert (04:00)<br>
Wow.</p><p>Marlene Gebauer (04:07)<br>
how does grounded historical study and classic economic theory help you cut through AI hype and identify real strategic opportunities?</p><p>Greg Mazares (04:16)<br>
I would say that the most important thing to me is not to be afraid or concerned about change. And I read looking for ways to adapt, looking for ways other people are adapting,</p><p>and trying to see where trends are going as opposed to looking for reasons to get out of doing the things that have helped us be successful through the years. I&rsquo;ve been thinking about AI quite a bit. I remember back when I started my career at a Carnation company, a subsidiary of Nestl&eacute;, we had six members of the team who were working on what I would call entry-level typewriters. And then suddenly one of the senior</p><p>executives decided to bring in a very large IBM Selectric, you know, the newest piece of great technology. And they cut six people down to one person to support the entire marketing department at that point in time. But the other thing I learned is that the other five people, rather than eliminating them, they were redeployed in other roles. And so I look at that and then we jump to what has transpired in</p><p>where we started, you talked about banker boxes and paper and people. It was amazing. We&rsquo;d run into each other crossing the street, you know, which boxes were going to collide with other boxes, so forth, so on. We moved from that into technology, TIFF images, all of the above. And once again, we had to adapt and we had to change. And so when I&rsquo;m looking at, in the case of AI, I see AI as an amazing opportunity,</p><p>not a problem. And all of the people who have predicted doom and gloom for our industry, I just don&rsquo;t agree with that. I think there are strategic ways to embrace AI and embrace the AI age and help our clients in ways we&rsquo;ve never been able to do so before. So I read everything that I can, but I&rsquo;m somewhat focused. I also</p><p>look at what the competition&rsquo;s doing. I look at other industries and how they&rsquo;re applying AI. It&rsquo;s not just about the legal industry. 
It&rsquo;s about general management more broadly. And I think there are great opportunities right now. I&rsquo;m as excited today as I was when I started in the legal field in 1988. And I think it&rsquo;s just a new age for us all.</p><p>Greg Lambert (06:50)<br>
Well, speaking of Carnation, it reminded me my uncle was a Carnation delivery driver for like 30 years in Arizona. So I&rsquo;ve got some family ties there as well. Well, Greg, it sounds like you&rsquo;re like us, you&rsquo;re a lifelong learner. You&rsquo;re constantly trying to look at new information, but not chase the trend. But one of the things that you did in early 2026</p><p>is Purpose Legal acquired Hire Counsel, which brought a network of, I think it was over 70,000 vetted attorneys into the fold. And you described this as an inflection point where technology providers and staffing firms have moved too far apart. So why is it that you feel like the market needed to bring back in a people layer and the technology layer and put those back</p><p>together under one roof? And what does the modern legal services provider look like now that these two layers are unified?</p><p>Greg Mazares (07:55)<br>
I think the most important thing is the realization that AI by itself, it&rsquo;s not all about the technology. The age of AI is about technology, people, and process. And probably process working in conjunction with great people</p><p>is going to turn out to be the most important aspect of AI. That&rsquo;s because we have those that have totally embraced it, and we have those that are on the other end of the spectrum that are fearful of it. And then we have some that are dabbling in it. They&rsquo;re not sure which way to go just yet. But I think the combination of great people working with and implementing impeccable processes using AI, that&rsquo;s going to be the winning formula for everybody going forward.</p><p>Greg Lambert (08:44)<br>
How do you spin it so, because a lot of us talk about thinking in a workflow where AI first, where you think of what is it that the tools, the automation, can do for you to get you to the starting point, a little further along from where your starting point is. How is it that, how do you explain this to the people on how to</p><p>integrate the AI tools or automation tools or the process into their workflow in a way that makes sense for them?</p><p>Greg Mazares (09:17)<br>
First and foremost, I think we need to look at are there tasks that are relatively simple and easy and almost machine-like that AI can replace for people? Can we free people&rsquo;s time to work on higher level tasks and higher level work product, et cetera? So one of the ways that AI can be incorporated is in things that the clients see.</p><p>Another way is the things that clients don&rsquo;t see that we have to do each and every day. If we can free up some of the tasks, many of the tasks that we do when we&rsquo;re working on our own,</p><p>in order to create time that we can spend in helping our clients, helping our colleagues, et cetera, at a higher level, then I think the use of AI is really important in any business, any organization at any time. So it&rsquo;s not AI or people, it&rsquo;s AI plus people,</p><p>utilizing great process that I think is going to carry the day.</p><p>Marlene Gebauer (10:30)<br>
So I&rsquo;m going to add one more pillar to your people, process, tech. I&rsquo;m going to throw in knowledge. And I want to preface the question by noting that today Harvey announced that they basically have an agentic workflow now incorporated into their product. It&rsquo;s like from start to finish, you tell it what you want to do. You tell it what the result is. And it will do it now.</p><p>Greg Lambert (10:48)<br>
Everybody&rsquo;s got an agentic workflow now.</p><p>Marlene Gebauer (10:59)<br>
I haven&rsquo;t seen this yet, but you&rsquo;ve introduced a concept called engineered intelligence, which powers your new Purpose XI. Is that how you&hellip; Purpose&hellip;</p><p>Greg Mazares (11:08)<br>
Purpose XI,</p><p>and then one of the modules of that that we have introduced up front is Case Optics.</p><p>Marlene Gebauer (11:18)<br>
Okay, so if I&rsquo;m understanding this correctly, this codifies a human intelligence plus AI philosophy. So it rejects the idea of AI as a standalone substitute for human judgment. So with the launch of Purpose XI and Case Optics, you&rsquo;re promising early case insights in days rather than weeks, which is great. How do you practically ensure that speed doesn&rsquo;t compromise defensibility,</p><p>and why is this expert validation an essential guardrail for today&rsquo;s litigation?</p><p>Greg Mazares (11:49)<br>
That&rsquo;s a great point. I think if it were technology by itself and we&rsquo;re talking about speed, that&rsquo;s one thing. If we&rsquo;re talking about speed using amazing technology, and there are so many choices when we&rsquo;re talking about AI, I think we&rsquo;d all agree there&rsquo;s not one solution. There&rsquo;s a myriad that are out there and there are many that are very, very good. But when you have people that have had experience and understand how to use the technology</p><p>to incorporate excellent judgment into what we&rsquo;re doing, then that is another layer of value that we&rsquo;re adding to clients. So clients used to hire us and other companies</p><p>because of the people and their belief in the people. Now we have people that can be supercharged with the help of new technologies. And these experts are able to develop workflows that are tested, that are true, that are defensible in nature, that reduce risk. And the combination of the two is so powerful. And we now have many clients that say to us, we thought</p><p>that when AI came out, we weren&rsquo;t going to need you anymore after a certain point in time. You would teach us and then we could just go off and we could do it all by ourselves in-house. Now they&rsquo;re saying, we think it&rsquo;s going to be a long time, if not forever, that we&rsquo;re going to need to work with people like you because you add a layer of experience, judgment, and you give us the guardrails and the</p><p>defensibility that we need to be even more efficient and effective in the work that we do. So we&rsquo;re viewing it now as a partnership in which we work with our clients. Our people, who are very experienced, have worked on multiple projects, have made mistakes, have learned from the mistakes, have avoided mistakes, because people will always, using judgment, they will</p><p>refine what they do over time. 
The fact of the matter is the clients have more confidence in having experts who can basically elevate their game using this very powerful technology. They are more confident working as a team. So in effect, it&rsquo;s elevated the kind of work that we&rsquo;re doing. We&rsquo;ve created new services and we have people that are doing new and different and better things that are helping</p><p>our clients in such a way, it&rsquo;s giving them even more confidence in the technology than if they were off on their own trying to utilize it. And don&rsquo;t get me wrong, there are brilliant clients everywhere you look, but they like to have another expert add a layer of confidence. When I talk about defensibility and we talk about guardrails, we&rsquo;re talking about</p><p>clients who want to use the technology, but want to also know that they&rsquo;re not going to be criticized. They&rsquo;re not going to make a fatal mistake and that they can use it over and over and over again in newer and better ways as they gain experience. So it is something that&rsquo;s going to take some time. I would call it an evolution rather than a revolution. And everybody&rsquo;s learning as we go along.</p><p>In some cases, it&rsquo;s being adopted very quickly. In other cases, there&rsquo;s a level of hesitancy to go all in. And for this reason, I would suggest that our industry has changed, but it&rsquo;s not changing all at one time. It&rsquo;s something that&rsquo;s going to happen over many years. And it will be different for some versus others. So we&rsquo;re excited. We look at it as a great opportunity to have our many</p><p>talented, experienced, proven people be able to help our clients advance their practices in ways that they haven&rsquo;t experienced before.</p><p>Marlene Gebauer (16:03)<br>
Yeah, so all of us are smarter than one of us, right?</p><p>Greg Lambert (16:03)<br>
Greg, I&hellip;</p><p>And just to follow up on that, Greg, I think, you know, and I&rsquo;ve seen it with our vendor partners, that it used to be, you&rsquo;d be like set this up, get us ready to go, train us, and then turn us loose. And then we might have monthly or quarterly follow-ups on it. But now I&rsquo;m seeing almost weekly, daily, almost interactions with our</p><p>Greg Mazares (16:08)<br>
I think.</p><p>Greg Lambert (16:34)<br>
vendor partners on this. Are you seeing that there&rsquo;s a much tighter relationship between you and your customer clients, and how are your people kind of adjusting to that? Because it&rsquo;s got to be a little bit of a different workflow over the past couple of years now.</p><p>Greg Mazares (16:52)<br>
It is, and it&rsquo;s a different feeling and a different role. Right now, a lot of our people who are the experts in AI, such as our CIO, who heads up this whole area for us, his name is</p><p>Jeff Johnson, he&rsquo;s based out of Dallas and he does an amazing job. Jeff is actually viewed as an extension of the law firm team. And they almost view it as they don&rsquo;t want to go to war without having this type of expertise working with them. So we love this, it&rsquo;s elevated everyone&rsquo;s game. We&rsquo;re probably doing less of the more elementary work</p><p>that we might have done in the past, and we&rsquo;re doing some higher level consulting work right now. We have an advisory group that works with clients depending upon their specific needs in a lot of different areas. So what I&rsquo;m really talking about is this is part of the opportunity. Instead of doing lots and lots of elementary work, we&rsquo;re now doing quite a bit, if not the same quantity, of higher level work. And so we, like everybody</p><p>else, have to adapt to providing services and making money in different ways. And it&rsquo;s exciting, scary, exciting, all of the above. But boy, it gets you motivated to get up each day and find ways to do a great job on behalf of our clients and on behalf of our people. If we take care of our people and we take care of our clients, we will continue to build a winning organization.</p><p>Greg Lambert (18:27)<br>
Well, not to throw a wet towel on the conversation here, but we got to talk about data security and all the fun stuff that comes along with it. So we&rsquo;re in this era now of a post-zero trust environment. So,</p><p>what are you doing and how is Purpose Legal re-engineering the discovery process to ensure that the data sovereignty is there while still allowing for this massive scale required for things like the HSR second request, things like that, that you and your customers run into with security issues?</p><p>Greg Mazares (19:11)<br>
Well, one of the things that&rsquo;s changed versus years past, and what you&rsquo;re bringing up is such an important point and a great question,</p><p>is that so much of the data, of course, is now stored in the cloud as opposed to on-premise. Big, big change. We&rsquo;re working with organizations such as Relativity, Everlaw, 4iG, I can go on and on and on, who have with us set up security mechanisms that are actually better than they have been before.</p><p>We have invested heavily in putting together a large technology group that works around the clock. We have people both in the US and outside, employees who have been part of our team, to help us protect the data. We also want to make sure that</p><p>we basically haven&rsquo;t had a breach. We&rsquo;ve seen attempts, as everybody will, at security breaches. We have been able to work hard by setting up mechanisms in advance that will prevent penetration, for example. So we spend a lot of money on data security because clients need to go to sleep at night knowing 100 percent. I mean, it&rsquo;s not a 50 percent chance.</p><p>It&rsquo;s 100 percent. They need to know that we&rsquo;re doing everything we can to protect data. But again, I make the point that so much of the data these days is stored in the cloud. Much of the data is also stored on client systems in certain cases where clients have invested in their own instances for data storage. So data is absolutely moving away from on-premise to cloud environments that are highly protected,</p><p>both by the provider as well as by a large team of security experts in our company. Security is the crux: clients have to be able to go to sleep at night knowing that there&rsquo;s not going to be a problem with their data.</p><p>Greg Lambert (21:16)<br>
Greg, I&rsquo;m going to hit you with a hard follow-up here. What are some security issues that you think people may be over concerned about? And then what&rsquo;s something that you think they&rsquo;re under concerned about right now?</p><p>Greg Mazares (21:29)<br>
Well, I think they&rsquo;re over concerned about</p><p>a data breach of the corpus of data that we may be reviewing on their behalf in some instances. I think that what they need to be concerned about, I&rsquo;ll give an example. I&rsquo;m aware of a situation where document reviewers were using 3D glasses in order to try to capture data, look at data, so forth. They&rsquo;re using technologies.</p><p>This wasn&rsquo;t within our company, but it&rsquo;s something I&rsquo;ve heard within the industry. We are now making sure that any on-site reviews are first and foremost secure. So we do not allow reviewers to bring in anything but the clothing they&rsquo;re wearing, so to speak, maybe traditional reading glasses, et cetera, contact lenses. But bottom line is there are</p><p>Marlene Gebauer (22:23)<br>
I</p><p>Greg Mazares (22:23)<br>
Thank</p><p>Marlene Gebauer (22:23)<br>
hope you let them wear their glasses. It&rsquo;s like if it were me, I&rsquo;d never be able to read anything.</p><p>Greg Lambert (22:23)<br>
I was actually thinking,</p><p>I would say the little red and blue, yeah,</p><p>the red and blue 3D glasses that you wear at a movie theater.</p><p>Marlene Gebauer (22:32)<br>
You</p><p>Greg Mazares (22:33)<br>
It&rsquo;s unreal. There are things that people bring in with them and again, we just outlaw it. You just can&rsquo;t have any of that. So we have to protect the clients because there are still a number of on-site reviews where we have lots of people in a room. Frankly, you have to protect against this</p><p>on the outside as well. So the bottom line is those are the things that we feel there&rsquo;s greater risk because there are a lot of people that you don&rsquo;t have direct control over as we did in the old days where we would have supervisors in a single room with anywhere from 10, 50, 100 people reviewing documents at one time. So this has to be impeccable. That&rsquo;s an example that I would throw out there.</p><p>Marlene Gebauer (23:23)<br>
Yeah, it&rsquo;s interesting because everything I read, the main risk is human error as opposed to the technology failing or something like that. Would you agree with that?</p><p>Greg Mazares (23:35)<br>
Well, I think you can always have technology failure. I think people, first and foremost, you need to make sure that you vet reviewers properly. You need to have multiple layers of vetting. We do have certain criteria where we won&rsquo;t use certain people. We many times will reject</p><p>many more people on projects than we will accept. They&rsquo;re just not the right people, don&rsquo;t have the right level of experience or the track record. And this is why we bought a company with, I&rsquo;ll call it a small army of experienced people that&rsquo;s been in operation for multiple decades. We can be very selective of the right people for the right project to make sure that we give clients exactly what they need, both in terms of skills, experience, expertise,</p><p>and a track record of doing things the right way. So beyond that, I can tell you a lot of money goes into data security systems and the people that safeguard it day in, day out. Once again, clients keep coming back because they know their data is going to be in good hands and it&rsquo;s going to be taken care of the way it should be. There&rsquo;s no shortcuts.</p><p>We need to be unbelievably stringent in data security and in protocols and in processes. And if you do that and you keep doing great work, clients will keep coming back. And that&rsquo;s not something they will worry about anymore, but they should worry about it. We all need to be worried about it because there&rsquo;s a lot of smart, bad people out there too, as we know.</p><p>Marlene Gebauer (25:13)<br>
Exactly. All right, I want to switch the conversation a little bit to sort of institutional growth and its impact on culture. So Purpose Legal has seen a 500 percent growth in recent years and is a multi-time Inc. 5000 honoree and backed by Blue Sage Capital. And in fact, Blue Sage has said that they partner with</p><p>management that they like, trust, and admire. So congratulations. But as you scale through aggressive M&amp;A, I mean, that&rsquo;s challenging from a financial perspective, but it&rsquo;s also challenging from a culture perspective. That certainly has an impact on the organization from a cultural perspective and a human perspective. How do you maintain</p><p>a human-centered culture and still stay that company of choice, both for elite legal talent and sophisticated clients?</p><p>Greg Mazares (26:08)<br>
No, it&rsquo;s a great question, especially in the current economic environment, world environment, the impact of AI, which is worrying a lot of people about their future, rightly so.</p><p>In some cases, maybe not. Maybe it opens up opportunities too. I guess that&rsquo;s the way you look at it. First of all, let me make the point. While we&rsquo;re many times larger than we were, let&rsquo;s say, three years ago, most of it has been organic growth. We haven&rsquo;t purchased a lot of companies. And we were very selective and strategic, I think, in the acquisition of Hire Counsel.</p><p>Why did we do that? And I want to just start there. Why did some people say it&rsquo;s counterintuitive? Because if AI is coming onto the scene, why are you acquiring a company that is heavy, heavy, heavy people-oriented? And the fact of the matter is,</p><p>we saw early on that while AI is amazingly powerful, you&rsquo;ve got to have great people behind the wheel of AI. That&rsquo;s number one. Number two, not all document review projects are going to utilize AI. And so you will have to have people. There&rsquo;s going to be buckets. You&rsquo;re going to have AI all in, AI partly in, and no AI. It&rsquo;s going to be that way, I think,</p><p>for a few years, although it is speeding up where some level of AI is being incorporated in the process, which it should be. I think it should be across the board. But I&rsquo;ve learned enough to know that not everybody jumps on the bandwagon at the same time. There are early adopters and there are those that want to wait and see. So in looking at M&amp;A as an opportunity,</p><p>we thought this particular acquisition was going to help us cover all of the constituencies in this emerging AI environment. And so we can handle projects that involve very few people because the technology can do lots of the work. 
We can handle the midsize projects and then we can put hundreds of people onto a single project who are all experienced, vetted,</p><p>and people we can rely on. We want to be able to handle the different constituencies and different requirements that are out there that I think will be there for many, many years to come. Our growth is about the people that we have becoming more and more productive. The reason we like AI is because we want to make our people more capable of doing more and better</p><p>work faster and be able to be repetitive and do great things for our clients over and over again. So it&rsquo;s going to take the combination of great technology plus great experienced people. And we&rsquo;re still developing the experience levels with people because that&rsquo;s where the training comes into play. And then the other thing is processes. I know that we all didn&rsquo;t invent people, process, technology.</p><p>But in the case of AI, I can&rsquo;t stress enough, process, process, process, driven by really good people and using one of several choices for great technology. That&rsquo;s going to carry the day. It won&rsquo;t be technology alone. It won&rsquo;t be process alone. It&rsquo;s the combination, the three-legged stool, that is going to be so critical to the long-term success of companies</p><p>in this industry. And as I mentioned earlier, we all have to look at ways on our side of the equation, we have to look at different ways to help our clients and to earn a fair profit. Because we&rsquo;re not just hosting terabytes and terabytes of data anymore. As you know, companies like Relativity and Everlaw and others, they&rsquo;re handling that, or firms are making commitments on their own. Law firms are smart. They&rsquo;re making</p><p>good business decisions on their own to invest in technology to host their own data. 
So through a combination of consulting, a combination of the work that we do on core competencies like forensics and analytics, having very smart people to work on what I&rsquo;ll call science projects, for lack of a better term, where people haven&rsquo;t seen certain types of data or how to utilize</p><p>it or how to analyze it or how to store it, et cetera, we&rsquo;re able to add value on some of the hard things about our industry that I think makes us unique. And so we&rsquo;re building the business through repeat clients who start off giving us smaller projects to midsize projects to mega projects to managed services agreements. And that&rsquo;s how we&rsquo;re building the business. Will we look at other M&amp;A opportunities?</p><p>Yes, we will look at everything, but we&rsquo;re going to be extremely selective as we move forward. And so I really do think it&rsquo;s going to be more organic growth and finding different ways to help our clients and to keep growing our people that&rsquo;s going to help us build the business first and foremost.</p><p>Greg Lambert (31:35)<br>
Greg, before we get to our crystal ball question, since you&rsquo;re a lifelong learner, I&rsquo;m interested to hear this. What are one or two kind of must-read, must-listen, must-watch things that you use that you think other people would</p><p>Marlene Gebauer (31:40)<br>
I&rsquo;m very interested to hear what this is going to be.</p><p>Greg Lambert (31:53)<br>
be beneficial for them to watch or listen to?</p><p>Greg Mazares (31:57)<br>
Well, first and foremost, to start out, this may sound really simple: I set up a whole bunch of Google Alerts on various topics. And I&rsquo;m getting information throughout the day. Now I&rsquo;m going to convert my Google Alerts into an AI deliverable because I can get an assessment each day of what&rsquo;s happening in the industry, et cetera, et cetera. Those are kind of baseline types of things.</p><p>But believe it or not, you find things and you see things and you learn things pretty fast if you take the time to think about what categories of information you would like to capture. I read JD Supra every day. I think there are a lot of great articles. I read most of the legal trade publications. I read a lot of books such as,</p><p>right on my desk right now. I&rsquo;m reading Measure What Matters by John Doerr. I&rsquo;m reading Pattern Breakers. I&rsquo;m looking at where things are headed two or three years out, if not beyond. I don&rsquo;t think anybody can predict what&rsquo;s going to happen 10 years from now.</p><p>Greg Lambert (33:11)<br>
Hold that thought because we&rsquo;re about to ask you that.</p><p>Marlene Gebauer (33:13)<br>
Hahaha!</p><p>Greg Mazares (33:17)<br>
But I think the most important thing is to try not to become complacent, thinking that after 10, 20, 30, 40 years in an industry that you know everything there is to know. I learn so much every day. I make mistakes every day. I also make some good decisions every day.</p><p>The bottom line is I think there are books we can read. I think having conversations with people in the industry that we trust is a great way to learn. Talking to competitors, I call them cooperative competitors.</p><p>Greg Lambert (33:59)<br>
Yeah.</p><p>Greg Mazares (34:00)<br>
Talk to competitors and see if there&rsquo;s a common theme in what people are thinking and saying is smart. I think on the law firm side, I&rsquo;m sure you do that all the time with people in other law firms or corporate clients, et cetera. There are so many smart people out there that we can learn from. And so I make it a point every day to read, to have discussions, to talk to my colleagues internally,</p><p>people in our company that can run circles around me on a whole bunch of topics. And then they come to me on things where they think I might be able to add some value. The last thing I&rsquo;m going to focus on is always spending a lot of time each day communicating with people. Recognize their anniversaries, recognize their birthdays, know about what&rsquo;s going on in their families. Let them know how much you appreciate</p><p>what they&rsquo;re doing. Catch them doing things right is a term that we use all the time. And we&rsquo;re actually setting up a committee that each month will recognize wonderful things that people have done, both in the business, in their community, helping each other, et cetera. So that&rsquo;s a roundabout way of saying, I wish I could tell you it&rsquo;s one thing. There&rsquo;s a whole bunch of things we do in order to help</p><p>build a solid, solid company that will have a long runway, I hope, long after I&rsquo;m no longer involved with it.</p><p>Marlene Gebauer (35:32)<br>
So Greg, as Greg mentioned, we do have our crystal ball question. So this is where we ask you to predict the future. So looking ahead over the next three to five years as the engineered intelligence model matures, what do you think is the single biggest shift you see coming for the traditional role of a junior associate and the billable hour in litigation?</p><p>Greg Mazares (35:57)<br>
It&rsquo;s a great and obvious question because clients are not going to want to pay for hundreds and thousands of hours of work that could be done in tens of hours, let&rsquo;s say. And so I think that junior associates are going to become, in my opinion, the ambassadors on how to use AI. They&rsquo;re going to be able to use AI in ways that they haven&rsquo;t before to teach their clients, to help their clients. They&rsquo;re going to elevate the game. I think junior associates are going to become consultants at high levels for clients. They&rsquo;re going to have to learn. The key thing is to make sure they</p><p>still learn enough. And I&rsquo;m not a lawyer, although sometimes I feel I should be after almost four decades in the industry, but I&rsquo;m not, so I&rsquo;ll preface. But I think junior associates got so much training from reviewing documents and understanding what the documents contain and the different types of information and issues and so forth. I think they&rsquo;re now going to have to not only get a grounding in the law, obviously,</p><p>but they&rsquo;re going to have to now elevate to becoming experts in the use and application of new technology so that they can better help their clients. They may become the teachers to senior partners in some ways. They may come in with more knowledge into the firm than some of their elders have. And they may become incredibly helpful in that</p><p>regard. They will also learn. They can also teach and help their corporate clients in some ways. So I think what&rsquo;s going to happen is they will bill hours, but they will probably bill hours for different levels of services at lower levels of hours, but at higher rates in some cases, in many cases. It&rsquo;s going to have to work that way because there will be a trade-off. 
But I don&rsquo;t see, there may not</p><p>be quite as many junior associates, but those that there are, it&rsquo;s going to be like having two or three people because they&rsquo;re going to be able to use amazing personal knowledge coupled with very powerful technology. And it&rsquo;s almost like taking one person and turning that one person into two or three that will help carry the day. But I don&rsquo;t think that all junior associates or the role of the junior associate is going to go.</p><p>It&rsquo;s just going to change, and I come back to adaptability. So we all need to change and if we do, as we have over many decades, we can flourish.</p><p>Greg Lambert (38:47)<br>
Greg Mazares, Sr., CEO of Purpose Legal. Thank you very much for joining us and sharing your daily routine and your knowledge with us. This has been a fun conversation. Thanks.</p><p>Marlene Gebauer (39:00)<br>
Thank</p><p>Greg Mazares (39:01)<br>
Thank you both.</p><p>Marlene Gebauer (39:03)<br>
And thanks to all of you for listening to The Geek in Review podcast. If you enjoyed the show, please share it with a colleague. We&rsquo;d love to hear from you on LinkedIn and our Substack page.</p><p>Greg Lambert (39:13)<br>
And Greg, for listeners who want to follow the Mazares method, as we&rsquo;re going to call it, or learn more about your company or Purpose XI, what&rsquo;s the best place for them to find out more?</p><p>Greg Mazares (39:27)<br>
Certainly contact me on LinkedIn or also go to our website, www.purposelegal.io, and send a message there. I would certainly love to connect with anyone who would like to chat or communicate online.</p><p>Marlene Gebauer (39:42)<br>
As always, the music here is from Jerry David DeCicca. Thank you, Jerry. And bye, everybody.</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>The Latest AI Revolution Just Showed Up in Your Word Doc.</title>
		<link>https://www.geeklawblog.com/2026/04/the-latest-ai-revolution-just-showed-up-in-your-word-doc.html</link>
		
		
		<pubDate>Tue, 14 Apr 2026 13:00:58 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Claude]]></category>
		<category><![CDATA[legal drafting]]></category>
		<category><![CDATA[Microsoft]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19242</guid>

					<description><![CDATA[I&#8217;ll be the first to admit it. Back on February 5th, when Anthropic dropped Claude Opus 4.6 and OpenAI fired back with GPT-5.3 Codex on the same day, I was right there geeking out with everyone else. A million-token context window! A model that helped build itself! People were calling it the &#8220;Kendrick vs. Drake&#8221; of the AI world, which&#8230;... <a href="https://www.geeklawblog.com/2026/04/the-latest-ai-revolution-just-showed-up-in-your-word-doc.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>I&rsquo;ll be the first to admit it. Back on February 5th, when Anthropic dropped&nbsp;<a href="https://www.anthropic.com/news/claude-opus-4-6">Claude Opus 4.6</a>&nbsp;and OpenAI fired back with&nbsp;<a href="https://www.eesel.ai/blog/gpt-53-codex-review">GPT-5.3 Codex</a>&nbsp;on the same day, I was right there geeking out with everyone else. A million-token context window! A model that helped build itself! People were calling it the&nbsp;<a href="https://www.youtube.com/watch?v=JqpI65aVJ30">&ldquo;Kendrick vs. Drake&rdquo; of the AI world</a>, which&hellip; well, okay, I&rsquo;ll allow it just this one time.</p><p>The problem is, February 5th didn&rsquo;t actually change how very many law firms and their lawyers worked the next morning.</p><p>April 10th did.</p><h2>The Sidebar That Changed Everything</h2><p>Last Friday, Anthropic released&nbsp;<a href="https://cybersecuritynews.com/claude-beta-for-word/">Claude for Word</a> as a public beta.&nbsp; A simple <a href="https://www.pcworld.com/article/3082700/move-over-copilot-claude-is-coming-to-microsoft-365.html">Word add-in</a>. A sidebar.</p><p>And I think it might be the most consequential legal tech release so far in 2026.</p><p>I know, I know. A&nbsp;<em>sidebar</em>? But think about what it actually solves. Every lawyer I&rsquo;ve talked to who didn&rsquo;t have a Harvey or Legora plug-in has been using AI for the past two years has been doing the same awkward dance of copying text out of Word, pasting it into a browser, prompting the AI, copying the response back, manually formatting it, hope nothing breaks along the way. It&rsquo;s the AI equivalent of printing out an email so you can fax it to someone.</p><p>The&nbsp;<a href="https://marketplace.microsoft.com/en-us/product/office/wa200010453?tab=overview">Claude Word add-in</a>&nbsp;kills that workflow dead. You prompt Claude right there in the document. 
It returns edits as&nbsp;<a href="https://www.spellbook.legal/learn/claude-for-lawyers">tracked changes</a>. Actual tracked changes, in the review pane, with the formatting intact. Your multilevel numbering doesn&rsquo;t explode. Your defined terms survive.&nbsp;<a href="https://medium.com/@cenrunzhe/claude-for-word-ai-inside-your-document-cf3ae1448ade">Cross-references stay put</a>.</p><p>For transactional lawyers, it goes even further. A single Claude conversation can&nbsp;<a href="https://cybersecuritynews.com/claude-beta-for-word/">span your Word purchase agreement, your Excel valuation model, and your PowerPoint deal summary</a>. You can ask it to check whether the narrative in your financial report matches the underlying data without leaving the document you&rsquo;re working in.</p><p>Is it perfect? No. It&rsquo;s beta. Anthropic themselves have&nbsp;<a href="https://www.thehansindia.com/technology/tech-news/claude-ai-lands-in-microsoft-word-to-transform-document-workflows-1064712">warned against using it for final legal filings</a>&nbsp;without human oversight. And the security folks are rightly flagging the risk of&nbsp;<a href="https://www.thehansindia.com/technology/tech-news/claude-ai-lands-in-microsoft-word-to-transform-document-workflows-1064712">prompt injection attacks</a>&nbsp;embedded in external documents, something that should make every information governance person sit up straight.</p><p>But the direction is unmistakable. AI just moved from the browser tab to the document.</p><h2>Meanwhile, Musk Brought a Debate Team</h2><p>On the same Friday, xAI released&nbsp;<a href="https://www.eweek.com/news/grok-4-20-multi-agent-ai-debate-architecture/">Grok 4.20</a>, and the architecture is genuinely interesting. 
Instead of one model trying to be right, Grok runs&nbsp;<a href="https://help.apiyi.com/en/grok-4-20-beta-xai-flagship-hallucination-multimodal-agent-guide-en.html">four specialized agents</a>&nbsp;(a coordinator, a researcher, a logician, and a creator) that essentially argue with each other until they reach consensus.</p><p>The result? A&nbsp;<a href="https://whatllm.org/blog/llm-releases-march-2026">22% hallucination rate</a>. The lowest ever measured by Artificial Analysis. For an industry where hallucinated case citations have gotten lawyers sanctioned by federal judges, that number matters.</p><p>But here&rsquo;s where it gets weird. Reports surfaced that banks and law firms working on the upcoming SpaceX IPO (we&rsquo;re talking JPMorgan, Goldman Sachs) have been&nbsp;<a href="https://www.pcmag.com/news/musk-forces-banks-to-use-grok-ahead-of-spacex-ipo"><em>required</em>&nbsp;to subscribe to Grok</a>. Not recommended. Required.</p><p>So now we have a world where your legal tech stack isn&rsquo;t just something you choose. It&rsquo;s something that gets mandated as part of the deal ecosystem. That should make everyone uncomfortable, regardless of how you feel about the underlying technology. I guess, since no law firm or corporate AI strategy leader has ever seriously said, &ldquo;You know what we really want to build our AI strategy on? Grok Enterprise,&rdquo; there have to be some other incentives at work.</p><h2>The &ldquo;SaaSpocalypse&rdquo; and Who Actually Survives</h2><p>You can&rsquo;t understand April 10th without backing up to February 3rd, when the announcement of Claude&rsquo;s legal plugins triggered what the market delightfully called the&nbsp;<a href="https://www.spellbook.legal/newsletter/claudes-legal-ai-plugin-sparks-285b-market-selloff">&ldquo;SaaSpocalypse.&rdquo;</a>&nbsp;A&nbsp;<a href="https://legaltechnology.com/2026/02/03/anthropic-unveils-claude-legal-plugin-and-causes-market-meltdown/">$285 billion single-day selloff</a>&nbsp;in SaaS stocks. 
Investors panicked. If Claude can do contract review natively inside Word, what happens to all the legal tech companies whose entire value proposition is&hellip; putting a UI on Claude?</p><p>By April 10th, the dust had settled and the answer was becoming clearer.&nbsp;<a href="https://www.globallegalpost.com/news/legaltech-suppliers-shrug-off-claude-ai-threat-worries-1012373220">Ron Friedmann and the Gartner team noted</a> (and just for fun, <a href="https://texaslawbook.net/after-the-claude-crash-what-agentic-tools-mean-for-legal-research-vendors-and-texas-lawyers/">I did too</a>) that Anthropic&rsquo;s plugin wasn&rsquo;t really a threat to companies like LexisNexis or Thomson Reuters. Those vendors have <a href="https://www.jdsupra.com/legalnews/market-reaction-or-overreaction-6221388/">decades of curated case law and proprietary data</a>&nbsp;that no foundation model can replicate overnight. They slept fine Friday night.</p><p>The companies that should be worried? The ones whose moat was &ldquo;we integrated an LLM.&rdquo; Because that integration moat just evaporated. The&nbsp;<a href="https://www.analyticsinsight.net/amp/story/news/anthropic-launches-claude-for-word-revolutionizing-legal-review-and-financial-editing">Model Context Protocol (MCP)</a>&nbsp;is standardizing how AI connects to databases like iManage and Clio, and it&rsquo;s doing to legal tech wrappers what <a href="https://www.geeklawblog.com/2026/03/anthropics-matt-samuels-and-den-delimarsky-claude-mcp-building-the-usb-c-for-the-legal-tech-stack.html">USB-C did to proprietary chargers</a>. If you&rsquo;re a workflow platform like&nbsp;<a href="https://www.veniosystems.com/blog/legalweek-2026-legal-tech-trends/">DISCO or Consilio</a>, you&rsquo;ve got some runway, but you&rsquo;d better be &ldquo;platformizing&rdquo; fast. And if your company&rsquo;s entire pitch was &ldquo;we put a nice interface on Claude&rdquo;&hellip; well, I hope the resumes are updated. 
Anthropic just showed up in your lane and they&rsquo;re not charging a markup.</p><p>I keep coming back to what <a href="https://haqq.ai/blog/claude-didnt-kill-legal-tech">Stephane Boghossian&nbsp;nailed</a>: &ldquo;Prompting isn&rsquo;t a product.&rdquo; That&rsquo;s the harsh truth of April 10th. The thin-wrapper era is over.</p><h2>The Regulators Apparently Got the Memo</h2><p>If it had just been the Claude and Grok releases, April 10th would have been a busy Friday. But the regulators decided this was their day to show up too.</p><p>The White House dropped a&nbsp;<a href="https://www.lawfaremedia.org/article/white-house-ai-framework-proposes-industry-friendly-legislation">National Policy Framework for AI</a>&nbsp;that proposes preempting state-level AI development regulations. That&rsquo;s a direct shot at California&rsquo;s SB 53 and Colorado&rsquo;s SB 24-205. The framework includes a &ldquo;Right to Compute&rdquo; provision that would prevent states from banning AI-assisted legal research or drafting. For our world, that&rsquo;s significant.</p><p>Speaking of Colorado, Elon Musk&rsquo;s xAI filed a&nbsp;<a href="https://coloradosun.com/2026/04/10/elon-musk-colorado-ai-law-federal-court-lawsuit/">75-page federal lawsuit</a>&nbsp;(<a href="https://www.hrdive.com/news/colorado-ai-bias-law-unconstitutional-elon-musks-xai/817258/">xAI v. Weiser</a>) to block Colorado&rsquo;s algorithmic bias law, arguing it would force Grok to promote a&nbsp;<a href="https://www.theguardian.com/technology/2026/apr/09/elon-musk-xai-colorado-lawsuit">&ldquo;State-enforced orthodoxy.&rdquo;</a>&nbsp;Now, I find the idea that an AI model has a &ldquo;disinterested pursuit of truth&rdquo; that deserves First Amendment protection to be&hellip; let&rsquo;s say a stretch. But Musk&rsquo;s lawyers aren&rsquo;t dumb, and the underlying question (can a state force an AI model to produce certain kinds of outputs?) 
is one that&rsquo;s going to matter a lot more in two years than it does today. Keep your eye on this one.</p><p>And in what might be the most practically useful development of the day, the&nbsp;<a href="https://ipwatchdog.com/2026/04/10/bites-barks-fifth-circuit-awards-google-transfer-on-mandamus-and-third-circuit-says-online-publication-of-copy/">Third Circuit ruled</a>&nbsp;that legal tech startup UpCodes&rsquo; publication of copyrighted building standards online likely constitutes fair use. If that holds, it has real implications for every AI-driven legal research tool that needs to ingest government-published standards.</p><h2>&ldquo;Trust, Not Technology, Is the Constraint&rdquo;</h2><p>Here&rsquo;s where I want to get practical, because I think the industry conversation has gotten stuck on the wrong question.</p><p>Everyone keeps asking: &ldquo;Which model is best?&rdquo; The&nbsp;<a href="https://whatllm.org/blog/llm-releases-march-2026">benchmark tables</a>&nbsp;are everywhere.&nbsp;<a href="https://designforonline.com/the-best-ai-models-so-far-in-2026/">Gemini 3.1 Pro and GPT-5.4</a>&nbsp;are tied for first on the intelligence index.&nbsp;<a href="https://iternal.ai/llm-selection-guide">Claude Opus 4.6 leads in coding</a>.&nbsp;<a href="https://benchlm.ai/compare/claude-opus-4-6-vs-grok-4-20-beta">Grok 4.20 leads in hallucination reduction</a>. It&rsquo;s all very interesting and almost entirely beside the point for most working lawyers.</p><p><strong>The real question is: do your people trust it enough to use it?</strong></p><p><a href="https://www.wolterskluwer.com/en/expert-insights/legal-industry-leaders-explore-earning-and-maintaining-trust-in-ai-driven-world">Wolters Kluwer&rsquo;s research</a>&nbsp;shows 54% of clients now expect their legal partners to be AI-competent. Not &ldquo;exploring AI.&rdquo; Competent. 
They&rsquo;re asking sharper questions about turnaround times and whether those times reflect an&nbsp;<a href="https://www.wolterskluwer.com/en/expert-insights/what-legal-operations-professionals-are-thinking-about-in-2026">AI-accelerated workflow</a>.</p><p>Industry leaders like Mark Brennan and Nicole Stone have it right:&nbsp;<a href="https://www.wolterskluwer.com/en/expert-insights/legal-industry-leaders-explore-earning-and-maintaining-trust-in-ai-driven-world">&ldquo;Trust, not technology, is the constraint.&rdquo;</a>&nbsp;February 5th gave us the raw capability. April 10th started building the infrastructure to make that capability usable.</p><p>And I&rsquo;ll tell you what I&rsquo;m watching most closely: the emergence of law librarians and knowledge managers as the critical players in this transition. The people who have spent their careers managing information, evaluating sources, and training professionals to use complex research tools? They&rsquo;re exactly the people firms need right now. Not sitting at reference desks but operating from what some are calling <a href="https://www.wolterskluwer.com/en/expert-insights/checklist-scaling-trust-ai-enabled-legal-organization">&ldquo;Genius Bar&rdquo; environments</a>, helping attorneys learn to work with AI responsibly.</p><p>The more things change, right?</p><h2>Was April 10th Really That Important?</h2><p>I&rsquo;m going to say yes.</p><p>February 5th showed us what AI&nbsp;<em>could</em>&nbsp;do. It was impressive. It was exciting. And for most working lawyers, it was abstract. April 10th showed us how lawyers would&nbsp;<a href="https://www.artificiallawyer.com/2026/04/11/anthropic-targets-lawyers-with-claude-for-word/">actually work</a>. Claude moved into the Word sidebar. The White House started drawing regulatory lines. The courts weighed in on fair use. 
The market sorted out which legal tech companies have real value, and which were just wrapping someone else&rsquo;s model in a nicer box.</p><p>February 5th was the day we all said &ldquo;wow.&rdquo; April 10th was the day somebody said &ldquo;okay, but where does this actually go in my workflow?&rdquo; And then it showed up in Word. Right there in the sidebar. Like it had always been there.</p><p>So here&rsquo;s what I want to know: when the AI is no longer a separate step, when it&rsquo;s just&nbsp;<em>in</em>&nbsp;the document, invisible, ambient&hellip; who at your firm owns the quality control? Is it the knowledge management and library professionals who have spent twenty years figuring out how to evaluate information sources and train people to use them responsibly? Or is that still being treated as an afterthought?</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>CounselLink’s Kris Satkunas on Rising Legal Spend, Law Firm Rates, and the Future of Value-Based Pricing</title>
		<link>https://www.geeklawblog.com/2026/04/counsellinks-kris-satkunas-on-rising-legal-spend-law-firm-rates-and-the-future-of-value-based-pricing.html</link>
		
		
		<pubDate>Mon, 13 Apr 2026 10:35:13 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[billing rates]]></category>
		<category><![CDATA[CounselLink]]></category>
		<category><![CDATA[law firms]]></category>
		<category><![CDATA[legal ops]]></category>
		<category><![CDATA[legal spend]]></category>
		<category><![CDATA[outside counsel]]></category>
		<category><![CDATA[podcast]]></category>
		<category><![CDATA[value pricing]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19246</guid>

					<description><![CDATA[This week on The Geek in Review, we talk with Kristina Satkunas of CounselLink about what the numbers are saying in a legal market that still talks about change while clinging hard to old billing habits. Kris discusses the hard data behind outside counsel spend, drawing on CounselLink invoice data and Harbor survey results to... <a href="https://www.geeklawblog.com/2026/04/counsellinks-kris-satkunas-on-rising-legal-spend-law-firm-rates-and-the-future-of-value-based-pricing.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>This week on The Geek in Review, we talk with <a href="https://www.linkedin.com/in/kristinasatkunas/">Kristina Satkunas</a> of <a href="https://www.lexisnexis.com/en-us/products/counsellink/default.page">CounselLink</a> about what the numbers are saying in a legal market that still talks about change while clinging hard to old billing habits. Kris discusses the hard data behind outside counsel spend, drawing on CounselLink invoice data and Harbor survey results to compare what legal departments say they expect with what the bills are already showing. She makes the case that the objective data is stubbornly clear. Rates are rising, demand is not falling, and the biggest firms continue to capture a larger share of work.</p><p>There is a widening gap between hope and reality. Legal departments may believe they are on the verge of controlling outside counsel costs, moving more work in house, or shifting matters to smaller firms, but Satkunas notes that the billing data has not caught up to those ambitions. She sees some room for in-house expansion in more routine areas like employment work, especially with AI helping legal teams absorb more volume, yet the largest and most sensitive matters are still flowing to outside counsel. That tension gives the episode much of its energy. Everyone sees pressure building in the system, but the old habits of legal buying and legal staffing remain firmly in place.</p><p>The discussion also gets into the mechanics of better decision-making, and where there is practical value for legal operations leaders. Satkunas emphasizes that data only becomes useful when departments have enough discipline in their enterprise legal management systems to categorize work correctly, clean out outliers, and separate different matter types instead of lumping everything into broad buckets like litigation. She also explains why finance data alone will not do the job. 
The real insight sits inside invoice-level detail, where hours, rates, firms, and timekeepers reveal what is happening beneath the headline spend numbers. For listeners trying to build a stronger legal ops function, this part of the conversation feels like a polite but firm warning that dirty data still tells stories, but some of them are fiction.</p><p>AI is placing an obvious strain on the billable hour model. Satkunas notes that while average partner rate growth has hovered around 5 percent, top-end lawyers are often raising rates even faster, especially as firms try to protect revenue from the work and people they still believe clients will pay for. At the same time, she argues that alternative fee arrangements have remained stuck for years, though AI may finally force movement toward value-based pricing. If technology reduces the hours required to complete the work, then the old logic behind both hourly billing and many flat fees starts to wobble. That leaves firms facing an uncomfortable question: how to price legal services based on value delivered rather than time consumed.</p><p>We&rsquo;d say that Satkunas is neither cheerleader nor doomsayer. She is a patient observer of a market trying to pretend nothing is happening while the floorboards creak under everyone&rsquo;s feet. Her prediction is that real value-based billing will begin to appear in pockets over the next couple of years, even as firms continue squeezing what they can from the billable hour in the meantime. For law firm leaders, legal ops teams, and general counsel, this episode is a sharp reminder that disruption does not arrive with a trumpet blast.
Sometimes it arrives as a spreadsheet, a trend line, and a guest who quietly points out that the data has been trying to warn us for years.</p><p><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>Listen on mobile platforms:&nbsp;&nbsp;</strong></span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://podcasts.apple.com/us/podcast/the-geek-in-review/id1401505293" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink"><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">Apple Podcasts</span></span></a><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>&nbsp;|&nbsp;&nbsp;</strong></span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://open.spotify.com/show/53J6BhUdH594oTMuGLvANo?si=XeoRDGhMTjulSEIEYNtZOw" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink"><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">Spotify</span></span></a><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&nbsp;|&nbsp;</span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://www.youtube.com/@thegeekinreview" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink"><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">YouTube</span></span></a>&nbsp;|&nbsp;<a href="https://thegeekinreview.substack.com/">Substack</a></p><p><span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">[Special Thanks to&nbsp;</span></span><a class="Link-sc-k8gsk-0 feDGbw e-9652-text-link sc-jWfcXB gQGioO" href="https://www.legaltechnologyhub.com/" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink"><span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">Legal Technology Hub</span></span></a><span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">&nbsp;for sponsoring this episode.]</span></span></p><p><iframe title="Spotify Embed: CounselLink&rsquo;s Kris Satkunas on Rising Legal Spend, Law Firm Rates, and the Future of Value-Based Pricing" style="border-radius: 12px" width="100%" height="152" frameborder="0" allowfullscreen allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy" src="https://open.spotify.com/embed/episode/7wRHOIA1TmoH2LpE309uiZ?si=VGfpEIoaQdSVtRv4G-l0Ag&amp;utm_source=oembed"></iframe></p><p><a 
href="https://www.youtube.com/watch?v=1YelFOG0QjQ"><img style=" max-width: 100%; height: auto; " src="https://www.geeklawblog.com/wp-content/uploads/sites/528/embed_thumbs/1YelFOG0QjQ.png"></a></p><p>Email: geekinreviewpodcast@gmail.com<br>
Music: Jerry David DeCicca</p><h5>Transcript:</h5><p><span id="more-19246"></span></p><p>Greg Lambert (00:00)<br>
Hey everyone, I&rsquo;m Greg Lambert with The Geek in Review, and I have our friend Sarah Glassmeyer here. Sarah, I know you attend conferences all the time, but you have kind of a unique way that you use the LTH directory. So tell us more about that.</p><p>Sarah Glassmeyer (00:16)<br>
Yeah, conferences are great, and I do attend the educational programming, but my favorite part, and the most important part for me, both for my gig and because of my interest level, is hanging out in the exhibit hall. And when you go to these exhibit halls, there are dozens, even hundreds, of vendors to keep track of. So this is a perfect use case for what I want to talk about.</p><p>If you&rsquo;re listening to this podcast, I&rsquo;m assuming you are interested in legal technology. Every day there are new products coming out, new changes, new solutions. So the Legal Tech Hub directory is not just a beautifully curated and maintained directory of legal tech products, it is also a tool that you can use. When you are signed into Legal Tech Hub, you can use what we call the portfolio feature. There is a little star by every listing, and you just click the star and it gets added to your portfolio.</p><p>Within that, you can then subdivide it into solutions that you&rsquo;re already subscribed to, ones that you&rsquo;re just kind of thinking about so you can put those in your watching column, and also ones that you&rsquo;ve reviewed but don&rsquo;t like, not for me right now, but you still want to put them to the side without losing them. And whenever I update a listing, or my staff updates a listing, or the vendors themselves come in and provide an announcement, you will get notified about that.</p><p>So it is just a really nice way to keep things organized. Plus, we have a feature within that where you can keep notes. So how I use it is when I&rsquo;m cruising around the exhibit hall and I see something, or I was just at ABA Techshow and watched the Startup Alley, I keep notes as I am doing that. I do not have to keep pen and paper. I can do it on my phone. I can do it on my iPad. And so it is all there now that I am back home and at my desk and can say, what the heck did I actually look at? 
And also, not during conference season, it is a way to keep track of things so you have one place you can go to, keep referring back to, and keep getting updated information.</p><p>Greg Lambert (02:14)<br>
Well, that&rsquo;s a great suggestion on how to use the directory. Thanks, Sarah.</p><p>Sarah Glassmeyer (02:19)<br>
You&rsquo;re welcome.</p><p>Marlene Gebauer (02:28)<br>
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I&rsquo;m Marlene Gebauer.</p><p>Greg Lambert (02:35)<br>
And I&rsquo;m Greg Lambert. Marlene, we talk a lot of theory on here about legal pricing and where we think things should go, but today we&rsquo;re going to get past that and get into some real hard numbers. And we&rsquo;re thrilled to welcome back one of our favorite data geeks, I think that is what we coined the last time, Kris Satkunas. Kris, good to see you.</p><p>Marlene Gebauer (02:54)<br>
Mm-hmm.</p><p>Kristina Satkunas (02:57)<br>
Good to see you too. Thanks for having me back.</p><p>Marlene Gebauer (03:00)<br>
So Kris is the Director of Strategic Consulting at LexisNexis CounselLink. She has over 20 years of experience advising corporate legal departments on how to use data and drive better business decisions.</p><p>Greg Lambert (03:13)<br>
Yeah. And she is also the mastermind behind the CounselLink Trends Report, which she has been authoring now for 12 years. What&rsquo;s the number now? Yeah. So we are excited to have you on and ready to jump into the numbers.</p><p>Kristina Satkunas (03:23)<br>
It&rsquo;s something like that. Yep, close enough.</p><p>All right, cool.</p><p>Marlene Gebauer (03:34)<br>
So Kris, I recently sat in on a webinar that you did with Amy Borsetti from Harbor, and you were combining your CounselLink invoice data, which represents over $75 billion in legal spend, with Harbor&rsquo;s law department survey data. And I think it painted an incredibly rich picture of where costs are heading. So when you put those two massive data sets together, the objective invoice data and the subjective survey data, that is the data dream. What surprised you most about the current direction of legal spend?</p><p>Greg Lambert (04:04)<br>
That&rsquo;s the data dream right there.</p><p>Kristina Satkunas (04:14)<br>
So because I look at the objective data a lot, I don&rsquo;t think there was a whole lot that surprised me there. I&rsquo;m sure we&rsquo;ll talk about that a little bit more here. But when I look at the survey data, one of the things that it said is that only, I think, about a third of the respondents expect legal spend, meaning outside counsel legal spend, to increase this year, maybe into next year, compared to the past.</p><p>Again, when I look at my data, year over year, the Trends Report shows every driver of spend is increasing. Billing rates keep going up aggressively. AFAs have marginal adoption. The biggest, most expensive firms keep on gaining share. So all of those things point to increased spend, right? And demand is not going down either. So that just says to me fundamentally, something has to change, right? If corporations are really going to keep legal costs down, it is not going to be business as usual.</p><p>So I feel like we are moving maybe toward more in-house work, using tech finally a little bit more to make work more efficient. So I think that statistic from the subjective data was probably the most surprising to me, that so few people think outside counsel spend is going to increase.</p><p>Greg Lambert (05:43)<br>
You know, we hear that whenever there is a dip in the economy or there is pressure on the GCs to rein in costs, one of the things that gets rolled out, whether it is 2008, 2012, 2020, or last year, is, we&rsquo;re going to handle more of this ourselves. And I have never really seen that come to fruition.</p><p>Now, I think I know the two letters that everyone thinks will make a difference this time, but do you think AI will enable that to actually happen this time around?</p><p>Kristina Satkunas (06:14)<br>
Yeah.</p><p>Right.</p><p>Well, I mean, I do, but I think it is only certain types of work, right? So the higher-end litigation, the bet-the-farm stuff, the important deals, that is not ever staying in-house. But I think some of the more commoditized work, yeah, I have definitely talked to some of our customers who say they put together a business case to hire another attorney in their employment work or something like that. So yeah, those two letters, could this be the year? Are those two letters going to start really making a difference? Could be, yeah.</p><p>Greg Lambert (06:59)<br>
Yeah.</p><p>This could be it.</p><p>Marlene Gebauer (07:06)<br>
We keep saying it. It is like, this is going to be the time.</p><p>Kristina Satkunas (07:08)<br>
That&rsquo;s right.</p><p>Greg Lambert (07:10)<br>
You made a point in the webinar that really stood out: when clients go through competitive bidding with RFPs and focus primarily on cost, you were saying that could actually be a problem. A lot of people think of RFPs as a driver to get costs down. So do you mind unpacking what you mean by that for the audience?</p><p>Kristina Satkunas (07:44)<br>
Yeah, so it is not just cost, right? Outside of the legal profession, if you were putting something out to bid, like an addition on your house, are you really going to go with just the cheapest bid? You are probably going to consider other factors as well, like their experience and whether you really think they are going to deliver what you want, right?</p><p>So I think customers are looking for, and should be looking for if they are not, in their RFP responses, how the firms differentiate themselves. At a minimum, it could be that firms are communicating that they really understand the legal issue and why it is an important matter to the client, right? Looking for that depth of understanding from the firm, and expertise of course comes into play, what they have done in similar cases. But I have also seen things like communicating information about, this is the legal team that we would bring to you to handle this. It is not just this one partner, but who would the team be?</p><p>There was a time, not too long ago, when that included information about the diversity of those timekeepers, right?</p><p>Greg Lambert (08:58)<br>
I was wondering, is that off limits now?</p><p>Kristina Satkunas (09:01)<br>
I am not seeing it being asked anymore. I do not know that it is really off limits, but it seems people feel like it is, and it is not really being asked. But all of those things, we are not talking about bidding on commoditized, repeatable work, right? Cost is a factor. And I think Amy said that well when she was talking about this in our webinar the other day. The important thing about the cost component of an RFP, or just going to RFP, is it signals to the firm that cost matters, right? So they are going to pay attention to that. But all those other things are part of the equation and have to balance it out. So yeah, it is more than cost.</p><p>Marlene Gebauer (09:49)<br>
So last year we talked about in-house teams using more data for benchmarking and for negotiation. That was sort of a growing trend. And I am curious if this year we have continued to see that. How common is it now for legal departments to have the kind of data hygiene necessary to pull this off?</p><p>Kristina Satkunas (10:18)<br>
So yeah, it is an interesting question. I think that as more and more organizations, and the vast majority of good-sized organizations, are using some sort of enterprise legal management system, the configuration of that system kind of forces them to create some hygiene, right? Categories of types of work they are going to be billing, setting up some criteria for the types of bills that they will receive. And I think consultants who are helping set those up are getting better and better at understanding that the data that comes out is only as good as how you organize how the data is going to come in.</p><p>But I would say, just from a hygiene perspective, something like three-quarters of organizations that have an ELM in place are good enough in their hygiene. That sounds kind of gross to say, their hygiene was good enough, but you know what I mean. So a lot of companies let a law firm bill a flat fee as if it is hourly. So they do that, and then it looks like you have a timekeeper who billed you $20,000 an hour.</p><p>Greg Lambert (11:23)<br>
Yeah.</p><p>Marlene Gebauer (11:23)<br>
You</p><p>Kristina Satkunas (11:39)<br>
That is dirt in the data that you then have to find a way to filter out. But those sorts of outliers stand out and are pretty easily eliminated. So I think we had at least one of these within the webinar that you saw. I like to use box-and-whisker charts to show the span of data, like hourly rates. And that outlier that I just described, that would be a whisker, right? It would fall outside of the box, right? It is the box that any good analyst is focused on and trying to understand, what is normal, what is actually going on.</p><p>So I think the data is better, and you cannot get hung up on those small outliers because they are always going to be there. You just have to have a way to identify them and focus on what is important. I do think that a lot of organizations could still do a better job at tagging their work. So, for instance, you could categorize, and I have seen this, litigation is litigation. We just call it all litigation if it happens to fall under that, but there are a lot of apples and oranges within litigation. So if you are trying to benchmark what you should be paying for, like IP litigation versus employment litigation, it is probably different. So you do need to separate things like that out.</p><p>So I think we are getting better, but there is lots of opportunity to improve. Data could always be cleaner, but at some point you have to say it is clean enough that we are going to start working with it.</p><p>Marlene Gebauer (13:06)<br>
Very different workflows, too.</p><p>So if a GC wanted to start doing this, would it be their financial data? Would they be focusing on sort of the core areas? What would be the best way to approach it?</p><p>Kristina Satkunas (13:41)<br>
So I would say, and I think that tagging of types of work is super important, right, to be able to break down your work into different categories. And then yeah, it is definitely the financial data. The financial data meaning what is coming in on the invoices, right, how many hours were billed for hourly work, how many hours were billed at what rates, and then breaking that down by law firm, by timekeeper, to be able to slice and dice it as needed.</p><p>As long as you have data that is clean enough. If you do not have some kind of ELM in place, then you kind of have to start fresh. I might almost just leave all the other stuff in the past. You really cannot go to your finance team and ask for this data because they are not pulling apart the invoice. They are picking up which part of the business should be charged for this work, but they are not breaking it down into the detail that you need.</p><p>Marlene Gebauer (14:41)<br>
I often wonder where to start sometimes if you are using AI. Do you go back to historical data prior to AI, or do you just start with AI as a base because it is going to improve and see where you go from there?</p><p>Kristina Satkunas (15:00)<br>
Yeah, right, right. It is a good question. I think I would argue for let&rsquo;s just start with what we have now, because that you can fine-tune and feel good about what you are working with now. Going back, you have probably got a lot of noise in that data that might interfere with things, but you can start with what you have now and then maybe take a little slice of what you have and do a little pilot of some of the historic data before you roll it out full force.</p><p>Greg Lambert (15:30)<br>
While we are fleshing the data out a little bit more, I am curious on your end of things, are you able to implement more of the AI tool strategy on parsing the data? Does it help you kind of see through some of that and kind of fix it on your side, or is it still too messy when it comes to you?</p><p>Kristina Satkunas (15:53)<br>
Yeah, I mean, our product team, as we are developing our product, is certainly working on pulling AI into more and more functions of how CounselLink reads bills and evaluates them. So that is certainly going to improve and will continue to improve. For me as the data geek, I am still relying on the database that we create when all of that data gets scrubbed off of the invoices and gets put into this database.</p><p>But I am playing with AI tools for me to search that database and to organize that data and to look for trends. So I do not know, there are just so many opportunities with how AI gets used. But yeah, it is definitely playing a bigger and bigger role, and will continue to, in everything around analytics for sure.</p><p>Greg Lambert (16:56)<br>
I am a little disappointed. I was expecting you to have a murder board up with all the data points and then red string connecting all of the different pieces. But next year.</p><p>All right. There was one thing that you said at the Texas Trailblazers meeting that we had a few weeks ago, and you talked about the latest data showing that partner rates, for a while there, were around 3 percent increases a year, and now they are right at 5 percent. And this is an aggregate across the board. But the real double-take moment was also seeing, and we talked about this last year when you were on, the associate rates and just how high associate rates are getting.</p><p>I think we are all seeing that hours are kind of flat, maybe down, for legal work across many firms, but the rates are going up. Are we headed to some kind of structural breaking of the system? How do you maintain both of those at the same time?</p><p>Kristina Satkunas (17:01)<br>
Thank you.</p><p>Next year, next year, Greg.</p><p>(18:25)<br>
Yeah, I mean, those associate rates, as high as they are, are definitely interwoven with what you are talking about. So yeah, the model is not going to work if you do not have the hours to support it, right? The law firm model as it is, without being able to bill all the hours historically, something has to give there. And I think the model is breaking. I kind of hope the model is breaking. Let&rsquo;s shake things up a little bit. But I think people are going to hold onto it kicking and screaming for quite a while. And they will probably start by hiring fewer associates. I think I have heard some pockets of that starting to happen.</p><p>But the point that you raise about the $2,000-an-hour associate, that is not the associate AI is likely to replace.</p><p>Greg Lambert (19:18)<br>
Mm.</p><p>Kristina Satkunas (19:18)<br>
In fact, the data that I look at, so I look at the median partner and what they are billing and how much their rate went up, but the fact is the data shows, and has shown this for the past couple of years, that those partners and associates already at the higher end, so that $2,000-an-hour associate, are raising their rates more like 7 percent compared to the 5 percent average.</p><p>So those who are already the more senior people, who should be the more experienced people, are raising their rates more. And I will hypothesize that there is some recognition there from law firms that those people and their expertise can still be billed, right? So let&rsquo;s try to monetize that as much as possible, some revenue protection, by increasing their rates more, knowing that the more junior people are the ones that we might not be able to bill as much, if at all. So yeah, it is actually not until recently that I started paying attention to that, and I think it is tied up in what we are talking about here with AI starting to sneak in.</p><p>Greg Lambert (20:38)<br>
So the gap will continue to grow and the rich get richer.</p><p>Kristina Satkunas (20:44)<br>
I think so. And that is what the data has been showing us for the past couple of years. And I think, again, that is in defense of the model, right? That is people saying, the only way this model is going to work if the hours are going down is the rates have to go up. But I am not going to be able to bill for some of these people, so I have got to raise the rates of the people that I am still going to be able to bill. So yeah, it feels that way.</p><p>Marlene Gebauer (20:47)<br>
Great.</p><p>All right, Kris, I want to get a little provocative here. We have been hearing about alternative fee arrangements for years, but your data shows they have been stuck at around 8 to 10 percent of matters for a while. And now that AI is starting to bring those hours down, the efficiency argument for flat fees is changing. So the question is, have AFAs missed their window of opportunity? If the hours are already shrinking because of technology, does the argument for flat fees disappear?</p><p>Kristina Satkunas (21:44)<br>
The efficiency argument is likely disappearing, yes. And if that is the argument for AFAs, then it does pose a problem for the model for sure. I do not know. I actually think the window, and we have talked about this before, I am so sad that AFAs have not taken off because there are so many wonderful reasons that they should have taken off. But intellectually, people were not ready to embrace them.</p><p>But I think that window may finally be opening because people are going to be forced to take it because, again, of those two little letters that are going to change things. If hours are shrinking, law firms have to repackage their services, I think, in terms of value received instead of the hours that had to get churned to do that work. And so that is your point about efficiency. They cannot package it that way.</p><p>An astute general counsel is not going to pay a flat fee that is approximately what the hours used to add up to on the shadow bill that they used to ask for. We see a lot of that, and it makes me a little bit crazy that general counsel ask for shadow bills. They have a flat fee, but then they want to see how much hourly work was done as well.</p><p>So they are not going to pay that larger flat fee for what the work used to cost. So I think that the billing, the AFA model that will start moving, and hopefully we will start seeing more, is billing that is tied more to outcomes, to value, somehow saying what is the value of this matter as opposed to what did it take to get to it. So yeah, I am going out on a limb and saying that window is opening finally.</p><p>Marlene Gebauer (23:39)<br>
We will be asking next year. Yeah. Well, I think about it, the value of the work has not changed. The time it takes to do it has changed, but the ultimate value of it has not changed. So how do we price that?</p><p>Kristina Satkunas (23:40)<br>
I know. I will remember.</p><p>(23:53)<br>
So then what is the other value, right? So how else can law firms add value on top of that? How can they extrapolate what they have done on that matter and help alert their customers to things they should be looking out for down the road, right? So other value-adds that I think have to start coming from that relationship for a general counsel to believe that there really is extra value.</p><p>I am not saying that is easy, but I think that is where we are going to have to start moving.</p><p>Marlene Gebauer (24:27)<br>
I think AI opens the ability to do that. It just opens that time window to do that.</p><p>Kristina Satkunas (24:33)<br>
Yeah.</p><p>Greg Lambert (24:35)<br>
Last year, Kris, you introduced a new metric into the report, and that is market share by new matter spend. So instead of looking at the total annual spend, which may include some of the old matters that get brought over, the report analyzed new matters opened each year to determine what the future market share was.</p><p>Last year, in that first year of reports, you determined that large firms are gaining, and not losing, the share of the new work, that there is actually more work going to the more elite firms, the bigger firms. And then during the webinar, I thought I heard you say that this year more of the day-to-day might be going to smaller regional firms. One, did I hear that right? And two, what has changed?</p><p>Kristina Satkunas (25:32)<br>
So you kind of heard that right, but you did not hear it from me. And that is where I think looking at real data like we have in CounselLink Benchmarking differs from survey data. In the session that we did, it was both a combination of the benchmark data and survey data. The two are definitely complementary, and it was interesting to see how they lined up. But in my experience, and for the most part, survey data tends to give people the opportunity to paint a rosier picture or a more hopeful picture.</p><p>So the survey data that our friends at Harbor presented said that two-thirds of respondents had completed or were working on an initiative to strategically shift work to smaller firms. So there is a lot of wiggle room within that, right? And that is good, right? Well, that or they are working on it.</p><p>Greg Lambert (25:38)<br>
Ha ha!</p><p>(26:42)<br>
Wishful thinking maybe. Yeah, we are working on it. It is like our AI ROI. We are working on it.</p><p>Marlene Gebauer (26:45)<br>
It is like we are looking at it. We are thinking about it.</p><p>Kristina Satkunas (26:52)<br>
We are working, yes.</p><p>But the data does not show it, right? So yes, I still looked. I think last year was the first year we looked at that new metric, and I looked at it again. But for just new matters, new matters opened in 2025, same thing, large law still gaining share of that new work. I always feel a little bit like a Debbie Downer after people report survey data. I am like, well, data does not support what you are saying. But it does not mean it is not coming. So if they are evaluating that and starting to move work down, it will take some time to see that.</p><p>So I am still hopeful that GCs will realize how much of an opportunity there is there, but we are just not seeing it yet.</p><p>Greg Lambert (27:47)<br>
Well, it does not matter what the data says, it is how I feel. That is what is important.</p><p>Kristina Satkunas (27:51)<br>
That&rsquo;s right. I will go along with that, of course.</p><p>Greg Lambert (27:53)<br>
Kris, before we get to the crystal ball, I know you spend most of your time immersed in the CounselLink Insight benchmarking databases, but we have been asking our guests what they are reading or listening to or following outside of their own reports to kind of keep them cognizant of what is going on in the legal industry and business of law.</p><p>Kristina Satkunas (28:29)<br>
Yeah, so I am sorry to say I do not have a go-to, if you are hoping that I would come up with a particular blog that I read or a particular site I am paying attention to. Obviously, I have a feed that has an awful lot coming into it, so I am looking at a lot of things. But probably like you, I am always looking for new webinars or events like the one I ran into you at a couple weeks ago, looking for something that is a fresh take or something new.</p><p>Maybe because I spend so much time looking at data, I think I actually get more value from using my other time talking to people, hearing what they are really doing, and asking them to validate some of the things that I am seeing in the data. So it is always good for me to talk to whether it be our own customers or people in the industry, to dig into things a little bit more. And also, I am always looking for ideas of whether there are things they would like me to research, right? We do have this great database of so many invoices and matters running through it that if other topics come up, I am always interested in hearing those for that reason.</p><p>So no one specific thing, but yeah, I spend a lot of time looking at different articles and listening to what people are talking about.</p><p>Marlene Gebauer (29:52)<br>
Okay, Kris, it is time for the crystal ball question. I know we did this last year, so we will do it again this year. We looked back at your prediction from last year, and we will talk about that and then hear your predictions for this year.</p><p>In 2025, what you were saying, and I am going to paraphrase, is that firms are going to increase their rates even more than what we had been seeing, that things had leveled out around 5 percent for the past few years and that you could see it going up to 6 or 7 percent. And you were hoping there would be pushback on those rates. First of all, how correct were you as far as this year&rsquo;s data? I was going to say, I think she hit it in terms of this year&rsquo;s data. And then, looking forward, as AI becomes integrated into these workflows, are we going to see a real shift to some form of value-based pricing?</p><p>Greg Lambert (30:31)<br>
I think she was pretty good.</p><p>Kristina Satkunas (30:50)<br>
Yeah, so I mean, I hate that I was right, right? It is like business as usual. The trend keeps going along. It does not take much of a crystal ball to see that. But I will go out on a limb. I am hopeful that we have finally entered into a real age of disruption, so to speak, and that we will start seeing some traction in 2026, maybe 2027, probably even more, of that value-based billing starting to show up in some pockets that we have not seen it before, not just flat fees, but real other types of billing like we were talking about.</p><p>I still think, though, that for rates, based on what we were talking about before, I think they are probably going to go up even more than that 5, 6, 7 percent, as many firms try to optimize what is left of the billable hour model, right? Let&rsquo;s get as much as we can out of the people that we are able to bill. So I would not be surprised if, at least at the high end, we start seeing those more senior lawyers increase their rates even more than they have in the past.</p><p>Marlene Gebauer (32:06)<br>
That is just really depressing. And it seems short-sighted. Now would seem to be the time to get ahead of this, like, look, it is coming. Let&rsquo;s get ahead of this trend and be ready.</p><p>Kristina Satkunas (32:09)<br>
Okay.</p><p>But is that what is going to separate some firms from the others, right? Those that are holding on and trying to hold onto that model versus those that are trying to face what is coming and change their model. I think there will be a split between those. I do not know, maybe the data will help me parse firms that way, right? Those that are forward-thinking versus those desperately clinging.</p><p>Greg Lambert (32:45)<br>
Well, see, yeah, there was a little sliver of hope there in that.</p><p>Marlene Gebauer (32:46)<br>
Whole new report, whole new report. Great.</p><p>Kristina Satkunas (32:49)<br>
Yeah.</p><p>Yeah, that is right. I know. I am not a Debbie Downer, really.</p><p>Greg Lambert (32:56)<br>
Kris, the Director of Strategic Consulting at CounselLink, thank you again for coming back this year and sharing all these insights. It is always a pleasure to jump into the data and talk real numbers with you.</p><p>Kristina Satkunas (33:13)<br>
Thanks for having me again. I enjoyed it very much.</p><p>Marlene Gebauer (33:17)<br>
Yeah, thanks, Kris. And thanks to all of you, our listeners, for listening to The Geek in Review. If you enjoyed this dive into data, please share this with a colleague.</p><p>Greg Lambert (33:26)<br>
And Kris, where is the best place for listeners to go to find the latest Trends Report and maybe connect with you as well?</p><p>Kristina Satkunas (33:34)<br>
They can connect with me probably on LinkedIn. That would be the best way. And they could ask me for a copy of the report that way, or they could go out to the CounselLink.com website and find a link there as well.</p><p>Marlene Gebauer (33:47)<br>
And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry. And bye, everybody.</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>From Document Review to Fact Intelligence, Gregory Mostyn on How Wexler.ai Is Reshaping Litigation</title>
		<link>https://www.geeklawblog.com/2026/04/from-document-review-to-fact-intelligence-gregory-mostyn-on-how-wexler-ai-is-reshaping-litigation.html</link>
		
		
		<pubDate>Mon, 06 Apr 2026 02:52:50 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[deposition prep]]></category>
		<category><![CDATA[document review]]></category>
		<category><![CDATA[fact intelligence]]></category>
		<category><![CDATA[Gregory Mostyn]]></category>
		<category><![CDATA[legal AI]]></category>
		<category><![CDATA[litigation strategy]]></category>
		<category><![CDATA[podcast]]></category>
		<category><![CDATA[Wexler.ai]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19237</guid>

					<description><![CDATA[This week on The Geek in Review, we talk with Gregory Mostyn, CEO of Wexler.ai, about how his company is building a sharper form of legal AI for litigation. In a market crowded with broad platforms that aim to handle every legal task at once, Mostyn describes Wexler as a focused system built for one... <a href="https://www.geeklawblog.com/2026/04/from-document-review-to-fact-intelligence-gregory-mostyn-on-how-wexler-ai-is-reshaping-litigation.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>This week on The Geek in Review, we talk with <a href="https://www.linkedin.com/in/gregory-mostyn-9b3a89b8/">Gregory Mostyn</a>, CEO of <a href="http://wexler.ai">Wexler.ai</a>, about how his company is building a sharper form of legal AI for litigation. In a market crowded with broad platforms that aim to handle every legal task at once, Mostyn describes Wexler as a focused system built for one of the hardest problems in disputes, understanding the facts. He shares how the idea grew from watching his father, a judge, carry home stacks of ring binders and spend late nights reviewing case materials by hand. That early picture of legal work, heavy with paper and pressure, became the spark for a company aimed at helping lawyers work through massive records with more depth, speed, and precision.</p><p>A central idea in the conversation is Wexler&rsquo;s view that the most useful unit of analysis in litigation is not the document, but the fact. Mostyn explains that lawyers are often handed a mountain of emails, messages, filings, and exhibits, yet what they need is a clear understanding of what happened, why it matters, and where the pressure points sit. Wexler is designed to pull out events, inconsistencies, and supporting details from that record so litigators are working from a factual map rather than a pile of files. That shift matters because disputes are rarely neat. Important evidence may be tucked inside an offhand message, a late footnote, or an exchange written in vague, coded language. Wexler&rsquo;s aim is to turn that mess into something a trial team can use to shape strategy.</p><p>Mostyn also walks through the mechanics that separate Wexler from more general legal AI products. He describes a detailed fact extraction pipeline that processes unstructured material and turns it into structured data before the system reasons over it. 
That design helps Wexler deal with the disorder of litigation, where timelines blur, people contradict each other, and key details are easy to miss. He also points to the scale of the platform, noting that it handles large document sets and supports work such as deposition preparation, trial preparation, summary judgment briefing, and early case assessment. One of the more striking features is real-time fact checking during depositions, where the platform helps lawyers spot contradictions in testimony as the questioning unfolds. The effect is less like using a search box and more like working with a tireless junior team member who has read the whole file.</p><p>Trust, accuracy, and restraint are another major part of the discussion. Mostyn is careful not to oversell what AI can do. He openly states that no system is perfect, yet he argues that Wexler reduces risk by staying inside the record given to it. It does not search the internet, does not drift into outside material, and ties its outputs back to specific text in the source documents. That discipline is important in litigation, where a made-up citation or invented fact is more than embarrassing, it is dangerous. Mostyn presents Wexler as a tool that helps lawyers verify, question, and sharpen their understanding of the case. The result is less time spent slogging through repetitive review and more time spent thinking about how to use the facts in a meaningful way.</p><p>The conversation closes on a bigger question about where this kind of technology leads the profession. Mostyn believes that as AI takes on more of the burden of document review and fact development, the value of human lawyering rises in other areas. Strategy, advocacy, witness preparation, courtroom performance, and judgment all become more important when the groundwork is assembled faster and more thoroughly. 
He also suggests that clients are beginning to care less about how many hours were spent reviewing documents and more about whether their lawyers are prepared, informed, and effective. For listeners interested in litigation, legal AI, and the next stage of law firm economics, this episode offers a thoughtful look at a company betting that the future belongs to tools built for depth, discipline, and the hard realities of dispute work.</p><p data-start="1979" data-end="2573"><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>Listen on mobile platforms:&nbsp;&nbsp;</strong></span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://podcasts.apple.com/us/podcast/the-geek-in-review/id1401505293" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Apple Podcasts&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span>&#8288;</a><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>&nbsp;|&nbsp;&nbsp;</strong></span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://open.spotify.com/show/53J6BhUdH594oTMuGLvANo?si=XeoRDGhMTjulSEIEYNtZOw" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span>&#8288;</a><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&nbsp;|&nbsp;</span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" 
href="https://www.youtube.com/@thegeekinreview" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span></a>&nbsp;|&nbsp;<a href="https://thegeekinreview.substack.com/">Substack</a></p><p><span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">[Special Thanks to&nbsp;</span></span><a class="Link-sc-k8gsk-0 feDGbw e-9652-text-link sc-jWfcXB gQGioO" href="https://www.legaltechnologyhub.com/" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">Legal Technology Hub</span></span>&#8288;</a><span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">&nbsp;for sponsoring this episode.]</span></span></p><p><iframe title="Spotify Embed: From Document Review to Fact Intelligence, Gregory Mostyn on How Wexler.ai Is Reshaping Litigation" style="border-radius: 12px" width="100%" height="152" frameborder="0" allowfullscreen allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy"
src="https://open.spotify.com/embed/episode/7uDeIBGMbh2Dm7TtrwDBMa?si=DAxz-nkyQHapR4m08ChsNQ&amp;utm_source=oembed"></iframe></p><p><a href="https://www.youtube.com/watch?v=7J7UWMkUSSU"><img style=" max-width: 100%; height: auto; " src="https://www.geeklawblog.com/wp-content/uploads/sites/528/embed_thumbs/7J7UWMkUSSU.png"></a></p><p>&#8288;&#8288;&#8288;&#8288;&#8288;Email: geekinreviewpodcast@gmail.com<br>
Music: &#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Jerry David DeCicca&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</p><h5>Transcript:</h5><p><span id="more-19237"></span></p><p>Greg Lambert (00:00)<br>
Hey everyone, I&rsquo;m Greg Lambert with the Geek in Review and I have Stephanie Wilkins from Legal Technology Hub with us. And Stephanie, it&rsquo;s about that time again to release the GEN.AI map. So what&rsquo;s it look like now?</p><p>Stephanie Wilkins (00:13)<br>
Yeah, it is. It&rsquo;s come to be that every new quarter at Legal Tech Hub means it&rsquo;s another update to the LTH GenAI Legal Tech Map. And somehow we&rsquo;re already there for 2026. As with nearly every map iteration that came before it, we&rsquo;re once again seeing significant quarter-over-quarter growth in the map. As of March 2026, we&rsquo;re right around 1,000 product placements on the map, which is a milestone that we&rsquo;ve been looking forward to since we launched the first map.</p><p>We thought we might hit it by the end of 2025, but we weren&rsquo;t far off the mark with that prediction. But speaking of that first map, if you remember, or if maybe you never saw it the first time around, the first iteration of the map came out in February 2025. We saw 400 product placements for GenAI in legal tech. We updated it just two weeks later, just in time for Legal Week 2025, and we had about 500 product placements. So we&rsquo;re talking&hellip;</p><p>One year later, Legal Week 2026, we&rsquo;re at almost 1,000. So we&rsquo;ve doubled the market in that one year, which is just really impressive. And the count went up significantly even from just three months ago. Our end-of-year map for 2025 had about 850 product placements. So we&rsquo;re still on a steady incline in terms of growth. But where we&rsquo;re seeing growth has shifted a little bit over time. Initially, we saw a lot of growth in hot areas like AI legal assistants.</p><p>And while there still is some growth there, this time around we&rsquo;re seeing our biggest areas of growth in more bread-and-butter or less trendy areas, if you will, like law firm operations and compliance. And to me that signals that AI really is starting to make inroads in legal, and in areas where it can make the most difference internally, not just the areas you want people to think you&rsquo;re working on externally.
And that&rsquo;s sort of in line with what I saw at Legal Week also, that the AI&hellip;</p><p>conversation is maturing a lot and people are having smarter discussions about how we can solve actual problems. And it&rsquo;s all part of the bigger trend of moving from if to how on AI. And I think that&rsquo;s a really great thing. I have heard some people start to speculate whether our map will eventually start getting smaller rather than bigger. Given all the consolidation in the market, that may one day be true, but by all accounts, by everything we&rsquo;re seeing, we&rsquo;re not at that point yet.</p><p>So we&rsquo;re still getting bigger, and I&rsquo;m curious to see where we are at the end of Q2, three months from now. But if you want to see the latest map and our analysis of it, you can just find it on www.legaltechnologyhub.com. And that article will link to prior iterations of the map as well, so you can compare.</p><p>Greg Lambert (02:41)<br>
All right, Stephanie, make a projection. Second quarter, 2027, what&rsquo;s our number?</p><p>Stephanie Wilkins (02:48)<br>
Second quarter of 2027? I&rsquo;ll go 1,500.</p><p>Greg Lambert (02:50)<br>
All right.</p><p>Marlene Gebauer (02:59)<br>
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I&rsquo;m Marlene Gebauer.</p><p>Greg Lambert (03:06)<br>
And I&rsquo;m Greg Lambert. And Marlene, for the past three years or so, the legal tech world has been obsessed with the Swiss Army knife of AI, these tools that try to draft every email, summarize every contract, and basically try to boil the ocean in one go. We&rsquo;ve entered a new phase in</p><p>Marlene Gebauer (03:26)<br>
One size fits all.</p><p>Gregory Mostyn (03:27)<br>
Yeah.</p><p>Greg Lambert (03:30)<br>
what&rsquo;s being called the specialized scalpel, where generalists are being pushed aside by these very high-precision tools designed for the highest-stakes litigation. And today our guest is at the center of this shift. He&rsquo;s the CEO of Wexler, a London-based company that pioneered the category of fact intelligence.</p><p>Gregory Mostyn (03:50)<br>
you</p><p>Greg Lambert (03:51)<br>
and has rapidly become essential infrastructure for some of the world&rsquo;s most elite law firms, including Clifford Chance, Goodwin, and HSF Kramer.</p><p>Marlene Gebauer (04:02)<br>
And Gregory Mostyn comes from a family deeply rooted in the UK legal establishment. His father&rsquo;s</p><p>a recently retired High Court judge, and his siblings are partners and barristers. Greg, you took a little bit of a roundabout way into legal tech, including a brief tenure as an actor on the HBO baking drama Industry. We&rsquo;ve got to hear about this. Gregory, welcome to The Geek in Review.</p><p>Gregory Mostyn (04:11)<br>
Okay</p><p>Good job.</p><p>Banking, banking.</p><p>Greg Lambert (04:27)<br>
Thanking.</p><p>Gregory Mostyn (04:29)<br>
Yeah,</p><p>thanks for having me. Yeah, it&rsquo;s good to be here.</p><p>Greg Lambert (04:33)<br>
So, Greg, we love a good founder&rsquo;s journey, and you&rsquo;ve mentioned in some of your writings that there&rsquo;s a childhood memory of how your father was bringing home these huge physical ring binders of data to review every single night. So how did that image of the binder era of litigation shape your decision to avoid the generalist AI boom and instead focus on</p><p>Gregory Mostyn (04:56)<br>
Yeah.</p><p>Greg Lambert (05:01)<br>
this scalpel approach that you&rsquo;ve been talking about?</p><p>Gregory Mostyn (05:05)<br>
Yeah, so yeah, as you correctly identified, I&rsquo;m not an attorney, but I grew up surrounded by them. I remember basically every Sunday lunch, every sort of family dinner, huge debates about whatever was the latest case raging, not just in the UK but sort of globally, you know, whatever the kind of controversy of the time was. And I do vividly remember my dad coming back having to review literally 10 ring binders before the next day in court, like overnight, you know, staying up till 3am or something and then standing up in court the next day. And my dad was actually a real innovator in the profession. So in COVID, when he was a judge at that point, he was very forward about, you know, conducting hearings on Teams and Zoom and all those kinds of things. And he actually founded a kind of legal tech company in the 90s, as it would happen, which is still going. It&rsquo;s called Class Legal, a family law company, which started off as a sort of</p><p>publisher for legal textbooks within family law in the UK. And it also helps you to create various templates and forms and things. So kind of like a bit of rules-based machine learning, I don&rsquo;t know. But anyway, yeah, I think that was a very vivid memory of my childhood. And when I started this, I went through an incubator called Entrepreneur First, where you&rsquo;re sort of paired up with other brilliant people, supposedly brilliant. And you try and</p><p>come up with a real big pain point, and there was no pain point I could remember more vividly than my dad having to review hundreds, thousands of pages manually, line by line, word by word, to prepare for court for the next day. So that was where we went after.
We spoke to literally hundreds, starting off in my network, people I knew, you know, family, friends, et cetera, and growing and going out into the world across the US as well as in the UK, and identified that this was a real problem, basically establishing the facts in disputes, you know.</p><p>You either hammer the law or you hammer the facts. And actually, as a sitting judge, my dad realized that half the lawyers in front of him didn&rsquo;t know the facts of their cases. They probably physically couldn&rsquo;t in a lot of cases because there were too many documents to review manually. And you&rsquo;re always going for a sort of subset of the data involved. And yeah, I think that was the Prince Harry case versus a big newspaper here, where they just had to take a small section of the potential evidence to review because there wasn&rsquo;t enough time or resource to review the whole case. And I thought, hang on a minute, we could build something, or actually</p><p>we can review everything and give it the attention it deserves so that every client gets the best representation. And hey, maybe those lawyers don&rsquo;t have to stay up till 3 a.m. to review it cover to cover. So yeah, that&rsquo;s been the story since then. We&rsquo;ve been ruthlessly focused on trying to do something more sort of specialized rather than, as you say, trying to boil the ocean. But it&rsquo;s now become this kind of full platform for analyzing, establishing and verifying facts in disputes.</p><p>Maybe it&rsquo;s more like a kind of surgeon&rsquo;s tray of scalpels rather than just the one.</p><p>Greg Lambert (07:51)<br>
I&rsquo;m curious,</p><p>did you experiment on your siblings and your father to test these things out as you were developing them?</p><p>Gregory Mostyn (07:58)<br>
100%, at the beginning my dad was very closely involved. My stepmum&rsquo;s also a barrister in the UK. My brother&rsquo;s a partner at Cleary. So yeah, they were my first proto beta testers, if you like, showing them Figma prototypes, understanding how things would work. You know, if we think about how far the models have improved, at the time they were still very, I mean, at the time they were incredible. But now looking back, you really had to do a ton, a ton of prompting, of stitching together the different systems to get the best output.</p><p>And actually we&rsquo;re now the beneficiaries of the billions that are being poured into the models because, you know, it&rsquo;s a bit of a modular system that we&rsquo;ve created. So one new model release improves one part of our product and so on and so on. But yeah, at the time it was kind of Mechanical Turk, if you like, but very, very helpful to have that, you know, expertise and so on. Yeah, exactly. But yeah. Cool.</p><p>Marlene Gebauer (08:40)<br>
you</p><p>Greg Lambert (08:40)<br>
Thank you.</p><p>Gosh, I haven&rsquo;t heard Mechanical Turk in a while.</p><p>Marlene Gebauer (08:50)<br>
So</p><p>for our innovation and KM leaders who are listening, we often hear them say, we already have eDiscovery tools for document review. Now, you&rsquo;re coming at this from a different angle. You&rsquo;ve argued that eDiscovery prepares documents for humans, but fact intelligence is different. It reads them like a human does. So it&rsquo;ll extract events from footnotes on page 993 or buried in a WhatsApp thread.</p><p>Gregory Mostyn (08:54)<br>
Yeah.</p><p>Mm-hmm.</p><p>Marlene Gebauer (09:19)<br>
Can you explain why the atomic unit of legal knowledge needs to shift from the document to the individual fact, and how that changes a firm&rsquo;s</p><p>Gregory Mostyn (09:30)<br>
Yeah, for sure. I think, you know, if you imagine a hypothetical case, let&rsquo;s say it&rsquo;s a multi-party fraud claim, 80,000 relevant documents, you know, multiple plaintiffs, different advisors saying different things, you know, there are inconsistencies, one person&rsquo;s claiming something, someone else is claiming something else, a third party is saying that never happened, someone else has got, you know,</p><p>another dog in the fight where they&rsquo;re trying to bring up some other discovery, which is important. It&rsquo;s not that helpful to say, here&rsquo;s some relevant documents, right? It&rsquo;s not that helpful to say, here are the documents that are responsive to certain search terms, because people don&rsquo;t speak in search terms. People speak in encoded, obfuscated language. Things don&rsquo;t always add up. And so what you need is something that&rsquo;s able to basically distill the documents into the facts. Not just saying, hey, here are 10 relevant documents, but saying, this is what happened,</p><p>this is why it happened and this is why it&rsquo;s important to the case. And so the process is basically extracting the discrete observable happenings, if you like, i.e. the events, from the documents, assigning relevance based on a list of issues that the attorneys themselves provide, which may be taken from the complaint or, you know, from the pleadings, etc. And then using that as the database which you reason over means that you&rsquo;re, you know, you&rsquo;re basically equipped with all of the information you need to build the most compelling case and</p><p>You&rsquo;re not just saying, here are some relevant documents. You&rsquo;re saying, this is what happened and this is why it matters. So it&rsquo;s kind of, it&rsquo;s complementary to eDiscovery, right? We are the sort of second level, the deep strategic layer that you take once you&rsquo;ve got the relevant documents, or maybe before eDiscovery, right?
A client produces 10,000 documents and says, get back to me by Friday with where we stand. But the output isn&rsquo;t just, here&rsquo;s a kind of document set which we&rsquo;ve built up. It&rsquo;s actually the story, because litigators tell stories.</p><p>They tell stories backed up by the facts.</p><p>Greg Lambert (11:21)<br>
Yeah, that&rsquo;s&hellip;</p><p>Marlene Gebauer (11:21)<br>
I</p><p>think it&rsquo;s important you mentioned it&rsquo;s kind of a second step, because sometimes there are millions of documents that you&rsquo;re talking about, and I don&rsquo;t know how many you can accept at one time, but that first step is still critical to narrow it down to what you want to look at in the inquiry.</p><p>Gregory Mostyn (11:29)<br>
Yeah.</p><p>100%. So yeah, we can do a lot. We can do 250,000 documents, which is, I think, higher than most of the LLM legal platforms I&rsquo;ve come across, and that&rsquo;s AI-native processing. We hope to get to a million by the end of the year. But you&rsquo;re right, it works from the smaller universe. One of my colleagues says, you wouldn&rsquo;t want a surgeon to be giving you aspirin, right? There&rsquo;s no point putting a whole custodian&rsquo;s mailbox in Wexler, because you want to know that what you&rsquo;re putting in is at least broadly relevant to the case, because</p><p>Marlene Gebauer (11:52)<br>
It&rsquo;s not bad, not bad.</p><p>Gregory Mostyn (12:11)<br>
of, you know, what we&rsquo;re going through and extracting is really extensive and painstaking. So yeah, there is that first step, which is critical for the first cull, but actually taking that smaller universe of documents and turning it into winning case strategy is critical. And so where we actually get used a lot is for depo prep, trial prep, summary judgment briefing, plus early case assessment. And obviously, it&rsquo;s jurisdiction-agnostic as well, so we do arbitrations, investigations, and so on.</p><p>Marlene Gebauer (12:36)<br>
So how does it compare to, say, one of the generalist tools? Because they certainly can be used in that way and do comparisons. How does it differ? What&rsquo;s the secret sauce?</p><p>Gregory Mostyn (12:43)<br>
Yeah.</p><p>So I think it&rsquo;s about that fact extraction pipeline I talked about. Essentially, the key difference is that we process the documents, we pass them through this 15-step pipeline, which doesn&rsquo;t just stuff documents into one context window and say, hey, go look for X or go look for Y. What it does is normalize that data. So even if something is a one-message WhatsApp saying, I flew to Paris last year,</p><p>versus something really neatly laid out saying the plaintiff flew to Paris in this year, and obviously you guys are both technically in the weeds here, you&rsquo;ll know that LLMs have a bias toward more structured information. And so what happens is, if you just chuck documents into a generalist tool, yes, it will give you broad insights, but in order to really find the key contemporaneous piece of evidence, the kind of thing trial attorneys are looking for, that smoking gun to help them build the winning case, it&rsquo;s not always clear.</p><p>And so it&rsquo;s that fact extraction pipeline, where we stitch together all these different models into this 15-step process, which rationalizes and elucidates that complex information, normalizes it into structured data which the system can then work over. We hosted a dinner at Legal Week, and one of the innovators there, who worked in litigation, was saying, litigation is messy. Like, it is very, very messy. Things are not clear. Things are obfuscated, either deliberately or accidentally.</p><p>You&rsquo;ve got things being overwritten by other things all the time, and you need something that can first distill that into a structured form and then use that as the unit of analysis. So that&rsquo;s one of the key differentiators. Obviously scale, which we&rsquo;ve already touched on. And then we&rsquo;ve got some other functionality, which is only possible because of this fact bank that we&rsquo;re building up. 
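The normalization described here, a WhatsApp one-liner and a formal pleading yielding the same structured record tagged against attorney-supplied issues, can be sketched in miniature. This is an illustrative toy, not Wexler's actual pipeline: the `Fact` shape, the string trigger, and the keyword-based issue tagging are assumptions standing in for the LLM passes a real 15-step system would use.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Fact:
    actor: str             # who
    event: str             # the discrete observable happening
    source_doc: str        # document id the fact came from
    source_sentence: str   # exact sentence, for sentence-level citation
    issues: List[str] = field(default_factory=list)  # attorney-supplied issues it bears on

def extract_facts(documents: List[Dict], issues: List[str]) -> List[Fact]:
    """Distill heterogeneous documents into uniform fact records.
    A real pipeline would use LLM passes; this uses a toy string trigger."""
    facts = []
    for doc in documents:
        for sentence in doc["text"].split(". "):
            if "flew" in sentence.lower().split():  # toy trigger for a travel event
                facts.append(Fact(
                    actor=sentence.split(" flew")[0].strip(),
                    event=sentence.strip(" ."),
                    source_doc=doc["id"],
                    source_sentence=sentence.strip(" ."),
                    # tag the fact against any issue sharing a keyword with it
                    issues=[i for i in issues
                            if any(w in sentence.lower() for w in i.lower().split())],
                ))
    return facts

docs = [
    {"id": "whatsapp-017", "text": "btw I flew to Paris last year"},
    {"id": "filing-002", "text": "The plaintiff flew to Paris in 2023. Costs were disputed"},
]
facts = extract_facts(docs, issues=["travel to Paris", "disputed costs"])
# The casual message and the formal filing now share one record shape,
# so downstream reasoning weighs them equally.
```

The point of the uniform record is that relevance tagging, contradiction checks, and chronology building can then run over `facts` without caring whether the source was a chat message or a pleading.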
So one of them is real time, which is basically live fact checking for depositions, and it&rsquo;s only possible because of that database we built up.</p><p>We could put it on right now and it could fact check all the things that I&rsquo;m saying, and probably it wouldn&rsquo;t show anything, I&rsquo;m happy to say. So yeah.</p><p>Greg Lambert (14:43)<br>
It would let</p><p>us know that you flew to New York recently. Well, speaking of real time, I also want to talk about the digital trainee, Kim, that you set up. So real time allows, like you said, the litigators to flag contradictions in testimony during a live deposition as it happens. And you have famously said that, you know,</p><p>Gregory Mostyn (14:47)<br>
Exactly.</p><p>Yeah.</p><p>Yeah.</p><p>Greg Lambert (15:10)<br>
Truth is a nebulous concept, but contradiction is very verifiable. So how are tools like Wexler and your AI agent acting more like a digital trainee or a partner rather than just a passive search tool? I&rsquo;ll let you answer that, and then I want to follow up with a couple more.</p><p>Gregory Mostyn (15:13)<br>
Yeah. Yeah. Yeah.</p><p>Mm.</p><p>I mean, it&rsquo;s interesting, I think&hellip;</p><p>Basically, what Kim does really well is sift through vast amounts of information and pick out patterns. We don&rsquo;t opine on the veracity of things, right? We only look at the documents that we&rsquo;ve been given. So within the universe of documents, are there contradictions against other documents, other facts within the same data set? That&rsquo;s what we can do. This is how we minimize, basically eliminate, hallucination risk, inaccuracies, those kinds of things. It doesn&rsquo;t search the internet, it doesn&rsquo;t look in its training data.</p><p>It is absolutely ruthlessly and rigidly told to focus on that information. And the way we structure all the data into those facts really drives up that accuracy. So as much as it&rsquo;s a very helpful digital trainee, as you say, it&rsquo;s limited to the four corners of the page of the documents, plus the metadata that you&rsquo;re giving it. So that&rsquo;s really important. That&rsquo;s how we drive up the accuracy. And yeah, maybe we&rsquo;re losing out on some potential functionality, but actually it&rsquo;s much more important for us to be accurate,</p><p>and allow the lawyers to then apply the law to the facts and look on the internet and those kinds of things. So yes, truth is a nebulous concept. We can&rsquo;t say for sure if something is true, but we can say if something looks dodgy, or if something looks odd, or there are fact patterns that don&rsquo;t quite add up. But if it&rsquo;s not contradicted by another piece of data within the same data set, that&rsquo;s not our job, that&rsquo;s not the AI&rsquo;s job, to call it. And that&rsquo;s kind of our core belief, I think. That&rsquo;s our thesis. It should be an accelerant and an enhancement. It&rsquo;s not going to take you three weeks to do this task, or three days. 
It might take you three hours.</p><p>But importantly, those three hours will help you get better at understanding the facts of the case, so that you can stand up in court or prepare for a deposition and actually know what&rsquo;s going on, not just &lsquo;the AI told me this.&rsquo; So I think, yes, it&rsquo;s a really helpful digital trainee, but it is only looking at the documents you give it. It&rsquo;s not looking elsewhere. And it can look for patterns, but it&rsquo;s not going to lead you to the wrong place. It&rsquo;s not going to look at the internet, and it&rsquo;s not going to hallucinate case law, because that&rsquo;s not in the remit of what it does.</p><p>Greg Lambert (17:28)<br>
Do you rely on the LLM to work with these documents? And the reason I ask is, I&rsquo;ve seen it, and I&rsquo;ve had people come up to me, where they may dump a few hundred or a few thousand documents in, and then all of a sudden it introduces characters that don&rsquo;t exist.</p><p>Gregory Mostyn (17:48)<br>
Yeah.</p><p>Greg Lambert (17:50)<br>
There&rsquo;s always like a Robert Chin that shows up somewhere. And so how is it that you&rsquo;re taking the information and making sure that you&rsquo;re reducing the chance of hallucinations?</p><p>Gregory Mostyn (17:54)<br>
Yeah, we do.</p><p>So there are a few different things. I think technically, the fact that we&rsquo;re operating from structured data rather than unstructured data massively drives up the accuracy of the system. And this has been tested against other generalist platforms by our customers regularly. Basically, because it doesn&rsquo;t run out of context, it doesn&rsquo;t have to invent or confabulate people when it&rsquo;s looking for the kind of smoking gun. Obviously, this is kind of table stakes now, but I think you&rsquo;d be surprised:</p><p>everything is sourced down not just to the document but to the sentence that it&rsquo;s been taken from. And then also we have extreme guidelines and guidance in the back end to say, you will return a null answer if there&rsquo;s nothing within the four corners of the document that supports this assertion. Can we say it&rsquo;s 100% accurate? Of course not. And any AI vendor telling you it is, you know, you need to go and sell them back the bridge they&rsquo;ve sold you, right?</p><p>But I think it is highly, highly accurate, and more than anything, the verification flow, where you can really quickly click on each source and independently verify it, is helpful: one, to verify, but two, you&rsquo;re actually looking at all the documents, you&rsquo;re getting familiar with the documents as they should be viewed, as if they are printed out, and you&rsquo;re finding those patterns too. So yeah, there are technical ways, which is basically the structured data from that fact extraction pipeline I talked about earlier.</p><p>And then there&rsquo;s obviously guidance, and there&rsquo;s also the importance of change management and training, which you two will obviously be working with your attorneys on, to ensure that everyone understands that this is a new type of technology. You know, it&rsquo;s not going to return the same result every time. 
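The two guarantees in that answer, sentence-level sourcing and a null answer when nothing supports the assertion, amount to a simple contract. The sketch below is hypothetical (the function name and fact-bank shape are not Wexler's); it just illustrates answering strictly from a closed, pre-extracted record and abstaining otherwise.

```python
from typing import Dict, List, Optional

def grounded_answer(question_terms: List[str],
                    fact_bank: List[Dict]) -> Optional[Dict]:
    """Answer only from the fact bank. Every answer carries a
    document-plus-sentence citation; if no fact supports the question,
    return None (abstain) instead of guessing."""
    for fact in fact_bank:
        if all(term.lower() in fact["event"].lower() for term in question_terms):
            return {
                "answer": fact["event"],
                "cite": (fact["source_doc"], fact["source_sentence"]),
            }
    return None  # nothing within the four corners of the documents

fact_bank = [
    {"event": "The plaintiff flew to Paris in 2023",
     "source_doc": "filing-002",
     "source_sentence": "The plaintiff flew to Paris in 2023"},
]

hit = grounded_answer(["plaintiff", "Paris"], fact_bank)
miss = grounded_answer(["plaintiff", "London"], fact_bank)  # unsupported: abstains
```

Forcing the abstain path is what keeps the system inside the document set: a reviewer can click the citation and verify the sentence, and an unsupported question surfaces as a gap rather than a fabricated fact.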
You can limit the variance, but actually it&rsquo;s like having a sort of second opinion. If you gave the same task to a hundred attorneys, they&rsquo;d probably return a hundred different results. So yeah.</p><p>Marlene Gebauer (19:44)<br>
So in your recent writing, you described 2026 as the year of market chaos, yikes, where traditional law firm knowledge moats have evaporated. Now, if AI can establish a factual record with 95% accuracy and save 90% of a junior associate&rsquo;s manual review time, what does that do to the traditional economic model of big law?</p><p>Gregory Mostyn (19:49)<br>
Haha.</p><p>Mmm.</p><p>Look, I think it&rsquo;s a big question, and I&rsquo;m sure I&rsquo;m not qualified to answer it in full, but my thoughts are: one, there&rsquo;s a huge amount of wasted time which doesn&rsquo;t bill to the client. It&rsquo;s not valuable time. It&rsquo;s dead time, basically, which we can massively reduce, meaning people can go home earlier. They can spend more time focusing on strategy and how to actually use this information. So, you know, one of our customers, a partner actually, said</p><p>Wexler identified an inconsistency. He reckons he would have found it maybe in a couple of days, but instead he could spend those couple of days planning strategy rather than just churning through documents, right? And actually working out how to use this in an offensive or defensive way. So he wasn&rsquo;t losing any billable time, but the time that he was billing was much more valuable for the outcome of the case, and in the kind of theater of litigation, that&rsquo;s what you&rsquo;re looking for, the upper hand. Increasingly, we are seeing fixed fees with some of our clients, people looking for more value-based billing, or maybe a hybrid where it&rsquo;s a</p><p>project-based fixed fee and you get a menu of outputs. And actually, I was thinking about this: why do you bring in a big law firm if you&rsquo;re in a bet-the-company dispute? It&rsquo;s because you want the reassurance that they&rsquo;re going to represent you, because it&rsquo;s the most important thing in your company&rsquo;s history, or one of the most important things, depending how big the company is and how litigious they are. But you want to know that when you bring in the Quinn Emanuels or whoever it is, one of the massive firms,</p><p>they are going to go out and fight for you tooth and nail. And I think you can&rsquo;t really put a price on that. 
So the value system should be more reflective of that, rather than just the number of hours of documents that you&rsquo;re reviewing. Because if Wexler was in a big dispute, I&rsquo;d definitely want them to be using AI, but I&rsquo;d still want there to be humans representing my best interests, because that&rsquo;s what you pay for. That&rsquo;s the assurance. So yeah, I mean, we&rsquo;re going to see a lot of creative thinking about the billable hour and how this changes in the years to come.</p><p>But I think for the short term, there&rsquo;s a huge amount of wasted dead time which can be reduced, with better quality of life for the litigators and better outcomes for the clients. And longer term, I think we need to start thinking more about value-based billing. Where we work specifically, obviously a lot of the work is oral advocacy and standing up in court, and I think people are going to want to prepare for that in the same way that athletes train up for the big race. And I think that&rsquo;s going to be a really important part of the work</p><p>that AI can help with too.</p><p>Marlene Gebauer (22:31)<br>
I like your one example, that the partner was able to devote more time to strategic thinking. I mean, are you hearing that more broadly? Or was that just kind of an isolated example?</p><p>Gregory Mostyn (22:35)<br>
Yeah.</p><p>No, absolutely. Like, all the time. We had an example where, you know, they found an inconsistency which their forensic accountant had missed, and then they were able to use that inconsistency throughout the rest of the matter as a key piece of strategy. I think we&rsquo;re definitely seeing it&rsquo;s reducing the grunt work and freeing up time to think bigger picture, you know, think about where this kind of shakes out, what are our most&hellip;</p><p>what are our best strategies? You can even do adversarial analysis, where you talk to the chat assistant and say, okay, you&rsquo;re the other side now, let&rsquo;s run through some hypotheses, and it can obviously review every single document, and you need to know what to say in response. So yeah, we&rsquo;re definitely hearing that people are massively reducing the busy work, and it&rsquo;s freeing up time to think more strategically.</p><p>Marlene Gebauer (23:32)<br>
It&rsquo;s good to hear. I mean, are you hearing anything from the client side? Not your client side, but our client side.</p><p>Gregory Mostyn (23:38)<br>
The end clients? Well, indirectly, but we have clients who also get access to the platform. Without getting too into the weeds, we&rsquo;re not seat-based, we&rsquo;re consumption-based, so we don&rsquo;t care how many users there are, and we give access to the clients as well. In the UK, we also give access to the barristers, who are independent, the kind of trial attorneys. So yeah, we definitely hear people really like it. Obviously they like better value for money, but I also think they like that we&rsquo;re covering all bases here. We&rsquo;re reviewing every document. And you know, my</p><p>Marlene Gebauer (23:39)<br>
Yes.</p><p>Gregory Mostyn (24:08)<br>
my colleague was a litigator for several years at Mayer Brown, and he was saying, you know, I was reviewing documents at 3am in my bed, just going one by one, yes, no, yes, no. And that&rsquo;s going to be fraught with human error, right? And I&rsquo;d probably rather pay for AI to do it anyway, even if it was the exact same number of hours, because you know that it can cover all the bases. So exactly, it&rsquo;s not going to be tired. It&rsquo;s not the end of a long week. It doesn&rsquo;t have</p><p>Marlene Gebauer (24:29)<br>
They&rsquo;re not sleepy, it&rsquo;s not sleepy.</p><p>Gregory Mostyn (24:35)<br>
you know, all the personal things going on that every human does. So, you know&hellip; maybe it does, who knows.</p><p>Greg Lambert (24:40)<br>
So, Greg, I&rsquo;m gonna hit you with a question that&rsquo;s off script here, but it&rsquo;s just something that, as we&rsquo;ve gone around and started talking with people, and you probably heard this at Legal Week as well: training. How are you approaching the training? Are you&hellip;</p><p>Gregory Mostyn (24:43)<br>
Yeah.</p><p>Yeah.</p><p>Greg Lambert (24:59)<br>
able to actually leverage the AI to help you kind of learn the AI or what&rsquo;s the training method that you take for getting people up to speed?</p><p>Gregory Mostyn (25:10)<br>
Wexler, yeah, how we train people to understand it. So we do a few different things. We obviously do hands-on training. We usually do it by sub-practice group. We also offer top-up training for a specific case, when people want to go deeper, and we&rsquo;ve even signed NDAs and actually got really into the weeds of the case with the customers, although that&rsquo;s obviously not our default position, because of confidentiality. So we do a lot of hands-on training. We go in person, you know, London and New York. We do walk-through training and so on and so forth.</p><p>What we also do, which I think is really important, is a kind of workshop that&rsquo;s not about Wexler but about AI in general. It&rsquo;s like I was saying to the team, you know, we need to convey to even the most sophisticated attorney how this technology can both solve a really complex reasoning problem and also not count the number of R&rsquo;s in strawberry. People don&rsquo;t understand how those two things can be true at the same time. And so</p><p>it&rsquo;s really important that before we even give anyone access to Wexler, or maybe it&rsquo;s after, but whatever, while they&rsquo;re using it, they understand that this is a new type of technology. It&rsquo;s a kind of pattern matcher. That doesn&rsquo;t mean it&rsquo;s not useful. It&rsquo;s very, very useful, but you need to know those kinds of general things to be aware of and what to look out for. So yeah, we do a lot of training. We do a lot of workshops. We do general AI familiarity, you know, educational workshops as well.</p><p>And we do top-ups for individual matters. So what often happens is people will be using it regularly and they&rsquo;ll be like, we&rsquo;ve got a massive case that&rsquo;s just come in, we want to do a special training for the attorneys on that. And then we&rsquo;ll do that too. So yeah.</p><p>Greg Lambert (26:45)<br>
Okay, well, kind of dovetailing with this, before we get to the crystal ball question, we&rsquo;ve been asking our guests to share with us some of the resources, newsletters, or thinkers that you are relying on to keep ahead of the compute cycles of these big AI foundation labs. So what helps you keep up with things?</p><p>Gregory Mostyn (26:57)<br>
Yeah.</p><p>Yeah.</p><p>So there&rsquo;s a brilliant technologist called Benedict Evans. I don&rsquo;t know if you know him, but he&rsquo;s a great follow. He&rsquo;s good because he has a healthy dose of skepticism. And I think he quite rightly says that, yes, it&rsquo;s incredible technology, but we&rsquo;re still early and we don&rsquo;t really know how this is going to shake out. And so it&rsquo;s quite a healthy antidote to the kind of&hellip;</p><p>you know, AGI-is-coming, all those kinds of things. So I like to keep my feet on the ground with him. One of the things I always go back to is one of his tweets, which is basically: anyone who says they&rsquo;ve solved the accuracy problem in AI is lying to you, right? Because it is not going to be 100% accurate. That&rsquo;s just generative AI. But that doesn&rsquo;t mean it&rsquo;s not useful. It&rsquo;s still unbelievably useful. Like, I use</p><p>Claude all day, pretty much, and okay, it&rsquo;ll still throw out a random person every now and then. That doesn&rsquo;t mean it&rsquo;s not useful. It just means you need to know what to look for. So yeah, that, plus I have a bunch of other newsletters that land in my inbox each day, Ethan Mollick obviously, you&rsquo;ll know him, and various others on the legal side. But I try not to read too much about what&rsquo;s going on. I always try to stay focused on what we&rsquo;re doing and just building something that&rsquo;s great for our customers. And I&rsquo;m always amazed by how low the penetration actually is among lawyers. You speak to practicing attorneys, and they might have been using Copilot, or maybe the firm&rsquo;s got a Harvey license and they&rsquo;re having a play around, but they haven&rsquo;t really used it. 
And so, yes, there&rsquo;s so much noise and there&rsquo;s so much hype and there&rsquo;s so many VC dollars going into this market, but actually I think we&rsquo;re still very early.</p><p>Marlene Gebauer (28:57)<br>
So Gregory, now it is time for the crystal ball question. Looking ahead the next three to five years, what&rsquo;s the single biggest change that you see coming for the role of the oral advocate or the trial attorney?</p><p>Gregory Mostyn (29:10)<br>
Yeah, so I kind of already touched on this, but I think the human side of AI is going to be even more important, because the document review side is going to matter less. So the pressure on the human side, mediations, arbitrations, trials, depositions, is going to be even higher, and AI that can help you prepare for those is going to be really, really powerful. I think, you know, if you can review every single document in the world, let&rsquo;s say,</p><p>then there&rsquo;s not going to be any smoking gun that hasn&rsquo;t already been found. It&rsquo;s going to be about how well you deliver that argument. It&rsquo;s going to be about how well you bring the story together. It&rsquo;s going to be about the way that you interface with your client, and the way that you bring the context that&rsquo;s maybe not written down on any document into the story, and how you bring it all together. And then you win the hearts and minds of the jury or the judge or whoever it is based on the evidence. So I think, you know,</p><p>the counterpoint is actually that oral advocacy is going to be even more important, because the document review and the fact-finding are going to be largely automated within a few years. So I think that&rsquo;s my crystal ball: it&rsquo;s going to be even more important to have great oral advocacy, and lawyers are going to want to train up for those kinds of things, just as athletes prepare for the big race.</p><p>Marlene Gebauer (30:30)<br>
So it&rsquo;s going to be more about the lawyering.</p><p>Gregory Mostyn (30:32)<br>
Exactly.</p><p>Greg Lambert (30:33)<br>
All right, well, Greg Mostyn from Wexler, thank you very much for the conversation. I&rsquo;ve enjoyed this, and thanks for going off script with us a little bit.</p><p>Gregory Mostyn (30:43)<br>
No problem, love that.</p><p>Marlene Gebauer (30:45)<br>
And thanks to all of you listeners for listening to The Geek in Review. If you enjoy the show, share it with a colleague. We&rsquo;d love to hear from you, so reach out to us on LinkedIn and our Substack page.</p><p>Gregory Mostyn (30:56)<br>
Awesome.</p><p>Greg Lambert (30:56)<br>
And</p><p>Greg, where can the audience go to learn more about Wexler or about you?</p><p>Gregory Mostyn (31:04)<br>
So wexler.ai, you can see it there in our office. Yeah, wexler.ai, very easy to remember. Maybe I did, maybe I did. Exactly. So yeah, head there, you can book a demo, you can book a chat with me. Otherwise, if you&rsquo;re on LinkedIn, I&rsquo;m always happy to chat about anything, really. So yeah.</p><p>Marlene Gebauer (31:09)<br>
Did you do that on purpose? Just kidding. It&rsquo;s like, yes, I did.</p><p>Greg Lambert (31:11)<br>
He&rsquo;s got a good marketing person.</p><p>Marlene Gebauer (31:24)<br>
Terrific. Well, thank you again. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry. And bye, everybody.</p><p>&nbsp;</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>Texas Trailblazers and the Hard Truth About AI in Legal Work</title>
		<link>https://www.geeklawblog.com/2026/03/texas-trailblazers-and-the-hard-truth-about-ai-in-legal-work.html</link>
		
		
		<pubDate>Mon, 30 Mar 2026 01:10:43 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[AI in legal practice]]></category>
		<category><![CDATA[alternative fee arrangements]]></category>
		<category><![CDATA[Law firm strategy]]></category>
		<category><![CDATA[Legal Innovation]]></category>
		<category><![CDATA[legal ops]]></category>
		<category><![CDATA[legal technology podcast]]></category>
		<category><![CDATA[podcast]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19233</guid>

					<description><![CDATA[The latest episode of The Geek in Review finds Greg Lambert and Marlene Gebauer back from Dallas with a sharp, grounded recap of the Texas Trailblazers conference, an event that stayed close to the daily realities of legal work instead of drifting into glossy predictions. Their conversation centers on a legal industry trying to sort... <a href="https://www.geeklawblog.com/2026/03/texas-trailblazers-and-the-hard-truth-about-ai-in-legal-work.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>The latest episode of <em>The Geek in Review</em> finds Greg Lambert and Marlene Gebauer back from Dallas with a sharp, grounded recap of the Texas Trailblazers conference, an event that stayed close to the daily realities of legal work instead of drifting into glossy predictions. Their conversation centers on a legal industry trying to sort out what AI means right now, in billing, workflow, training, pricing, governance, and client expectations. What stands out most is the hosts&rsquo; focus on the practical tension between what the tools are capable of and what law firms and legal departments are structurally ready to absorb.</p><p>A major thread in the discussion is the risk of what one speaker called &ldquo;cognitive surrender,&rdquo; the habit of trusting AI output too quickly and handing off too much human judgment in the process. Greg and Marlene treat this as less of a software issue and more of a workflow and education issue. The point is not whether AI produces polished work. The point is whether organizations are building systems where review, judgment, and accountability still sit with people. Their conversation ties this concern to legal practice, education, and even K-12 learning, showing how widespread the temptation has become to accept fluent output without enough friction or scrutiny.</p><p>The episode also takes a hard look at the pressure AI is putting on the billable hour. Marlene frames the issue well when she notes that AI does not kill the billable hour so much as expose its weaknesses. Across the conference, the hosts heard repeated concern about the mismatch between efficiency gains and the financial structures law firms still rely on. If AI reduces the time needed for many tasks, then firms, associates, pricing teams, and clients all have new incentives to sort through. 
Greg and Marlene highlight the awkward moment the industry is in, where firms want to talk about value while clients are also eyeing the chance to pay less for faster work. The result is a growing need for honest conversations about pricing, outcomes, and what legal value should mean when time is no longer the cleanest measure.</p><p>What gives the episode its energy is the number of concrete examples pulled from the conference. The hosts discuss lower-cost multi-state surveys, large-scale analysis of rights-of-way documents, and internal workflow improvements built with existing tools like SharePoint and Copilot on little or no budget. These stories show AI not as abstract promise, but as a way to get work done that used to be too expensive, too tedious, or too slow to tackle at all. At the same time, Greg and Marlene stay skeptical in the right places, especially when the conversation turns to legal research, citation accuracy, and the idea that technology vendors have somehow solved problems that law librarians and researchers know are stubbornly difficult.</p><p>By the end of the episode, the biggest takeaway is not that the legal industry has a clear answer, but that waiting for certainty is no longer a serious option. Greg and Marlene come away from Texas Trailblazers with a sense that real progress is happening through testing, discussion, and repeated adjustment, not through perfect plans. Their recap captures an industry in transition, one where law firms, legal ops teams, vendors, and clients are all feeling the strain between old business models and new technical possibilities. 
The message is simple and urgent: start the conversations now, use the tools now, and get honest about what must change before the gap between what is possible and what is workable gets even wider.</p><p data-start="1979" data-end="2573"><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>Listen on mobile platforms:&nbsp;&nbsp;</strong></span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://podcasts.apple.com/us/podcast/the-geek-in-review/id1401505293" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Apple Podcasts&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span>&#8288;</a><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>&nbsp;|&nbsp;&nbsp;</strong></span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://open.spotify.com/show/53J6BhUdH594oTMuGLvANo?si=XeoRDGhMTjulSEIEYNtZOw" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span>&#8288;</a><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&nbsp;|&nbsp;</span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://www.youtube.com/@thegeekinreview" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" 
data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span></a>&nbsp;|&nbsp;<a href="https://thegeekinreview.substack.com/">Substack</a></p><p><span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">[Special Thanks to&nbsp;</span></span><a class="Link-sc-k8gsk-0 feDGbw e-9652-text-link sc-jWfcXB gQGioO" href="https://www.legaltechnologyhub.com/" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">Legal Technology Hub</span></span>&#8288;</a><span data-slate-node="text" data-slate-fragment="JTVCJTdCJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmNoaWxkcmVuJTIyJTNBJTVCJTdCJTIydGV4dCUyMiUzQSUyMiU1QlNwZWNpYWwlMjBUaGFua3MlMjB0byUyMCUyMiU3RCUyQyU3QiUyMnR5cGUlMjIlM0ElMjJsaW5rJTIyJTJDJTIydXJsJTIyJTNBJTIyaHR0cHMlM0ElMkYlMkZ3d3cubGVnYWx0ZWNobm9sb2d5aHViLmNvbSUyMiUyQyUyMnRhcmdldCUyMiUzQSUyMl9ibGFuayUyMiUyQyUyMnJlbCUyMiUzQSUyMnVnYyUyMG5vb3BlbmVyJTIwbm9yZWZlcnJlciUyMiUyQyUyMmNoaWxkcmVuJTIyJTNBJTVCJTdCJTIydGV4dCUyMiUzQSUyMkxlZ2FsJTIwVGVjaG5vbG9neSUyMEh1YiUyMiU3RCU1RCU3RCUyQyU3QiUyMnRleHQlMjIlM0ElMjIlMjBmb3IlMjB0aGVpciUyMHNwb25zb3JpbmclMjB0aGlzJTIwZXBpc29kZS4lNUQlMjIlN0QlNUQlN0QlNUQ="><span class="sc-iAJcmt kMXkFi" data-slate-leaf="true">&nbsp;for their sponsoring this episode.]</span></span></p><p><iframe title="Spotify Embed: Texas Trailblazers and the Hard Truth About AI in Legal Work" style="border-radius: 12px" width="100%" height="152" frameborder="0" allowfullscreen allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy" src="https://open.spotify.com/embed/episode/3vKoB29eLjOJo7eVn3p3f5?si=--C3Dj0qT2Wte-DtZRz6EA&amp;utm_source=oembed"></iframe></p><p><a href="https://www.youtube.com/watch?v=cASzex2I2Eo"><img style=" max-width: 100%; height: auto; " 
src="https://www.geeklawblog.com/wp-content/uploads/sites/528/embed_thumbs/cASzex2I2Eo.png"></a></p><h5><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;<strong>Email</strong>: geekinreviewpodcast@gmail.com<br>
</span></span><span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true"><strong>Music</strong>:&nbsp;</span></span><a class="Link-sc-k8gsk-0 hWIoWL sc-fyvmDH bJYlMc" href="https://jerrydaviddecicca.bandcamp.com/" data-slate-node="element" data-slate-inline="true" data-encore-id="textLink">&#8288;<span data-slate-node="text"><span class="sc-eLPDLy DyQdi" data-slate-leaf="true">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Jerry David DeCicca&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</span></span></a></h5><h5>Transcript:</h5><p><span id="more-19233"></span></p><p>Marlene Gebauer (00:00)<br>
Hi, I&rsquo;m Marlene Gebauer from The Geek in Review and I have Nikki Shaver from Legal Technology Hub here. And Nikki, you&rsquo;re going to tell us a little bit about some events in London, right?</p><p>Nikki (00:10)<br>
That&rsquo;s exactly right. So all of your many listeners, Marlene and Greg, in London town have a delight coming to them. We are bringing our LTH Velocity and Horizons conferences over the pond. We held our flagship Horizons event there last year as well, and it was really successful and sold out. And we&rsquo;re bringing that one back and also adding in Velocity, which is our conference for vendors. This, as you know, is a time when people are still learning,</p><p>and there&rsquo;s so much happening every single day. It seems like there&rsquo;s a lot to figure out, and that&rsquo;s as much on the vendor side as it is on the law firm side. Often vendors don&rsquo;t really have a sense of community. There are not many events that cater specifically to vendors, but Velocity is exactly that. It is an event for any legal tech vendor, enabling them to come along for free to a day of very high quality content put on by Legal Tech Hub</p><p>and a range of speakers from the London region, talking about things like how to prepare your company for exit, hearing from investors on what they&rsquo;re looking for in the market, what the impact of the Anthropic entry, or perceived market entry, was on the legal market, and what buyers want to see from vendors and how you can sell better to law firms. So I highly recommend that one. And the following day, April 24th, we are having</p><p>our Horizons event, which is an event predominantly for buy-side law firms and corporate legal departments on all of the major topics that are of particular interest today, around things like the increasing complexity of agentic systems and orchestration layers, how to handle, again, the Anthropic market entry and what that might mean, and whether vibe coding is actually the future of legal AI. So we are really excited. Again, those</p><p>dates are April 23rd for Velocity, April 24th for Horizons. 
We welcome anyone who lives in and about London, but also anyone visiting the area. There is still time to sign up. Velocity is free, and the Horizons price is kept deliberately low because we really want to have these high quality discussions and we&rsquo;d love for you to be able to bring your team. So go to legaltechnologyhub.com and use the Events drop-down on our top menu. It will</p><p>take you to the LTH events page, where you can find out more and sign up for our events. Thanks Marlene.</p><p>Marlene Gebauer (02:47)<br>
You&rsquo;re welcome. That sounds like a great couple of events.</p><p>Nikki (02:50)<br>
It should be.</p><p>Marlene Gebauer (03:01)<br>
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal profession. I&rsquo;m Marlene Gebauer.</p><p>Greg Lambert (03:07)<br>
And I&rsquo;m Greg Lambert, and this week Marlene and I got back from Dallas and we wanted to do a kind of recap on an event organized by Cosmonauts and LegalOps.com. Joy Heath Rush from ILTA chaired day one, and it really focused mostly on private practice, and then Connie Brenton came in on day two from LegalOps.com and talked about the in-house community.</p><p>Marlene Gebauer (03:35)<br>
And I mean, I feel this conference, Texas Trailblazers, stood out because, you know, it stayed really close to the work. So you didn&rsquo;t have a lot of, you know, future gazing and what&rsquo;s next. It was more about sort of the day-to-day soup of legal practice, you know, where AI is actually helping and where the economics are being impacted.</p><p>Greg Lambert (03:55)<br>
Yeah, it&rsquo;s hard enough right now just to figure out where we are now without having to go, well, where are we going to be in six to 12 months? So I thought it was really good to kind of focus in on where we are. And, you know, day one, with the law firm focus, kicked off with, well, Joy Heath Rush, who talked about</p><p>Marlene Gebauer (04:02)<br>
you</p><p>Greg Lambert (04:21)<br>
where she was seeing the industry currently. And then Ron McNamee from Mary Technology introduced us to a term which I&rsquo;ve seen floating around now, called cognitive surrender.</p><p>Marlene Gebauer (04:34)<br>
Not to be confused</p><p>with Cheap Trick&rsquo;s &ldquo;Surrender.&rdquo; That&rsquo;s a whole different surrender.</p><p>Greg Lambert (04:37)<br>
Yeah, that&rsquo;s a whole different cognitive surrender</p><p>song there. But it&rsquo;s basically where you are over-trusting the AI and you&rsquo;re offloading that cognitive load to the AI. And I know that we talk about, well, just review what the AI puts out, but&hellip;</p><p>you know, if everyone&rsquo;s being honest with themselves, there&rsquo;s a lot of times where we just kind of take what the AI hands us and say, that looks great. So I like that. Let&rsquo;s talk about that. Marlene, do you see, looking back at that keynote, that it&rsquo;s not so much that firms have a technology problem as that we&rsquo;ve got a workflow design problem that</p><p>Marlene Gebauer (05:03)<br>
You</p><p>Greg Lambert (05:22)<br>
kind of makes the attorneys and others not offload that work completely to the AI.</p><p>Marlene Gebauer (05:32)<br>
Exactly. This is exactly where we talk about, you know, making sure there&rsquo;s a human in the loop and sort of building that into any type of workflow or review that&rsquo;s happening using AI. Because again, it sounds so good. It&rsquo;s just really easy to kind of say, yeah, you know, just use it. But I think the smart organizations are basically incorporating this kind of education</p><p>into their education programs for people who are using this. They incorporate it into their governance plans, so that there is always human review. Everything is AI-assisted, not AI done completely. People build it in and even bring in the people who are responsible to say, if you&rsquo;re building a workflow, where does this make sense? Where will&hellip;</p><p>Where is there a need for human review, and at which points of this action, and just making sure that everybody who does that is aware of it.</p><p>Greg Lambert (06:31)<br>
Yeah, I was talking about this with my wife on the way in to work, and she&rsquo;s an elementary school librarian. And she kind of had that thousand-yard stare in her eye. And I was like, well, what&rsquo;s going on? She&rsquo;s like, my god, we&rsquo;re having the same problem with teaching the children not to completely rely upon</p><p>the AI, and it&rsquo;s really changing the way that we&rsquo;re educating. I always tease her that, you know, there&rsquo;s a lot of similarities between law firm lawyers and elementary and middle school children. But I think this is something that, you know, as we&rsquo;re developing the systems,</p><p>we have to be careful that it doesn&rsquo;t look like we&rsquo;re just giving them the answer, and that&rsquo;s gonna be really, really hard to do, especially as the tools get better and better.</p><p>Marlene Gebauer (07:21)<br>
It&rsquo;s a lift. It&rsquo;s not,</p><p>it&rsquo;s not a complete solution. You know, it helps a little bit, but it&rsquo;s not the end solution. And I think there&rsquo;s also a similarity to the population in general, and what you&rsquo;re saying is that you have some folks that are just like, no, no, no, I don&rsquo;t want to use this, I don&rsquo;t trust it. And then you have other people who are just sort of jumping into the deep end of the pool before learning how to swim. You know, I know, like,</p><p>one son is in high school, and it makes for quite interesting conversations at home because he is adamant it should not be used. And I&rsquo;m like, this is my job, man. So, yeah, I think that there&rsquo;s a lot of similarities across the board in what they were talking about.</p><p>Greg Lambert (08:08)<br>
Yeah, yeah, well, we thought, you know, filing hallucinated cases was a problem. I think that we&rsquo;ve got other problems that are coming down the pike. All right, well&hellip;</p><p>Marlene Gebauer (08:14)<br>
Over 800 now. Over 100 in the US now. Every day.</p><p>Greg Lambert (08:20)<br>
Another panel had Kelly Lugo from BCLP, and then there was another one with Chad Barton from Holland &amp; Knight, and we, you know, kind of mentioned the associate problem and the incentives for using the AI and whether or not, you know, things are really inverted. And so there was an automation-in-action panel</p><p>where they&rsquo;re asking, are firms really finally kind of getting honest about how partner compensation and associate billing targets either accelerate or kill AI adoption? I thought it was a great conversation that everyone was having about how do you convince an associate that they need to learn the AI</p><p>if you&rsquo;re also hearing things about, this can take 50% of the associate&rsquo;s work away? What&rsquo;s the incentive for them to do this?</p><p>Marlene Gebauer (09:26)<br>
Yeah, it&rsquo;s funny. One of the things I said on the panel I was on was, you know, AI doesn&rsquo;t kill the billable hour, but it does expose its limits. And I think this is one of those times. I think we&rsquo;re really kind of at a crossroads where the smart firms are really going to have to start looking closely at, you know, where AI is having an impact, and get with their pricing teams and</p><p>figure out how they&rsquo;re going to do this. Because, you know, AI is something that makes you efficient, and that&rsquo;s kind of in direct opposition to the billable hour. So while there may remain things that make sense for hourly billing, the smart firms really have to start looking at this and at pricing, and talking to their clients about, you know,</p><p>what&rsquo;s valuable, what&rsquo;s value worth. Because otherwise you come up against this type of brick wall that we&rsquo;ve had for many years, where anything that&rsquo;s making work efficient, no one wants to adopt it because it&rsquo;s just going to, you know, hurt their pocketbook in the end. And I mean, things like innovation hours are great and they count, but you know,</p><p>Greg Lambert (10:44)<br>
Yeah.</p><p>Marlene Gebauer (10:53)<br>
In the end, you know, you&rsquo;re still kind of at odds.</p><p>Greg Lambert (10:58)<br>
Yeah, the one thing that I heard on both days was a reference to the, hopefully I say this right, the Jevons paradox, where the more efficient and cheaper something gets,</p><p>the more people actually use it. And I think people are looking at legal services this way: there&rsquo;s a lot of pent-up demand, and I think clients would love for law firms to do more work for them, but the cost and time is prohibitive. If you can get the,</p><p>you know, cost and time down, does that kick in the Jevons paradox to say there would just be more demand? And even if that happens, I think the thought process that we need to start working through is, okay, well, how do we deal with that? Because right now I actually had someone ask me, you know, are you&hellip;</p><p>looking at reducing the number of associates that you hire over the next few years? And so, obviously, I think the industry&rsquo;s gonna change. I think the ability to just purely bill by time and set your value by time is obviously gonna change. I just don&rsquo;t know that law firms are thinking seriously</p><p>Marlene Gebauer (12:05)<br>
Yeah, that was a question that was put out there. Yeah.</p><p>Greg Lambert (12:28)<br>
about what that really means and how we kind of prepare ourselves for what&rsquo;s coming.</p><p>Marlene Gebauer (12:36)<br>
And I think it needs to be quick. I mean, that&rsquo;s what we were hearing: this needs to be quick. There wasn&rsquo;t really discussion about what type of work we&rsquo;re going to sort of bring back in-house, but, you know, I know those conversations are happening and I know in-house groups are looking at tools so they can take some of this work back in. So, you know, firms are</p><p>really poised. I think it&rsquo;s a really important moment right now for firms to really kind of jump on this and be able to sort of sell the fact that, you know, we&rsquo;ve invested in all of these things and we can do these things for you, and sort of take the load off of you.</p><p>Greg Lambert (13:22)<br>
Yeah, yeah. And kind of in that same vein, Kyle Poe from Legora gave both a standalone presentation and he was on the panel with me. And he talked about this, you know, that the conversation in 2026 and going forward shifts from</p><p>how fast is this tool, how fast can it get me to an answer, to the question of, okay, now how do we price this new value that we&rsquo;re setting up? And it&rsquo;s definitely a very difficult question that law firms, pricing professionals, and in-house teams are having to face. Because Kyle, I think in the same</p><p>instance, also talked about how law firms now would love to go to flat-fee rates that work for them, so that we can really kind of focus in on the efficiency. And now he&rsquo;s hearing clients saying, whoa, if it&rsquo;s going to be fewer hours, maybe I just want to pay by the hour and I just pay you less. So it&rsquo;s interesting to see where</p><p>these two competing factors are going to meet in the next couple of years.</p><p>Marlene Gebauer (14:41)<br>
Yeah, a couple things. I mean, he&rsquo;s saying we&rsquo;re moving from how fast is the tool to how do we price the new value? So clearly we&rsquo;re starting to move from, you know, how do we experiment with this thing, how does it work, to, okay, how is it truly impacting work and how is it impacting the price of work? The other question,</p><p>and it kind of goes to his point about, maybe we&rsquo;ll just pay hourly and just pay less hourly: one of the things that came up was that the value is still there. It&rsquo;s just the time that&rsquo;s spent is less. So how do we figure out what value actually means? And, you know, is it just the time spent? Is it the outcome?</p><p>Greg Lambert (15:23)<br>
Yeah.</p><p>Marlene Gebauer (15:33)<br>
You know, is it the fact that you get insights faster, you resolve things faster, and everyone gets back to normal work? You know, all of those things have a price point. So it&rsquo;s, you know, it&rsquo;s an opportune time, I think, for clients and firms to really talk about it, like have an honest conversation about it, you know.</p><p>No more, like, Arlene discount. Let&rsquo;s really talk about, what is important to you? What do we need? Yeah.</p><p>Greg Lambert (16:05)<br>
Well, it was interesting because we moved from day one, focused on the way law firms are viewing this, and then we moved to day two. And I will say one thing that was talked about a lot, and I think Joy mentioned this on day one and I think it was mentioned as well on day two with Connie,</p><p>was that there&rsquo;s a lot of power in the hands of the in-house lawyers and general counsel of the clients. And it wasn&rsquo;t until I was reviewing my notes and drafting the recap on the Substack page that I realized there were a lot of&hellip;</p><p>There were not a lot of in-house attorneys that showed up for day two. It was a large crowd, but there were lots of legal ops people. I think there were still a number of law firm people that were there, but at least on the panels and in the discussion that was coming from the stage,</p><p>it was a noticeable absence of the in-house attorneys on day two. And so you&rsquo;ve got this belief that there&rsquo;s a lot of power in the hands of these people, and they weren&rsquo;t there. It was kind of weird.</p><p>Marlene Gebauer (17:28)<br>
You know, I hear mixed things. It&rsquo;s like, I hear, oh, you know, clients are very far ahead in this area. And then I hear clients are not far ahead in this area. So I think it would have been really, really helpful to me to be able to sort of talk to some of these people and be like, hey, you know, so where are you guys in your journey? And, you know, get some actual feedback on that.</p><p>Greg Lambert (17:53)<br>
Yeah, well, if you are an in-house attorney and you&rsquo;re listening to this, yeah, come to the table. Come on. People are begging to hear from you. They really want direction. So it was good to see Connie kick things off on day two, and she had a really good opening</p><p>Marlene Gebauer (17:59)<br>
Come talk to us. We want to hear from you.</p><p>You really are.</p><p>Greg Lambert (18:21)<br>
discussion where she really kind of did put the impetus on, you know, it is time for people to stop talking and time for some real transformation to happen. And that kicked off with John LeBare, who&rsquo;s general counsel at Harvey, and he had an interesting talk. He also brought up the Jevons paradox and kind of where Harvey,</p><p>you know, where Harvey is and how he&rsquo;s using AI tools in his day to day. Now, there was something, and to his credit, he did say he was going to test this out on a non-Silicon Valley audience. And I, for one, didn&rsquo;t get a chance because he immediately took off back to California after his talk, but he compared</p><p>lawyers to software engineers. And he said he did want to see how this went over with this group. So John, hopefully you&rsquo;re listening, and let me give you my opinion. And I thought, and I talked with Joy Heath Rush afterwards as well, and she did too, that he got</p><p>to the point, but the way that he got to the point didn&rsquo;t really land. That was it, exactly. So he was saying essentially that lawyers and computer software engineers do the same type of work. They get to an output.</p><p>Marlene Gebauer (19:43)<br>
Agreed with the end result, but didn&rsquo;t agree with the analysis. Yes.</p><p>Greg Lambert (20:05)<br>
And, you know, I really kind of thought it was a good try, but of the outputs that computer engineers have and the outputs that lawyers have, one can be independently validated for correctness. Software code either works or it doesn&rsquo;t work. Whereas, yeah, it&rsquo;s a product.</p><p>Marlene Gebauer (20:12)<br>
Tell lawyers that.</p><p>It&rsquo;s a product. You&rsquo;re</p><p>making something work or not work.</p><p>Greg Lambert (20:31)<br>
Yeah. So, I thought, because I know what he was doing, and you hear this a lot, and it almost doesn&rsquo;t matter what industry it is: that AI has solved for computer code. That seems to be an assumption in the industry; you hear from Anthropic that their coders don&rsquo;t even look at the code anymore.</p><p>They look at the results, and if the results work, then there&rsquo;s no need to edit the code. And so with engineers, they&rsquo;re getting to this point now where their jobs have shifted from the writing of the code to more the coordination of everything that&rsquo;s going on and making sure that everyone is</p><p>on track for what it is that they need to accomplish. And so the engineers&rsquo; jobs have changed. And it&rsquo;s really interesting because product managers now see themselves almost as quasi-engineers, and engineers now see themselves as quasi-product managers. There&rsquo;s this kind of, what I&rsquo;ve heard, you hear T-shaped lawyer, right? Where you&rsquo;re broad across the board on multiple skills and then you could dive deep,</p><p>you know, in one skill. And I&rsquo;ve heard it as, you know, the kind of sideways E or F, where you&rsquo;re still broad in multiple areas and then you&rsquo;re deep in one or two or three areas. And that&rsquo;s going to be kind of the new normal. And I think that part is going to transfer over to legal as well, to where, yeah, you can be a really good</p><p>Marlene Gebauer (21:45)<br>
to the legal analysis,</p><p>There&rsquo;s the T, there&rsquo;s the D, yeah.</p><p>Greg Lambert (22:12)<br>
a corporate lawyer, but you&rsquo;re also gonna have to be knowledgeable in one or two more areas, especially in using the technology that you have.</p><p>Marlene Gebauer (22:22)<br>
Just being a good legal expert, you know, basically giving legal advice, we are already seeing that that is not enough. I mean, you have to know your client&rsquo;s business. You have to understand the technology you&rsquo;re using, all of those things. I mean, I agree with him, and you know, we talked to Joy about this, that engineers and lawyers are both problem solvers. They&rsquo;re both critical thinkers.</p><p>But, you know, as you said before, when you&rsquo;re doing code, you&rsquo;re trying to make something work. Now, your client can come to you and say, you know, I want this result, but a lot of that is out of your hands. It&rsquo;s like, I want a deal to come out like this. Well, there&rsquo;s a negotiation process that takes place, and&hellip;</p><p>or, I want a specific outcome in litigation. It&rsquo;s like, well, there&rsquo;s a million factors that could impact that: your jurisdiction, your judge, if you have a jury, how impressive your argument is, how strong your argument is. So there&rsquo;s just a lot of things that I think, when you&rsquo;re offering a service, are a little different.</p><p>Greg Lambert (23:31)<br>
Yeah, I agree. So we&rsquo;ll just ask John to sharpen his pencil and then come back with another analogy, at least outside of Silicon Valley. So one of the things from day two that I thought was really interesting, and there were some in-house lawyers on this, but it was more on operations than it was on actual practice,</p><p>Marlene Gebauer (23:40)<br>
Yeah.</p><p>Greg Lambert (23:56)<br>
was the practical use cases. And there were some really good actual use cases that were talked about. Justin Schwartz from Eparoc talked about one of my favorite topics, the 50-state survey, or as he referred to it, just the multi-state survey, where he asked his outside counsel if&hellip;</p><p>Marlene Gebauer (24:11)<br>
Ha</p><p>Greg Lambert (24:18)<br>
if they used AI, what would a $10,000 multi-state survey cost? And he said, one, the outside counsel was surprised that they were being allowed to use the AI to do this, because other clients refused to let them do it. And so they came back, and he said it went from a $10,000 cost to a $5,000 cost.</p><p>And I think both sides were very happy to do that. I think 50-state surveys are one of those things that clients want, that they need, especially if they&rsquo;re in regulatory, but they&rsquo;re super expensive. And quite frankly, it&rsquo;s not the favorite thing for lawyers to do. They don&rsquo;t like pulling these things together either. So&hellip;</p><p>Marlene Gebauer (24:56)<br>
Regulatory, yeah.</p><p>I don&rsquo;t think we liked</p><p>doing it when we had to do it in the library. A lot of work.</p><p>Greg Lambert (25:12)<br>
I do not.</p><p>So I thought that was really interesting: again, they talked to each other. The client said, okay, I want this, but I don&rsquo;t want to pay that much for it. And the firm says, if you&rsquo;re going to make us do it the old way, that&rsquo;s what it costs. If you let us do it the new way, then we&rsquo;re happy</p><p>with half of that cost, because it probably takes us half or less of that time to do it.</p><p>Marlene Gebauer (25:46)<br>
That was what I liked about it too, the fact that they kind of had that conversation. I think it also opens up the opportunity to address work that just never got done because it just didn&rsquo;t make economic sense. And so if this is a pain point for a client,</p><p>like these 50-state surveys, now you have an opportunity. I mean, the time that it takes now, compared to when we used to do it, when it took days. And I mean, that was a while ago, so it probably didn&rsquo;t take days, you know, in the comparison now, but still, it took a lot more time. And now this is something that they can probably offer on a regular basis and an updated basis, whereas maybe this wasn&rsquo;t done all the time</p><p>before, because the cost was in the way. But, you know, any of these types of things where we have broadened our abilities based on AI, that&rsquo;s definitely a conversation to have between clients and firms, to just see, where can we do more?</p><p>Greg Lambert (26:52)<br>
Yeah, one other example that I wanted to highlight, because I think this is something that expands a little bit on the multi-state surveys: Michael from Phillips 66 shared how their company has all of these rights of way, and I think he was saying there were like</p><p>over 250,000 of these, and they had documents that explain what the rights of way were. And what was interesting, and I don&rsquo;t know if you caught it if you weren&rsquo;t listening closely, he was basically saying, this is work that we would love to have done, but there&rsquo;s no way we can do it. And so they actually took all of their&hellip;</p><p>Marlene Gebauer (27:20)<br>
So the gas can get to the pumps and stuff, right?</p><p>Greg Lambert (27:43)<br>
documents that showed the rights of way and uploaded them into the AI system and extracted all of that information. And he said it saved them well over a million dollars in cost to do that. But the key thing was, they probably wouldn&rsquo;t even have done it beforehand because the cost was so high. So now they at least know what their risks are</p><p>and were able to get to that in an easy way. So this is work that wasn&rsquo;t getting done because of cost, and now it is getting done. So I think it just shows an example of what all you can do if it&rsquo;s cheap enough.</p><p>Marlene Gebauer (28:24)<br>
Either, you know, not getting done or getting done piecemeal on an as-needed basis. And, you know, what a wonderful example of being able to extract information, like trends or different types of language and what&rsquo;s happening over time, and being able to see, okay, what should we be paying attention to? What are the risk factors here, or what do we need to change? And then being able to sort of change that en masse.</p><p>Greg Lambert (28:52)<br>
Yeah, yeah. And the last one I wanted to highlight was Elizabeth Poole from Boomi.</p><p>She had some really good examples, but the one that was close to my heart was that she developed a way of taking the intake system and revamping it using a combination of SharePoint and Copilot. And she said her budget to do this</p><p>was zero. You know, that&rsquo;s the ultimate, you know, this is the vibe coding example. Yeah. Well, you know, necessity is the mother of invention. And when you&rsquo;ve got no budget, you&rsquo;ve got all kinds of necessity.</p><p>Marlene Gebauer (29:29)<br>
That&rsquo;s the ultimate incentivizer. It&rsquo;s like, figure out something, I have no money.</p><p>Mm-hmm.</p><p>And I love these types of examples where it&rsquo;s like, you know, I had a matchbox and a rubber band and, you know, I made something amazing. So you just basically use what you have. And it&rsquo;s just a real example of the creative abilities of people and organizations, you know, when they&rsquo;re</p><p>when they&rsquo;re kind of tech curious and saying, okay, we have what we have; how can we make this work to solve the problem?</p><p>Greg Lambert (30:12)<br>
as long as they stay tech curious and not tech furious. Sorry, a little Scott Pilgrim. All right, there is nothing wrong with that. Moving on to another panel, or actually this was a standalone presentation that our friend&hellip;</p><p>Marlene Gebauer (30:16)<br>
No Tech Furious,</p><p>Nothing wrong with referencing Scott Pilgrim.</p><p>Greg Lambert (30:33)<br>
multi-time guest on The Geek in Review, Christina Sikounis from CounselLink, got up and talked about the upcoming report she&rsquo;s putting out on the industry and pricing and how much clients are paying.</p><p>Marlene Gebauer (30:38)<br>
I love that she was there.</p><p>Greg Lambert (30:56)<br>
I had to feel sorry for her, because she&rsquo;s been on, I think, since about 2020. So maybe the past six years, on and off. And I always see this hope in her eye, and then I ask about the flat rate and alternative fees. It&rsquo;s like, Christina, where are we this year? And she&rsquo;s like,</p><p>Marlene Gebauer (31:09)<br>
Ha ha ha ha ha!</p><p>Someone&rsquo;s gonna.</p><p>Greg Lambert (31:18)<br>
I thought we were going to go up, but here we are again, right at about 10% for AFAs. Yeah, come on. Come on. She&rsquo;s super nice. Make Christina happy. But she was talking about the fact that while AI may be making production cheaper, there&rsquo;s a new normal in fee rates.</p><p>Marlene Gebauer (31:18)<br>
Sorry.</p><p>All right, guys, we got to make it work for Christina. Everybody, everybody chip in. We want to make her smile.</p><p>No.</p><p>Greg Lambert (31:43)<br>
raising them. And she pointed out that before 2022, on average, the rate increases, and this is a blended number based on how they measure it, were around 3%. She said from about 2022 on, that has jumped to 5%. And I know I looked around, because people were like, that seems low, because I think we were more like at 10%.</p><p>And she mentioned that, again, the way they measure it, it came out to that. But it was still a huge increase. Going from 3% to 5% is large. I think if my math is right, that&rsquo;s about a 67% jump in the annual rate hikes clients were paying. And one of the things an audience member asked her is, well, in two or three years, what&rsquo;s your next prediction? And she said, well, if we can normalize 5%, there&rsquo;s gonna be a push to normalize 7% or more, and it&rsquo;s gonna be interesting.</p><p>Marlene Gebauer (32:34)<br>
It&rsquo;s really interesting based on what we were just talking about before. Okay, are we really going to have these heartfelt conversations about what value is, and are we really going to try and adjust this model in a way that makes sense now? I mean, the billable hour made sense years and years ago because it was a real easy way to measure. But AI has come and disrupted that.</p><p>You know, it&rsquo;s not a good way to measure anymore, but yet everybody&rsquo;s so invested in it. So are we going to have those conversations? Are we going to make those adjustments to really reflect what the value is and what the cost should be? Or are we just gonna up rates and then spend less time? You know, I don&rsquo;t know.</p><p>Greg Lambert (33:33)<br>
Yeah, yeah. Another panel, which was really interesting because they had people from Google, AT&amp;T, and Stryker, brought up an issue about the challenge of adding AI into the process without making the existing tech stacks unnecessarily complex.</p><p>And I think one of the things that a lot of us, especially on our side of things, Marlene, or on the legal ops side, worry about is that if we start allowing individuals to create their own software to correct the one problem that they&rsquo;re facing, how do you manage that? How do you govern that? I mean, you don&rsquo;t&hellip;</p><p>You know, you don&rsquo;t want to tamp down the creativity, but man, it could get complex quick.</p><p>Marlene Gebauer (34:23)<br>
real questions.</p><p>I mean, you know, we&rsquo;re having enough trouble with governance just with all of the changes that are happening. And I think we&rsquo;re finally at a fairly decent point where we have our arms around it a little bit, though there are still questions to be answered. But can you imagine? Every time somebody builds some small solution for a small practice group or an individual, are they going to have to go through an approval process like we have to do with just general use of these AI tools?</p><p>And I&rsquo;m 100% with you on unnecessary complexity. As long as it&rsquo;s complex, people don&rsquo;t adopt it. If it&rsquo;s easy, people do. And it&rsquo;s the same thing with your tech stacks. If it&rsquo;s easy to bring in, and maybe bring out, then there&rsquo;s much more appetite for it.</p><p>You know, so maybe this is where you see MCP coming in too.</p><p>Greg Lambert (35:28)<br>
Yeah, it was kind of interesting, because we almost came full circle. Richard Gorelick from ChronoTracer out of Austin had this warning about the Lawyer Freakout of 2027. And he brought up the same issue that John LeBeer from Harvey did: okay, we&rsquo;re seeing substantial changes in certain industries like computer programming and computer engineering. We know it&rsquo;s going to happen in legal in some way, that things are going to significantly change.</p><p>And I think he even brought up the five stages of grief applied to AI, and that was not the first time I had heard that. It&rsquo;s like denial, and then you go through all the stages, and then finally acceptance at the end. But again, I think that talks more about the&hellip;</p><p>Marlene Gebauer (36:25)<br>
Yeah, I&rsquo;ve heard that too.</p><p>Sadness.</p><p>Greg Lambert (36:44)<br>
and I can bring back my favorite saying, which is, you know, all problems are communication problems. If you&rsquo;re not having those conversations now, to at least be prepared for things, you end up having that conversation when it&rsquo;s too late.</p><p>I think that&rsquo;s the advice that we all kind of walked away with. But if not, that&rsquo;s the advice that I&rsquo;m giving everybody now: if you&rsquo;re not having those conversations until it is an emergency, it&rsquo;s too late.</p><p>Marlene Gebauer (37:16)<br>
Yeah, I mean, it is absolutely critical that internally we&rsquo;re having these conversations with everybody who&rsquo;s impacted. And, you know, it&rsquo;s funny, I was on a webinar this morning that was talking about training and education, and part of the discussion was about how it&rsquo;s not just training people how to use things.</p><p>It goes much further than that. I mean, we have to be very, very transparent. We have to teach people about what it does and doesn&rsquo;t do. We have to teach the governance aspects of it and establish guardrails so people are comfortable with it. We have to be sharing use cases within departments and across departments, particularly the ones that have significant impact and are scalable and repeatable.</p><p>You&rsquo;re right, and he&rsquo;s right. It&rsquo;s coming. We all know it&rsquo;s coming. And so we have to do the best we can. And that&rsquo;s kind of what our jobs are: to prepare people and get them as comfortable as we can with it. And it&rsquo;s funny, I will say that sometimes, in my experience, the ones who are grumbling about it are often the ones who use it pretty well. So, you know, they get it.</p><p>So I have good hope for that. I think that, again, the fact that we&rsquo;re having the discussions right now makes a lot of difference.</p><p>Greg Lambert (38:40)<br>
Well, I want to do just a quick bonus topic here. There were a couple of presentations, one from Harry St. John from Legora.</p><p>Harry&rsquo;s the VP of Revenue. And the one that probably came out of left field the most was John Rizner, who&rsquo;s Head of AI Legal Drafting at Filevine. This one was like the dream for the librarian in me, because, one, Harry introduced something I think he was almost</p><p>Marlene Gebauer (38:59)<br>
Ha ha.</p><p>Greg Lambert (39:16)<br>
not going to introduce, but my hand shot up immediately when I saw it. And that is, Legora is introducing a legal research option on their platform, which is going to be very interesting, because I think we&rsquo;ve said this before: I think a lot of people think that&hellip;</p><p>AI is like a magic bullet for legal research. And I&rsquo;m here to tell you, if you haven&rsquo;t done legal research in a while, legal research is really, really hard to do. It&rsquo;s complicated, and it&rsquo;s hard to get right. So it&rsquo;s interesting to see that pop up, because I was actually talking with a venture capitalist, and</p><p>Marlene Gebauer (39:47)<br>
to get right.</p><p>Greg Lambert (39:59)<br>
they were asking, well, do you think the Harveys and Legoras are gonna go after legal research? And I was like, no, they may plug it in, but I don&rsquo;t see them going after it. I guess I was wrong.</p><p>Marlene Gebauer (40:07)<br>
Really? No, they are.</p><p>Why,</p><p>just for one second, I don&rsquo;t know why, but research for the longest time has sort of been like the holy grail for all of these AI companies. I don&rsquo;t know why. We&rsquo;ve had this conversation: why are you going after the hardest thing? It&rsquo;s like, go after the easy stuff first.</p><p>Greg Lambert (40:28)<br>
Yeah, well,</p><p>to their credit, you know, Harvey and Legora didn&rsquo;t attack it first like Westlaw and Lexis did. But, you know, for Westlaw and Lexis, that&rsquo;s their bread and butter. So.</p><p>Marlene Gebauer (40:38)<br>
Yeah, but that&rsquo;s</p><p>There are</p><p>other companies that have popped up, smaller companies. I mean, you know, Midpage being one of them, and Wexler, I think, is another one, where that&rsquo;s what they&rsquo;re focused on.</p><p>Greg Lambert (40:46)<br>
yeah.</p><p>Yeah, well&hellip;</p><p>I will tell you this: if legal research could be solved by just having the case law, then we&rsquo;d all be using Google Scholar for all of our work. And I&rsquo;m here to tell you, unless you&rsquo;re an academic and you just need access to the core cases, it&rsquo;s much more complicated than that if you&rsquo;re actually doing real live legal research for real live clients.</p><p>Marlene Gebauer (40:58)<br>
Yeah.</p><p>I agree. Agree.</p><p>Yeah.</p><p>Greg Lambert (41:17)<br>
But I did want to point out John Rizner from Filevine, because this was a law librarian&rsquo;s dream presentation. The Riz. My goodness.</p><p>Marlene Gebauer (41:17)<br>
It&rsquo;ll be in&hellip;</p><p>The Riz presentation and it was good. It was really good. I know both of us were like,</p><p>where did this come from? Right? It just popped up. It was awesome.</p><p>Greg Lambert (41:35)<br>
It was awesome. Yeah, I think this one was</p><p>the most out-of-left-field presentation, because he talked about, you know, we laugh when we tell these stories of lawyers who&hellip;</p><p>Marlene Gebauer (41:41)<br>
Yes.</p><p>People are going to hate this when you tell them. They&rsquo;re going to hate to hear this.</p><p>Greg Lambert (41:49)<br>
have cited to</p><p>hallucinated cases and have had bad citations in their documents. And I&rsquo;m here to tell you, that was happening before AI was a thing. People were putting in wrong citations, bad citations, using incorrect quotes and incorrect analysis. That is not an AI phenomenon.</p><p>Marlene Gebauer (41:53)<br>
Yes. Awful stuff.</p><p>It was.</p><p>Documents didn&rsquo;t have citations that they should have had, you know, it&rsquo;s like mismatched citations. Yeah. It was always happening.</p><p>Greg Lambert (42:17)<br>
But he had actually</p><p>gone through and done a really deep dive on American case law and on how much Shepard&rsquo;s and KeyCite and Bloomberg Law, the citator systems, you know, it&rsquo;s like they don&rsquo;t even agree with each other.</p><p>Marlene Gebauer (42:27)<br>
citators.</p><p>The bad news is they don&rsquo;t agree, and they have significant error rates, and it&rsquo;s like, great. It plays well into a conversation I&rsquo;ve been having with librarians, and that I was having with you. Because, again, it goes back to these hallucination cases. Now everybody&rsquo;s got their eyes on that. And so what is the appropriate way</p><p>to check your citations? Because not only do you have to check to make sure the citation is correct, you have to make sure that the content in the citation is correct. You know, and if it&rsquo;s a key number, that it&rsquo;s referring back to the right thing. And yes, all of these tools, if they&rsquo;re grounded appropriately, you can go back and look at the actual document. But we&rsquo;ve been trying to get away from that for years and years and years. That&rsquo;s what Shepard&rsquo;s was for. That&rsquo;s what WestCheck was for.</p><p>The idea was, well, you didn&rsquo;t have to do that. And now you find that they&rsquo;re wrong, and then this is also wrong. So it&rsquo;s like, what are we supposed to do?</p><p>Greg Lambert (43:38)<br>
Yeah.</p><p>Yeah, well,</p><p>you know, I think it&rsquo;s one more thing that&hellip;</p><p>Marlene Gebauer (43:44)<br>
Somebody tell me. No, this</p><p>is a serious question. I want to know.</p><p>Greg Lambert (43:48)<br>
Well, the death of the law librarian has been greatly exaggerated over the years. I think that&rsquo;s one more example of why it takes somebody really smart to keep on top of these vendors that think they&rsquo;ve created the easy button for legal research. Well, I thought it was a great conference.</p><p>Marlene Gebauer (44:08)<br>
Exactly.</p><p>Greg Lambert (44:13)<br>
What are your final thoughts on this one, Marlene?</p><p>Marlene Gebauer (44:18)<br>
You know, I think if there&rsquo;s one common thread, it&rsquo;s that nobody&rsquo;s waiting for perfect clarity anymore, and it would be a mistake if you do. People are building, they&rsquo;re testing in real time, and they&rsquo;re looking to find solutions that bring true value.</p><p>Greg Lambert (44:36)<br>
Yeah, well, and I think, you know, they&rsquo;re not waiting for it to be perfect before they start testing it. You&rsquo;re seeing that in the vendors too. You know, iteration is now the way. We used to laugh, you know, we used to go to AALL or ILTA to see what the big change was for the year. And now it&rsquo;s like every week there&rsquo;s a big change.</p><p>Marlene Gebauer (44:43)<br>
Yeah. Yeah.</p><p>And this is why horizon</p><p>scanning is so important nowadays. Because, again, you have to see what&rsquo;s out there, what&rsquo;s being built, and what sort of new angle is there that maybe isn&rsquo;t being addressed other places.</p><p>Greg Lambert (45:13)<br>
Yeah, yeah. And my takeaway, both from the conference and a lot of things that I&rsquo;m hearing not just across our industry but across society, is that there&rsquo;s lots of things that AI can do right now. But with our culture and our businesses and just the way that we&rsquo;re structured, there&rsquo;s a huge gulf between what&rsquo;s possible and what&rsquo;s doable.</p><p>And we&rsquo;re still trying to work out the doable while the possible just keeps going further and further away. So I think, just like we said earlier, the key right here is having these conversations now and working with the tools now.</p><p>Even with the warts and all, get in there and at least try to advance with the technology so that you don&rsquo;t get completely left behind, but also so that it gives you the time to digest what this means for you as an individual and for you within a profession, and where that goes. If you&rsquo;re not talking about it, if you&rsquo;re not working with the tools and experimenting and investigating, good luck to you.</p><p>Marlene Gebauer (46:32)<br>
Well, I think Texas Trailblazers gave us a great snapshot of the transitions that are happening right now. So thanks for listening, everybody, and we&rsquo;ll see you next week. And as always, the music here is from Jerry David DeCicca. Bye, everybody.</p><p>Greg Lambert (46:47)<br>
All right, thanks Jerry, bye.</p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
		<item>
		<title>What I Took Away from Texas Trailblazers</title>
		<link>https://www.geeklawblog.com/2026/03/what-i-took-away-from-texas-trailblazers.html</link>
		
		
		<pubDate>Fri, 27 Mar 2026 14:23:39 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[legal conference]]></category>
		<category><![CDATA[legal tech]]></category>
		<category><![CDATA[Texas Trailblazers]]></category>
		<guid isPermaLink="false">https://www.geeklawblog.com/?p=19228</guid>

					<description><![CDATA[(See Day Two Coverage for the In-House Programs over on The Geek in Review Substack page &#8211; GL) Day One of Texas Trailblazers in Dallas had a different tone than most legal tech conferences I attend. The conversations stayed close to the work. Less speculation, more discussion about what people are doing right now, where... <a href="https://www.geeklawblog.com/2026/03/what-i-took-away-from-texas-trailblazers.html">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p><strong>(See Day Two Coverage for the In-House Programs over on <a href="https://thegeekinreview.substack.com/p/texas-trailblazers-day-two-where">The Geek in Review Substack page</a> &ndash; GL)</strong></p><p>Day One of <a href="https://www.texas-trailblazers.com/">Texas Trailblazers</a> in Dallas had a different tone than most legal tech conferences I attend. The conversations stayed close to the work. Less speculation, more discussion about what people are doing right now, where it is working, and where it is breaking.</p><p>Organized by Cosmonauts in partnership with LegalOps.com and held at The Statler Dallas, the Private Practice Day brought together law firm leaders, legal operations professionals, and technology vendors for a full day of keynotes, panels, and product demos. Joy Heath Rush, CEO of ILTA, chaired the day and opened with a fireside on the new era of conversations between general counsel and outside counsel. Her framing set the tone: AI is living at the intersection of business and technology, and it is creating conversations that simply did not exist two years ago.</p><p>Across the sessions and hallway conversations that followed, a few themes kept showing up. They were consistent whether the speaker came from a law firm, an in-house team, or a vendor building the tools.</p><h2 class="header-anchor-post">Trust in AI is becoming a workflow problem</h2><p>Rowan McNamee, Co-Founder and COO of Mary Technology, opened the sponsor keynote with a point that came up repeatedly throughout the day. AI outputs are persuasive. They read well. They look complete. That creates a tendency to move forward without enough friction in the process.</p><p>McNamee cited a recent study on what researchers call &ldquo;cognitive surrender,&rdquo; based on the work of Sean Hay and building on the framework in&nbsp;<em>Thinking, Fast and Slow</em>. 
Under time pressure, people rely on AI even when they know they should verify the result. In a series of experiments, override rates improved from 20% to 42% when participants had real money on the line. Even then, more than half still followed the AI&rsquo;s answer. &ldquo;You can reduce it, but you can&rsquo;t eliminate it just by telling people to be careful,&rdquo; McNamee noted.</p><p>In a legal setting, the implication is straightforward. Review cannot be optional. It has to be built into the process in a way that does not depend on someone remembering to slow down.</p><p>That concern echoed later in the day during the panel I moderated. Laura Ewing-Pearle, Senior Manager of eDiscovery and Practice Support Technology at Baker Botts, described a clear split even among e-discovery practitioners who are enthusiastic about AI. They lean heavily toward document interrogation and querying. They are far less comfortable with AI making responsiveness calls, and they are &ldquo;definitely not comfortable with AI making privilege calls.&rdquo; The line between assistance and reliance is still being worked out in real time.</p><p>Adding urgency to the conversation, panelists referenced a recent New York case in which a court ruled that AI-generated legal advice obtained through a public tool was not protected by attorney-client privilege. The ruling was specific to a consumer-facing chatbot, but the message landed clearly: enterprise AI tools with proper security and licensing are becoming a matter of professional risk management.</p><h2 class="header-anchor-post">Adoption follows incentives, not access</h2><p>The panel on Culture, Change, and Collaboration brought together Emma Dowden of Burges Salmon, Thom Wisinski of Haynes Boone, Kelley Lugo of Bryan Cave Leighton Paisner, Rowan McNamee, and Tammy Covert of Nachawati. 
The discussion stayed focused on what actually drives adoption inside organizations.</p><p><span id="more-19228"></span></p><p>Kelley Lugo described how the initial reaction at many firms was to block AI tools. That position has shifted. Lawyers are experimenting, often on their own. But the challenge is whether there is a reason to use AI consistently. She offered a concrete example: in BCLP&rsquo;s UK real estate group, roughly 80% of matters are handled on a fixed-fee basis. Write-offs on those fixed matters directly hurt partner compensation. That pain point, she explained, is exactly where AI efficiency aligns with real motivation. When the people doing the work see a direct connection between AI-assisted speed and their own metrics, engagement follows.</p><p>Emma Dowden framed it in terms of mindset and behavior. Transformation efforts stall when they focus on the technology without addressing how people are measured and rewarded. She also noted that the pace of change is creating a real business risk. In the UK, the Big Four are telling firms what percentage of their revenues are at risk from disruption, and the numbers are serious.</p><p>Lugo shared a story that captured the adoption dynamic well. A senior leader at her firm went on a two-week safari, came back, and told her he had not thought about work at all the entire trip. But he had thought about AI the entire time. When people get enough distance to think clearly about where the profession is headed, the conclusions tend to be similar.</p><p>No one on this panel suggested that training alone will move the needle. 
The focus stayed on incentives, visibility, and examples that resonate with how lawyers already operate.</p><h2 class="header-anchor-post">Building sustainable innovation foundations</h2><p>The fireside on Beyond Technology: People, Process, and Policy brought together Joanna Penn, Chief Transformation Officer at Husch Blackwell, Rochelle Rubin, Director of Client and Business Operations at Barnes &amp; Thornburg, and Louise Thomas, Director of Transformation at Burges Salmon.</p><p>Husch Blackwell&rsquo;s approach stood out for its scale. Their virtual office, &ldquo;The Link,&rdquo; now has over 850 people, including more than 250 attorneys, working in a distributed model that lets the firm recruit specialized talent in markets where they have no physical office. They are also hiring &ldquo;architects&rdquo; whose job is to translate between technical and legal teams. That is a role you are starting to see more firms create.</p><p>Barnes &amp; Thornburg shared a result that had nothing to do with technology and everything to do with process. Their client interview program, built on the simple act of asking clients what the firm could do better, produced a 90% or greater revenue increase within 12 to 18 months for the clients who received that in-person attention. The takeaway: sometimes the highest-value innovation is a conversation.</p><h2 class="header-anchor-post">Workflow design is where the gains are showing up</h2><p>Kyle Poe, VP of Legal Innovation and Strategy at Legora, presented a framework for measuring AI ROI that moved the conversation past speed alone. His point was that firms seeing real results are looking at outcomes and pricing, not just how many minutes a task takes. The ROI, he argued, varies dramatically by use case. Some tasks show minimal time savings. Others produce modest reductions, like going from ten hours to eight. And some are transformative, compressing weeks of due diligence into hours. 
He also projected that firms will likely raise rates around 25% to offset AI costs while repositioning how they describe the value they deliver.</p><p>That idea came into sharper focus during the Automation in Action panel, which I moderated. Paul Pryzant, a partner at Seyfarth Shaw with 45 years in M&amp;A, described how his group now runs deep research on target companies on day one of receiving a letter of intent. Information that used to take two or three weeks to surface during due diligence is now available immediately. Pryzant also emphasized starting from the client relationship: &ldquo;My favorite question is, how are you using AI? How are members of your team using it?&rdquo; That approach opens a two-way conversation rather than a one-sided pitch.</p><p>Rutvik Rau, CEO and Co-Founder of August, reinforced the importance of starting with the deliverable and working backward. That approach forces clarity about what steps are required, where the bottlenecks sit, and which parts require judgment. The most successful users, he said, are the ones with a strong sense of how they deliver a service to their client. Curiosity and creativity matter more than technical skill.</p><p>Chad Barton, a corporate transactions partner at Holland &amp; Knight, described how his lending team built AI into their existing process for handling matters with rigid institutional parameters. He also raised a tension that no one else on the panel had addressed directly. When AI makes associates dramatically more efficient, it creates a compensation problem. &ldquo;If I&rsquo;m really efficient, I can&rsquo;t hit my target,&rdquo; is something associates are already thinking about. Barton was clear that this falls on firm leadership to address, especially during compensation reviews. Partners need to defend associates who produce excellent work product even if the hours look different.</p><p>Kyle Poe added a practical observation about change management. 
&ldquo;Workflow is just the orchestration of multiple things you could do within the product,&rdquo; he said. Teams that learn the individual tools first and then map the process manually tend to have a clearer path to automation. He also described building workflows in plain English instructions, the same way you would brief a literal-minded associate, and then iterating based on mistakes.</p><h2 class="header-anchor-post">The Legora demo and a question about legal research</h2><p>During the break, Harry St. John, VP of Revenue at Legora, ran a product demo that filled the room. Legora&rsquo;s platform centers on two main tools: an assistant for conversational document interrogation and a tabular review for bulk extraction across large document sets. St. John showed how the two connect. Extraction results from tabular review can be queried through the assistant, and outputs can be pushed into Microsoft Word using firm-specific templates. A newer feature called WordEdits generates multiple unique Word documents from a single instruction set, useful for tasks like drafting a set of resignation letters for every officer in a transaction.</p><p>I asked St. John about the legal research capability they had flagged, given that at Jackson Walker we tell our people not to use AI for legal research unless they are inputting their own materials. He said Legora has ingested district, appellate, and Supreme Court caselaw from a number of providers, with their legal research database launching soon. He was transparent that bankruptcy coverage was not included yet.</p><h2 class="header-anchor-post">Pricing is moving into the center of the conversation</h2><p>The panel titled Billing on Trial: Rethinking the Economics of Legal Work brought together Jon Lindrus of Foley &amp; Lardner, Jonathan Safran of Polsinelli, Marlene Gebauer of K&amp;L Gates, and Katon Luaces of PointOne.</p><p>Time-based billing is still the dominant model, but it is under more scrutiny than ever. 
AI changes how long work takes. That raises questions about how value is measured and communicated. One of the clearer moments came earlier in the day during my panel when Kyle Poe described a dynamic playing out between firms and in-house counsel. Some firms are trying to benchmark what a task would have taken with humans last year, apply a discount, and offer that as a fixed fee. But if AI produces savings far beyond 20%, clients will eventually ask why they should lock in rates that still capture most of the efficiency gains for the firm. The honest answer is that both sides are still figuring this out.</p><p>Jonathan Safran, in a pre-conference interview published by the Cosmonauts team, made a point that resonated throughout the day. Clients say they want alternative fee arrangements, but many still ask for shadow billing to verify what they would have paid under an hourly model. That tension makes standardization difficult. And on the firm side, three partners at the same firm might manage the same project three very different ways, which makes fee estimation genuinely hard.</p><h2 class="header-anchor-post">Disrupting the citator: open data meets modern AI</h2><p>John Rizner, Head of AI Legal Drafting at Filevine, gave a presentation that most of the audience probably had not expected. His argument: the legal research market is approaching an inflection point. An unprecedented volume of American caselaw is now freely accessible and processable by digital tools. At the same time, language models have reached a level of legal sophistication that would have been unimaginable five years ago. 
Taken together, Rizner argued, these developments create the possibility of meaningful competition in legal research and call into question long-held assumptions about who can build a citator and what it should cost.</p><h2 class="header-anchor-post">A practical example from litigation</h2><div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div></div>
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">Craig Nesbitt, Solutions Architect at DISCO, presented a session comparing modern litigation workflows to high-performance cycling. The analogy worked because it stayed grounded in execution. Reducing &ldquo;drag&rdquo; in litigation comes from removing friction in how data is handled, reviewed, and analyzed. AI plays a role, but it sits alongside process design and support services. The point was about coordinating multiple elements so the overall system performs better.</div>
</div><h2 class="header-anchor-post">The emotional journey ahead</h2><div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div></div>
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">Richard B. Gorelick, Founder and CEO of ChronoTracer, closed the formal sessions with a ten-minute pitch titled &ldquo;The Lawyer Freakout of 2027.&rdquo; His argument was that software developers have already gone through the emotional journey that lawyers are about to face: denial, anger, bargaining, depression, and acceptance. Lawyers, he predicted, will undergo a similar identity crisis as AI transforms the nature of their work. It was a useful note to end on. The tools are advancing. The process questions are getting answered. But the human side of this transition, how people feel about the work they do and who they are when they do it, is still largely unaddressed.</div>
</div><h2 class="header-anchor-post">Where this leaves us</h2><p>By the end of the day, the conversation felt less like an introduction to new technology and more like a working session on how legal services are being reshaped. Thomson Reuters data shared during one of the sessions put current AI adoption rates at 35 to 40 percent across the industry. That is meaningful progress, but it also means more than half the profession is still watching from the sidelines.</p><p>A few observations stand out from the day:</p><ul>
<li><strong>Trust in AI needs structure, not reminders.</strong>&nbsp;Even with financial incentives, people follow AI outputs more than they should. Verification has to be designed into the process.</li>
<li><strong>Adoption depends on incentives and visible use cases.</strong>&nbsp;When AI efficiency connects directly to compensation or profitability, people engage. When it does not, they wait.</li>
<li><strong>Workflow clarity is a prerequisite for meaningful automation.</strong>&nbsp;The firms getting results are mapping their processes before automating them.</li>
<li><strong>Pricing discussions are tied directly to how work is performed.</strong>&nbsp;As tasks get faster, the economics of client relationships change alongside them.</li>
<li><strong>Clients are already engaged and influencing expectations.</strong>&nbsp;They are using AI in their own organizations and forming opinions about how outside counsel should be using it too.</li>
<li><strong>The human side of this transition deserves more attention.</strong>&nbsp;Junior associates are worried about billable hours, senior partners are rethinking their identity, and support staff face higher-stress work after the easy tasks are automated. These are real concerns that training programs and leadership need to address.</li>
</ul><p>The examples came from people who are testing, adjusting, and in some cases rethinking long-standing approaches to legal work. That is what made Day One useful. The focus stayed on decisions that need to be made now.</p><p>And if there was a consistent thread across all of it, it is this: the tools are moving quickly. The harder work is aligning people, processes, and expectations around them.</p><div>
<hr>
</div><p><em>Texas Trailblazers was held March 25-26, 2026 at The Statler Dallas, organized by Cosmonauts in partnership with LegalOps.com. Day One focused on private practice; Day Two on in-house operations. Joy Heath Rush (ILTA) chaired Day One.</em></p>
]]></content:encoded>
					
		
		
			<dc:creator>xlambert@gmail.com (Greg Lambert)</dc:creator></item>
	</channel>
</rss>