<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>buckleyPLANET</title>
	<atom:link href="https://buckleyplanet.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://buckleyplanet.com/</link>
	<description>strategy, collaboration, productivity, and AI</description>
	<lastBuildDate>Sat, 18 Apr 2026 22:35:16 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://buckleyplanet.com/wp-content/uploads/2017/07/cropped-CollabTalk_icon-1-65x65.jpg</url>
	<title>buckleyPLANET</title>
	<link>https://buckleyplanet.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Strategic Confidence Without Operational Honesty Is a Liability</title>
		<link>https://buckleyplanet.com/2026/04/strategic-confidence-without-operational-honesty-is-a-liability/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Sun, 19 Apr 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Change Management]]></category>
		<category><![CDATA[Future of Work]]></category>
		<category><![CDATA[Management 2.0]]></category>
		<category><![CDATA[AI Planning]]></category>
		<category><![CDATA[AI strategy]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19045</guid>

					<description><![CDATA[<p>In Part 1 of this article, I walked through what I consider the most underreported finding in Deloitte&#8217;s 2026 State of AI in the&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/strategic-confidence-without-operational-honesty-is-a-liability/">Strategic Confidence Without Operational Honesty Is a Liability</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">In <a href="https://buckleyplanet.com/2026/04/ai-planning-vs-execution-readiness/">Part 1</a> of this article, I walked through what I consider the most underreported finding in Deloitte&#8217;s <a href="https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html">2026 State of AI in the Enterprise</a> report: that while strategic and governance preparedness are rising, the operational dimensions — technology infrastructure, data management, and talent — are all declining year over year. Organizations are getting more confident about their AI plans while becoming less equipped to execute them.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That post is the data. This second post is the leadership question that the data raises.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/Strategic-Confidence-Without-Operational-Honesty-Is-a-Liability.webp"><img decoding="async" loading="lazy" class="alignright  wp-image-19046" src="https://buckleyplanet.com/wp-content/uploads/2026/04/Strategic-Confidence-Without-Operational-Honesty-Is-a-Liability-300x200.webp" alt="Strategic Confidence Without Operational Honesty Is a Liability" width="263" height="175" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/Strategic-Confidence-Without-Operational-Honesty-Is-a-Liability-300x200.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/Strategic-Confidence-Without-Operational-Honesty-Is-a-Liability-1024x683.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/Strategic-Confidence-Without-Operational-Honesty-Is-a-Liability-768x512.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/Strategic-Confidence-Without-Operational-Honesty-Is-a-Liability-520x347.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/Strategic-Confidence-Without-Operational-Honesty-Is-a-Liability.webp 1536w" sizes="auto, (max-width: 263px) 100vw, 263px" /></a>If the gaps are visible in the numbers, and the numbers are published in a widely circulated report that most senior leaders have at least glanced at, why are the operational gaps widening rather than closing? The answer is not that leaders don&#8217;t care. Most do. The answer is something more structural and more uncomfortable: strategic confidence, left unchecked, has a way of substituting for operational honesty. And in AI, that substitution is becoming one of the most consequential leadership blind spots of this technology cycle.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What &#8220;Prepared for a Different Future&#8221; Actually Means</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The Deloitte report includes an observation from the head of AI strategy at a major European bank that I keep returning to. He described how many organizations prepared for an AI future by building infrastructure and governance for traditional AI models, but then found those investments largely obsolete when large language models arrived. Nearly 80 to 90 percent of new use cases became generative AI, and the foundations organizations had built were designed for a different kind of system entirely.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">His point was not that preparation is futile. It was that preparation designed around a fixed target is inherently fragile in an environment where the target keeps moving.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The most common approach to AI readiness is to identify a current capability gap, invest in closing it, and move forward. That approach made reasonable sense when enterprise technology evolved slowly enough that closing last year&#8217;s gap still left you reasonably prepared for next year. It does not make sense in an environment where agentic AI is projected to go from 23% adoption to 74% adoption within two years — the same two-year window in which most organizations expect to close their current preparedness gaps.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The organizations that will be genuinely prepared are not necessarily the ones working hardest to close today&#8217;s specific gaps. They are the ones building organizational capacity for <em>continuous adaptation</em>. <strong>The specific capability matters less than the infrastructure for developing capabilities</strong>: talent development processes that work fast enough to keep pace with tool evolution, data governance practices built to accommodate new models and new use cases rather than locked to specific deployments, and technical infrastructure designed to be modular and updatable rather than purpose-built for the current moment.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">MIT&#8217;s NANDA initiative, led by researcher Aditya Challapally (&#8220;<a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/">The GenAI Divide: State of AI in Business 2025</a>&#8220;) found that 95% of generative AI pilots fail to reach production, despite record investment. McKinsey similarly found that while 90% of companies now use AI, only one-third have scaled it across functions. The common thread in those failure rates is not technology. It is organizational readiness of the kind that strategy documents and vendor announcements cannot produce.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Confidence Trap</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Here is the risk I want to name directly: executive confidence in AI strategy can become a liability if it is not grounded in an honest assessment of operational readiness.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">When leaders feel strategically prepared, they tend to accelerate. They greenlight more projects, set more ambitious timelines, and communicate more expansive commitments to boards and shareholders. All of that activity increases pressure on the operational infrastructure, which is, by the numbers, becoming less ready relative to the ambitions being set.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The organizations that manage this well are the ones with leaders who distinguish clearly between strategic conviction (which they should have) and operational confidence (which should be proportional to what the data actually shows). They use strategic clarity to set direction. They use honest operational assessment to set realistic timelines.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The organizations that manage this poorly are the ones where confidence in the strategy bleeds into an assumption of readiness, where the absence of visible problems is mistaken for the presence of genuine capabilities, and where the measurement systems are designed to report progress rather than surface gaps.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I have watched this pattern play out across every major technology wave of the past three decades. The organizations that emerged from those waves with the most durable advantage were rarely the fastest movers. They were the ones that moved with accurate self-knowledge — honest about what they could actually deliver, deliberate about building the foundations that made scale sustainable.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The AI version of this trap has a particular texture that I want to describe precisely, because it is slightly different from previous cycles. In earlier waves (cloud adoption, mobile, social) the confidence trap was mostly a resource allocation problem. Organizations over-invested in the strategic layer and under-invested in the operational layer, but the consequences were generally bounded. Pilots failed. Rollouts stalled. Budgets were wasted.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">In AI, the confidence trap carries a higher risk because the technology is not waiting for organizational readiness to catch up. Agents are deploying. Automations are running. Decisions are being made at scale by systems that were stood up during the optimistic phase, before the operational foundations were solid. The gap between strategic confidence and operational readiness is no longer just a planning problem. It is an active risk in production environments — right now, in organizations that believe they are further along than they are.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Leadership Behavior That Closes the Gap</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">This is not a technology problem that can be solved by a better platform or a smarter vendor. It is a leadership behavior problem. And it has a specific shape.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The leaders who close this gap consistently share a few observable characteristics. They distinguish between the enthusiasm they communicate externally and the assessment they conduct internally. They create reporting structures that surface bad news fast, rather than filtering it before it reaches the top. They measure outcomes — behavior change, production deployment rates, actual business impact — rather than inputs like training completion, license deployment, and pilot count.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">They also resist what I&#8217;d call the announcement substitution effect: the tendency to treat a well-communicated AI strategy as evidence of AI readiness. Announcing a strategy is not the same as executing one. Publishing a responsible AI framework is not the same as governing AI in practice. Deploying Copilot to 10,000 employees is not the same as 10,000 employees working differently because of it.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The leaders I&#8217;ve observed get this right are the ones who stay curious about the distance between their stated position and their actual position. They ask the uncomfortable questions: what percentage of our pilots have actually reached production? How many employees have meaningfully changed how they work? Can we demonstrate that our data governance practices are actually governing our AI deployments? If they can&#8217;t answer those questions confidently, they treat the inability to answer as the signal — not as a communications gap to be managed, but as an operational gap to be closed.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">A Practical Diagnostic</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The Deloitte preparedness framework — five dimensions, each assessed on a spectrum from not prepared to highly prepared — is more useful as a diagnostic than as a scorecard.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The scorecard question is &#8220;how prepared are we?&#8221; It is designed to produce a number that can be compared to a benchmark and reported upward.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The diagnostic question is different: &#8220;where is the gap between what we are saying about our AI strategy and what our organization can actually execute?&#8221; That question is designed to surface the distance between intention and capability — and it is considerably more useful, and considerably more uncomfortable to ask.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If your organization&#8217;s strategic confidence is significantly higher than your operational readiness across infrastructure, data, and talent, that gap is not a communications problem to be managed. It is an execution risk to be addressed. The most productive thing leaders can do with that information is stop treating strategic preparedness as a proxy for overall readiness and start building the operational foundations that make the strategy executable.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That means investing in data quality and governance as AI infrastructure rather than IT hygiene. It means building talent development programs that produce behavior change rather than completion certificates. It means treating technical infrastructure modernization as a strategic priority rather than a cost center. It means measuring actual operational readiness with the same rigor applied to measuring strategic ambition.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Where This Leads</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The gap between planning confidence and execution readiness is not new. It has shown up in every major technology wave I have observed over 35 years. But in AI, it is currently wider than it has been in any previous cycle, and it is widening at precisely the moment when the technology is moving fastest.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>The organizations that close this gap deliberately and honestly will have a meaningful advantage</strong>. Not because they moved faster, but because they moved with accurate knowledge of where they actually were. The ones that allow strategic confidence to substitute for operational readiness will find out the hard way that the two are not the same thing.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">This dynamic — the leadership behavior that creates and sustains the gap between strategic confidence and operational reality — is one of the core themes I&#8217;m exploring in my forthcoming book, <em><strong>The AI Leadership Gap</strong></em>. It turns out that the most consequential decisions in AI adoption are not technical decisions. They are leadership decisions. And the organizations that get those right will have an advantage that no tool deployment can replicate.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">More on that soon. In the meantime: act serious, treat it seriously, get serious results — starting with an honest look at the distance between where you say you are and where you actually are.</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/strategic-confidence-without-operational-honesty-is-a-liability/">Strategic Confidence Without Operational Honesty Is a Liability</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Blue Plate Special: Death From Above 1979</title>
		<link>https://buckleyplanet.com/2026/04/blue-plate-special-death-from-above-1979/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Sat, 18 Apr 2026 06:34:43 +0000</pubDate>
				<category><![CDATA[Blue Plate Special]]></category>
		<category><![CDATA[Music]]></category>
		<category><![CDATA[Personal]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19048</guid>

					<description><![CDATA[<p>Emerging from Toronto’s underground in 2001, Death From Above 1979 carved out a visual and sonic identity defined by high-contrast minimalism and calculated chaos.&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/blue-plate-special-death-from-above-1979/">Blue Plate Special: Death From Above 1979</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" loading="lazy" class="alignright" src="https://i.discogs.com/Czw8ZvB_8yy7yXmanNkCemBu7uFTshdjKAUDNs9bwyU/rs:fit/g:sm/q:90/h:404/w:600/czM6Ly9kaXNjb2dz/LWRhdGFiYXNlLWlt/YWdlcy9BLTI3MzYx/MS0xNTUyOTMxNjg5/LTIzNTIuanBlZw.jpeg" alt="Death From Above 1979 Discography: Vinyl, CDs, &amp; More | Discogs" width="203" height="137" />Emerging from Toronto’s underground in 2001, <a href="https://deathfromabove1979.com/">Death From Above 1979</a> carved out a visual and sonic identity defined by high-contrast minimalism and calculated chaos. The duo—bassist Jesse F. Keeler and drummer/vocalist Sebastien Grainger—eschewed the traditional rock quartet setup in favor of a &#8220;stripped-down but blown-out&#8221; aesthetic. Their visual tapestry is anchored by their iconic &#8220;elephant-trunk&#8221; logo and a stage presence that feels both industrial and carnal. Musically, they synthesized the raw grit of hardcore punk with the rhythmic precision of dance-punk, utilizing heavily overdriven bass and frantic percussion to fill the void usually occupied by guitars. Influenced by the DIY ethos of the Toronto scene and the unrelenting drive of bands like AC/DC, their 2004 debut, You&#8217;re a Woman, I&#8217;m a Machine, became a blueprint for the &#8220;danceable noise&#8221; movement of the early 2000s, turning grimy basement energy into a polished, monolithic force.</p>
<p>The band’s history is as volatile as their sound, marked by a high-profile name dispute with James Murphy’s DFA Records that forced the &#8220;1979&#8221; suffix and a sudden 2006 breakup at the height of their initial fame. After a five-year hiatus where Keeler explored electronic textures with MSTRKRFT and Grainger pursued solo work, the duo famously reunited at Coachella in 2011. This second act proved to be more than a nostalgia trip; they expanded their visual and sonic palette with 2014’s The Physical World and the raw, aggressive Outrage! Is Now in 2017. Their evolution continued into 2021’s Is 4 Lovers, where they reclaimed their original numeric branding and leaned into more adventurous, self-produced sonics. Now solidified as pillars of Canadian rock, the band continues to bridge the gap between sweaty punk clubs and major festival stages, proving that their two-man wall of sound is an indestructible fixture of the modern alternative landscape.</p>
<p>Some of my favorites from their catalog:</p>
<h3>Crystal Ball &#8211; from the album <em>The Physical World</em> (2014)</h3>
<div class="video-container"><iframe title="Crystal Ball" width="500" height="375" src="https://www.youtube.com/embed/uvbu4QsCcX8?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Freeze Me &#8211; from the album <em>Outrage! Is Now</em> (2017)</h3>
<div class="video-container"><iframe title="Death From Above 1979 - Freeze Me (Official Music Video)" width="500" height="281" src="https://www.youtube.com/embed/sdQqgVzex_w?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>One + One &#8211; from the album <em>Is 4 Lovers</em> (2021)</h3>
<div class="video-container"><iframe title="Death From Above 1979 - One + One (Official Music Video)" width="500" height="281" src="https://www.youtube.com/embed/7scGleNqEnk?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Keep It Real Dumb &#8211; from the single <em>Keep It Real Dumb</em> (2018)</h3>
<div class="video-container"><iframe loading="lazy" title="Keep It Real Dumb" width="500" height="375" src="https://www.youtube.com/embed/_s88jPS9KmU?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Little Girl &#8211; from the album <em>You&#8217;re a Woman, I&#8217;m a Machine</em> (2004)</h3>
<div class="video-container"><iframe loading="lazy" title="Little Girl - Death From Above 1979 10-30-2018" width="500" height="281" src="https://www.youtube.com/embed/ydPBtfOcoTI?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>White Is Red &#8211; from the album <em>The Physical World</em> (2014)</h3>
<div class="video-container"><iframe loading="lazy" title="Death From Above 1979 - White Is Red (Official Video)" width="500" height="281" src="https://www.youtube.com/embed/n0JEG_wf0pQ?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Statues &#8211; from the album <em>Outrage! Is Now</em> (2017)</h3>
<div class="video-container"><iframe loading="lazy" title="Statues" width="500" height="375" src="https://www.youtube.com/embed/qrvcBaxJf8I?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Glass Homes &#8211; from the album <em>Is 4 Lovers</em> (2021)</h3>
<div class="video-container"><iframe loading="lazy" title="Glass Homes" width="500" height="375" src="https://www.youtube.com/embed/eqfzXlTE3L8?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Trainwreck 1979 &#8211; from the album <em>The Physical World</em> (2014)</h3>
<div class="video-container"><iframe loading="lazy" title="Death From Above 1979 - Trainwreck 1979 (Official Music Video)" width="500" height="281" src="https://www.youtube.com/embed/vrZxt476ef4?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>The post <a href="https://buckleyplanet.com/2026/04/blue-plate-special-death-from-above-1979/">Blue Plate Special: Death From Above 1979</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI Planning vs Execution Readiness</title>
		<link>https://buckleyplanet.com/2026/04/ai-planning-vs-execution-readiness/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 10:00:20 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[Change Management]]></category>
		<category><![CDATA[Microsoft Copilot]]></category>
		<category><![CDATA[Technology That Interests Me]]></category>
		<category><![CDATA[AI strategy]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19042</guid>

					<description><![CDATA[<p>There is a finding buried in Deloitte&#8217;s 2026 State of AI in the Enterprise report that deserves more attention than it has received. Most&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/ai-planning-vs-execution-readiness/">AI Planning vs Execution Readiness</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">There is a finding buried in Deloitte&#8217;s <a href="https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html">2026 State of AI in the Enterprise report</a> that deserves more attention than it has received. Most coverage of the report focuses on the acceleration story: more workers with AI access, more pilots moving to production, more investment, more confidence. All of that is, of course, real.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">What the coverage tends to skip is a directional finding that cuts against the optimism. When Deloitte asked leaders to rate their organization&#8217;s preparedness across five dimensions — technology infrastructure, strategy, data management, risk and governance, and talent — something counterintuitive showed up in the year-over-year comparison.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/AI-Planning-vs-Execution-Readiness.webp"><img decoding="async" loading="lazy" class="alignright  wp-image-19043" src="https://buckleyplanet.com/wp-content/uploads/2026/04/AI-Planning-vs-Execution-Readiness-300x200.webp" alt="AI Planning vs. Execution Readiness" width="287" height="191" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/AI-Planning-vs-Execution-Readiness-300x200.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/AI-Planning-vs-Execution-Readiness-1024x683.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/AI-Planning-vs-Execution-Readiness-768x512.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/AI-Planning-vs-Execution-Readiness-520x347.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/AI-Planning-vs-Execution-Readiness.webp 1536w" sizes="auto, (max-width: 287px) 100vw, 287px" /></a>Strategic preparedness went up three percentage points. Risk and governance preparedness went up six. Both are areas driven primarily by executive decision-making and policy development, areas where intent and communication can move the needle quickly.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">But technology infrastructure preparedness went down four points. Data management preparedness went down three points. Talent preparedness went down two points. These are the operational dimensions — the ones that require sustained investment, structural change, and time to build.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Read that again slowly. <strong>Organizations are simultaneously getting more confident about their AI strategy and less operationally ready to execute it.</strong> The gap between what leadership believes is true and what the organization can actually deliver is widening, not narrowing.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That is not a technology problem. That is a leadership problem. And it is one of the most important signals in the report.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Why Confidence and Readiness Are Moving in Opposite Directions</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The pattern makes sense once you understand what drives each dimension. Strategic confidence is relatively easy to build. You hire a Chief AI Officer. You publish an AI strategy document. You announce a set of AI initiatives in an all-hands meeting. You read the same reports your competitors are reading and reach the same conclusions about where the market is going. All of that activity registers as strategic preparedness, and it should — direction-setting matters.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">What it doesn&#8217;t do is build data pipelines. It doesn&#8217;t modernize legacy infrastructure. It doesn&#8217;t retrain the workforce. It doesn&#8217;t create the governance frameworks that allow AI to operate safely at scale. Those things require a different kind of investment: slower, less visible, more expensive, and much harder to report in a board presentation.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The head of AI strategy at a major European bank described this tension directly in the Deloitte report. He noted that many organizations prepared for an AI future by building infrastructure and governance for traditional AI models. Then <a href="https://buckleyplanet.com/2026/04/what-is-an-llm-and-how-do-you-pick-the-right-one-for-your-business/">large language models</a> (LLMs) arrived and upended those investments entirely. Suddenly, nearly 80 to 90 percent of new use cases were generative AI, and the infrastructure organizations had built was designed for a different future.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That observation points to something important beyond the technology shift. Organizations that were genuinely operationally prepared for the previous generation of AI still found themselves underprepared for the current one. Preparation is not a destination you arrive at. It is a continuous process of closing gaps that keep moving.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Talent Number Is the Most Alarming</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Of the five preparedness dimensions, talent is the lowest and the only one that declined year over year. Only 20% of organizations report that their talent is highly prepared for AI. That number was higher last year.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Think about what that means in context. Organizations have spent the past two years running AI training programs, deploying AI tools, hiring AI specialists, and making public commitments to workforce development. All of that activity, and the percentage of organizations that feel their talent is highly prepared for AI went down.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">There are a few ways to read this. The most charitable interpretation is that the bar keeps moving. As AI capabilities expand rapidly into agentic and physical AI, what counts as &#8220;highly prepared&#8221; gets harder to achieve. Organizations that felt prepared for generative AI are now measuring themselves against a broader and more demanding standard.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The less charitable interpretation is that <strong>most AI training programs are not producing meaningful capability gains.</strong> Organizations are running awareness events and calling it workforce development. They are measuring training completion instead of <em>behavior change</em>. They are building AI fluency without building AI capability. As I started reading through and writing about some of my insights from the Deloitte report, I <a href="https://buckleyplanet.com/2026/04/how-to-make-ai-learning-actually-stick/">wrote about</a> why training without reinforcement, role-specific application, and management accountability fails to produce lasting behavior change. The talent preparedness data is what that failure looks like at scale.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The most honest interpretation is probably some combination of both: the standard is rising faster than the capability is building, in part because the capability-building work has been approached less seriously than the deployment work.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The <a href="https://www.hpcwire.com/bigdatawire/2026/03/03/deloittes-state-of-ai-2026-why-enterprise-execution-is-falling-behind-adoption/">BigDATAwire analysis</a> of the Deloitte report framed this bluntly: <strong>organizations are operationally not prepared to achieve their AI goals</strong>, and this widening execution gap is the core theme of the 2026 findings. That is not a fringe interpretation. It is what the data says.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Data Foundation Problem</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The data management preparedness decline deserves its own conversation because it sits underneath almost every other AI challenge organizations face.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><strong>You cannot build reliable AI systems on unreliable data. </strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><strong>You cannot govern AI outputs that you cannot trace to their inputs. </strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><strong>You cannot scale AI workflows built on data that was adequate for dashboards but inadequate for automated decisions. </strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">These are not new observations — they have been true throughout the history of enterprise analytics — but the stakes are higher now because the consequences of bad data are no longer limited to a flawed report. They extend to decisions made autonomously at scale.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">As one analyst put it, &#8220;We have data&#8221; is not the same as &#8220;we have data we trust, with known provenance, with clear usage rights.&#8221; In 2026, the most effective enterprise AI strategies start with the foundation question: what data can we actually trust, and what needs to be fixed before we automate decisions based on it?</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Gartner&#8217;s estimate is pointed: 60% of agentic AI projects will fail in 2026 due to a lack of AI-ready data. That is not a projection about model quality or tool capability. It is a projection about the data foundations that organizations have spent years not fully investing in — and that AI, unlike more forgiving technologies, cannot compensate for.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The Deloitte data shows that data management preparedness declined three points year over year, even as AI deployment accelerated. Organizations are building on foundations they have not sufficiently strengthened. The pilots may be working. The production deployments are where the data problems become visible and expensive.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Widening Gap</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Taken together, the talent and data findings point to the same underlying dynamic: organizations are accelerating on the surface — more tools, more pilots, more announcements — while the foundational work that makes acceleration sustainable is falling further behind.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That is not a comfortable place to be. And it raises a harder question than the data alone can answer:<strong> if leadership knows the operational gaps exist, why are they widening rather than closing?</strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That question moves us from diagnosis into leadership behavior, and that is where the real conversation begins. In Part 2 of this post, I&#8217;ll examine why strategic confidence without operational honesty is not just a planning problem. It is a leadership liability — and one of the most consequential blind spots I&#8217;ve observed across 35 years of enterprise technology adoption.</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/ai-planning-vs-execution-readiness/">AI Planning vs Execution Readiness</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>CollabTalk Podcast &#124; Episode 185 with Stuart Webb</title>
		<link>https://buckleyplanet.com/2026/04/collabtalk-podcast-episode-185-with-stuart-webb/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 11:00:02 +0000</pubDate>
				<category><![CDATA[CollabTalk]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Management 2.0]]></category>
		<category><![CDATA[Podcast]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19057</guid>

					<description><![CDATA[<p>In Episode 185 of the CollabTalk Podcast we discuss how a science-based approach to experimentation can help leaders scale faster by replacing “hope”&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/collabtalk-podcast-episode-185-with-stuart-webb/">CollabTalk Podcast | Episode 185 with Stuart Webb</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="video-container"><iframe loading="lazy" title="The PATH to Scale: How Science-Inspired Thinking Drives Business Growth (#CollabTalk Podcast Ep.185)" width="500" height="281" src="https://www.youtube.com/embed/llkuT9LXGAo?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<p>In Episode 185 of the <strong>CollabTalk Podcast</strong> we discuss how a science-based approach to experimentation can help leaders scale faster by replacing “hope” with controlled tests, measurable outcomes, and a living “plan of record” that updates as reality changes. My guest introduces the PATH framework—Purpose, Actions, Team, Harmony—as a practical way to systemize growth, remove founder bottlenecks, and make execution repeatable through clear communication and documentation. The episode also explores how AI can help filter signal from noise, as long as humans guide and validate what matters. You can listen to the podcast above and follow me using your favorite app, such as Spotify, Apple Podcasts, Stitcher, Soundcloud, or the iHeartRadio app. Be sure to subscribe!</p>
<p><iframe loading="lazy" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/soundcloud%253Atracks%253A2305075667&amp;color=%23ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;show_teaser=true" width="100%" height="166" frameborder="no" scrolling="no"></iframe></p>
<div style="font-size: 10px; color: #cccccc; line-break: anywhere; word-break: normal; overflow: hidden; white-space: nowrap; text-overflow: ellipsis; font-family: Interstate,Lucida Grande,Lucida Sans Unicode,Lucida Sans,Garuda,Verdana,Tahoma,sans-serif; font-weight: 100;"><a style="color: #cccccc; text-decoration: none;" title="The CollabTalk Podcast" href="https://soundcloud.com/collabtalk" target="_blank" rel="noopener">The CollabTalk Podcast</a> · <a style="color: #cccccc; text-decoration: none;" title="Episode 185 | The P.A.T.H. to Scale: How Science-Inspired Thinking Drives Business Growth with Stuart Webb" href="https://soundcloud.com/collabtalk/episode-185-the-p-a-t-h-to" target="_blank" rel="noopener">Episode 185 | The P.A.T.H. to Scale: How Science-Inspired Thinking Drives Business Growth with Stuart Webb</a></div>
<p>&nbsp;</p>
<p>Joining me on this podcast:</p>
<ul>
<li><strong class=""><img decoding="async" loading="lazy" class="alignright tc-smart-load-skip tc-smart-loaded" src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" alt="Norm Young from UnlimitedViz and tyGraph" data-src="https://www.buckleyplanet.com/wp-content/uploads/2022/04/Norm-Young-150x150.jpg" /><img decoding="async" loading="lazy" class="alignright tc-smart-load-skip tc-smart-loaded" src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" alt="David McCarter's Speaker Profile @ Sessionize" width="137" height="137" data-src="https://sessionize.com/image/d850-400o400o2-b0-53c4-459d-9337-12f9d331a6e6.eefe84a3-f4fd-4395-a619-c94f8b1becbe.jpg" />Stuart Webb </strong>began his career in medical research at Oxford University, pursuing a doctorate in persistent human viruses. During this time, he recognized the importance of process improvement as he observed inefficiencies in reporting results to patients promptly. He successfully transitioned from academia to business ownership, becoming a founder or co-founder of multiple businesses that he scaled effectively. He also ventured into management consulting, specializing in process automation, business transformation, and turnaround strategies. Stuart now empowers business growth through speaking, consulting, and mentoring, helping businesses scale and leaders create sustainable operations. [<a href="https://www.linkedin.com/in/stuartwebb">LinkedIn</a> | <a href="https://thecompleteapproach.substack.com/podcast">Podcast</a>]</li>
</ul>
<p>Be sure to subscribe for more collaboration, productivity, and AI-related business and technology content like this each Thursday, as well as the #MVPbuzzChat interviews with Microsoft MVPs and Regional Directors every Monday. Thanks for listening!</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/collabtalk-podcast-episode-185-with-stuart-webb/">CollabTalk Podcast | Episode 185 with Stuart Webb</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How to Make AI Learning Actually Stick</title>
		<link>https://buckleyplanet.com/2026/04/how-to-make-ai-learning-actually-stick/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 06:21:32 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Training]]></category>
		<category><![CDATA[learning and development]]></category>
		<category><![CDATA[training]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19021</guid>

					<description><![CDATA[<p>Here is a number that should make every L&#38;D professional stop cold: according to the Association for Talent Development, only 12% of learners apply&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/how-to-make-ai-learning-actually-stick/">How to Make AI Learning Actually Stick</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Here is a number that should make every L&amp;D professional stop cold: <a href="https://www.diversityresources.com/why-employee-training-programs-fail/?utm_source=copilot.com">according to the Association for Talent Development</a>, only 12% of learners apply new skills without structured follow-up after training. Twelve percent. That means roughly 88 cents of every training dollar you spend produces no behavior change unless you&#8217;ve built a reinforcement structure around it.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/How-to-Make-AI-Learning-Actually-Stick.webp"><img decoding="async" loading="lazy" class="alignright  wp-image-19023" src="https://buckleyplanet.com/wp-content/uploads/2026/04/How-to-Make-AI-Learning-Actually-Stick-300x200.webp" alt="How to Make AI Learning Actually Stick" width="267" height="178" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/How-to-Make-AI-Learning-Actually-Stick-300x200.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/How-to-Make-AI-Learning-Actually-Stick-1024x683.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/How-to-Make-AI-Learning-Actually-Stick-768x512.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/How-to-Make-AI-Learning-Actually-Stick-520x347.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/How-to-Make-AI-Learning-Actually-Stick.webp 1536w" sizes="auto, (max-width: 267px) 100vw, 267px" /></a>Now layer in the <a href="https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html">Deloitte 2026 State of AI in the Enterprise</a> data: when asked how organizations are adjusting talent strategies in response to AI adoption, the number one answer — at 53% — was &#8220;educating the broader workforce to raise AI fluency.&#8221; The number two answer, redesigning and implementing upskilling strategies, came in at 48%. Redesigning career paths? 33%. Measuring worker trust and engagement around AI? 30%.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">In other words, organizations have identified insufficient worker skills as the single biggest barrier to AI integration, and their primary response is to run more training. That&#8217;s not a strategy. That&#8217;s a hope.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I&#8217;ve been facilitating AI at Work workshops for a range of organizations — enterprise clients, mid-market teams, healthcare systems, financial services firms — and the pattern I see on the ground matches what the data says. Organizations are investing in training events. They are not investing in the conditions that make training stick. And for anyone whose job it is to develop people, that distinction is the whole game.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What the Science Has Been Telling Us for 140 Years</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">In 1885, German psychologist <a href="https://en.wikipedia.org/wiki/Hermann_Ebbinghaus">Hermann Ebbinghaus</a> conducted a series of experiments on memory retention that produced one of the most replicated findings in cognitive science: the forgetting curve. His research showed that without reinforcement, people forget roughly 50% of new information within an hour of learning it. Within 24 hours, that figure climbs to around 70%. By the end of the week, most learners retain only about 25% of what they were taught.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">One hundred and forty years later, organizations are still designing training programs that violate every principle his research established.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">A one-day AI fluency workshop. A Copilot training module on the intranet. A lunch-and-learn on prompt engineering. These are not learning programs. They are awareness events, and there is nothing wrong with awareness events — provided nobody mistakes them for behavior change.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The critical insight from Ebbinghaus, and from a century of subsequent learning science, is that repetition and reinforcement are not optional enhancements to a training program. They are the program. Without reinforcement, people tend to forget up to 90% of what they&#8217;ve learned within a month. The training event is just the starting line. What happens after it determines whether you get any return on the investment.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Knowing-Doing Gap Is Real and It Is Stubborn</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">There is a concept in organizational psychology called the knowing-doing gap — the persistent and well-documented distance between understanding what to do and actually doing it. It shows up everywhere, but it is particularly acute in technology training because technology training tends to be designed around knowledge transfer rather than behavior change.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Most corporate training programs fail because they are built around content delivery rather than performance change. Learning and development teams can correct this by shifting their role from content providers to performance partners. Instead of asking &#8220;what training should we deliver?&#8221;, effective L&amp;D teams ask &#8220;what behaviors must change for the business to improve?&#8221;</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That reframe matters enormously for AI training specifically. The question is not: do your employees understand what a large language model is? The question is: have they changed how they complete specific tasks as a result of having AI tools available? Those are completely different questions, they require completely different training designs, and they produce completely different outcomes.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Most AI training programs are built to answer the first question. Almost none are built to answer the second.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Four Structural Reasons AI Training Fails</h3>
<p><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/Why-AI-Training-FGails-The-Structural-Gaps-scaled.webp"><img decoding="async" loading="lazy" class="aligncenter size-large wp-image-19022" src="https://buckleyplanet.com/wp-content/uploads/2026/04/Why-AI-Training-FGails-The-Structural-Gaps-1024x572.webp" alt="Why AI Training FGails - The Structural Gaps" width="1024" height="572" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/Why-AI-Training-FGails-The-Structural-Gaps-1024x572.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/Why-AI-Training-FGails-The-Structural-Gaps-300x167.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/Why-AI-Training-FGails-The-Structural-Gaps-768x429.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/Why-AI-Training-FGails-The-Structural-Gaps-1536x857.webp 1536w, https://buckleyplanet.com/wp-content/uploads/2026/04/Why-AI-Training-FGails-The-Structural-Gaps-2048x1143.webp 2048w, https://buckleyplanet.com/wp-content/uploads/2026/04/Why-AI-Training-FGails-The-Structural-Gaps-520x290.webp 520w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Learning science and my own workshop experience point to four structural failures that show up repeatedly in enterprise AI training programs. None of them is a mystery. All of them are avoidable.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><strong>One: Training is treated as an event, not a journey.</strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">The follow-up void is one of the most common structural leaks in corporate training. Training is often treated as the finish line. In reality, the day the training ends is when the real work begins. Without manager reinforcement, learning decay starts within 48 hours.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">A two-day AI workshop is a starting point. It is not a destination. Organizations that treat the workshop as the deliverable and move on have essentially spent money to create a brief spike of enthusiasm that will fade before the following Monday is over. The research on spaced learning is unambiguous: information must be encountered multiple times, in multiple formats, across an extended period before it moves from short-term awareness into a durable habit.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><strong>Two: The content is generic when it needs to be specific.</strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">A global LinkedIn Learning study found that 78% of employees want learning that directly relates to their daily responsibilities. Too often, employee training programs focus on theory instead of application, creating a gap between what&#8217;s taught and what&#8217;s useful.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">A finance analyst and a project manager and a customer support rep do not have the same AI use cases, the same daily workflows, or the same relationship to the tools being deployed. Training them with the same content — here&#8217;s what AI can do, here are some prompts to try — is efficient for the training team and nearly useless for the learner. Role-specific use case development is not a luxury. It is the difference between training that creates behavior change and training that creates completion certificates.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><strong>Three: Managers are excluded from the process.</strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">Managers account for up to 70% of the variance in employee engagement, according to Gallup research. When managers reinforce training topics through coaching and feedback, employees are more likely to apply what they learned.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">Yet most AI training programs are designed for individual contributors and delivered without any corresponding investment in the managers who supervise them. The employee goes to the workshop, comes back with new awareness and maybe some genuine enthusiasm, and then sits across from a manager who doesn&#8217;t know what they learned, isn&#8217;t asking about it, and isn&#8217;t creating any space for them to apply it. Within two weeks, the old habits have won by default.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">If your AI training program does not include a parallel track for managers — equipping them to reinforce the learning, ask the right questions, and model the behaviors themselves — you are building on sand.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><strong>Four: There is no consequence for not changing.</strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">If there is no reward for applying the new skill and no consequence for sticking to the old habit, the brain will always take the path of least resistance.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;">This is perhaps the most uncomfortable truth in learning science, and the one organizations are most reluctant to act on. People are not naturally inclined to change well-established workflows, especially when the new approach requires more cognitive effort in the short term. If using AI tools is optional, unrecognized, and unmeasured, most people will opt for the familiar path — not because they are resistant or lazy, but because that is how human behavior works. Training without accountability is aspiration without architecture.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What Actually Makes Training Stick</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The good news is that learning science is not ambiguous about what works. The elements of effective behavior change training are well established. They are also consistently underinvested in enterprise AI rollouts.</p>
<ul>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Spaced repetition.</strong> Rather than concentrating all learning into a single intensive event, effective programs distribute content across multiple sessions over time, with each session building on the previous one and requiring the retrieval of earlier material. Spaced repetition, with intelligent scheduling of reviews at optimal intervals, can boost long-term retention by up to 250% compared to massed learning. For AI training, this means moving from the one-day workshop model to a structured learning journey — four sessions over six weeks, for example, rather than six hours in a single day.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Application in context.</strong> Experiential learning shifts people from being observers to participants. Instead of talking about behavior, it allows people to see, feel, and practice it in action. Participants don&#8217;t just remember what was said — they remember what happened. Many can recall specific moments, insights, and behavior shifts months or even years later. For AI training specifically, this means building practice exercises around participants&#8217; actual work — their real documents, their real processes, their real problems — rather than generic scenarios constructed for training purposes.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Peer learning and champion networks.</strong> Behavior change is most effective when it is a collective effort. When an entire team goes through transformation together, they create a new set of social norms and begin to hold each other accountable, ensuring that the new behavior becomes the standard rather than the exception. I&#8217;ve seen this work consistently in my workshop facilitation. The most durable AI adoption I&#8217;ve observed has happened laterally — one person showing another something that saved them time, that person demonstrating it to their team, the behavior spreading through peer credibility rather than top-down mandate. Building formal champion networks is how you institutionalize that organic process.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Manager involvement before, during, and after.</strong> Effective training programs brief managers before the learning event on what their teams will be covering and why. They give managers specific follow-up questions to ask. They create structured check-in points where managers and employees discuss the application and surface barriers. This is not complicated. It is largely absent from most AI training programs.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Measurement that tracks behavior, not completion.</strong> The question after an AI training program should not be &#8220;how many people completed the module?&#8221; It should be &#8220;what has changed about how these people do their jobs?&#8221; That requires pre-training baselines, post-training behavioral observation, and follow-up measurement at 30, 60, and 90 days. It requires L&amp;D professionals to be in the business of outcomes, not events.</li>
</ul>
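<p>As referenced in the first bullet above, here is a minimal sketch of what an expanding-interval schedule can look like. The gap size, multiplier, and session count are illustrative assumptions in the spirit of SM-2-style spacing algorithms, not a validated curriculum design.</p>
<pre><code># A minimal expanding-interval review scheduler (SM-2-style spacing).
# first_gap_days and multiplier are illustrative assumptions.
from datetime import date, timedelta

def schedule_reviews(start, sessions=5, first_gap_days=2.0, multiplier=2.5):
    """Return review dates with gaps that grow after each session,
    so reinforcement arrives before the learner has fully forgotten."""
    reviews, gap, current = [], first_gap_days, start
    for _ in range(sessions):
        current = current + timedelta(days=round(gap))
        reviews.append(current)
        gap = gap * multiplier
    return reviews

# A workshop on day zero, then five touchpoints at expanding gaps
# (roughly 2, 5, 12, 31, and 78 days apart) instead of a single event.
for review_date in schedule_reviews(date(2026, 4, 20)):
    print(review_date.isoformat())
</code></pre>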
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Specific Challenge of AI Training</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">AI training has a characteristic that makes the knowing-doing gap particularly stubborn: the tools are improving faster than most organizations can train people to use them.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">By the time you&#8217;ve designed a Copilot training program, piloted it, refined it, and rolled it out to your first cohort, the tool has been updated, and some of what you taught is already outdated. This is not an excuse to skip training. It&#8217;s an argument for a fundamentally different training philosophy — one built around building adaptive capability rather than transferring fixed knowledge.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The organizations getting the most from their AI investments are not the ones that have trained everyone on the current feature set. They&#8217;re the ones that have built a culture of continuous learning and experimentation — where employees are expected to explore, encouraged to share what they find, and given the time and permission to build new habits incrementally rather than all at once.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That culture does not emerge from a workshop. It emerges from sustained leadership attention, consistent reinforcement structures, and a willingness to measure what actually matters — which is behavior change, not training completion.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">A Note for the People Who Train People</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If you are an L&amp;D professional, a learning consultant, a training facilitator, or a manager responsible for developing your team&#8217;s AI capability, the Deloitte data is both a diagnosis and a call to action.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Fifty-three percent of organizations are responding to the AI skills gap by running more training. If that training isn&#8217;t designed around behavior change — if it doesn&#8217;t include spaced repetition, role-specific application, manager reinforcement, and meaningful measurement — it will produce exactly the outcome that 140 years of learning science predicts: a brief increase in awareness, followed by a rapid return to old habits.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The organizations that crack this problem will not do it by training harder. They&#8217;ll do it by training smarter — designing for retention from the first day, building the reinforcement infrastructure before the first session, involving managers as partners rather than bystanders, and measuring outcomes rather than inputs.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That is harder to sell to a budget committee than a two-day workshop. It takes longer to design. It requires more ongoing investment. And it is the only approach that actually works.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Act serious. Treat it seriously. Get serious results — and this time, get results that last past Friday.</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/how-to-make-ai-learning-actually-stick/">How to Make AI Learning Actually Stick</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>#MVPbuzzChat 355 with Melissa Hale</title>
		<link>https://buckleyplanet.com/2026/04/mvpbuzzchat-355-with-melissa-hale/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Mon, 13 Apr 2026 21:40:07 +0000</pubDate>
				<category><![CDATA[Community]]></category>
		<category><![CDATA[MVP]]></category>
		<category><![CDATA[MVPbuzzChat]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19018</guid>

					<description><![CDATA[<p>For Episode 355 of the #MVPbuzzChat interview series, I spoke with Business Applications MVP Melissa Hale (/in/melissa-stephanie-hale/), a Dynamics 365 Consultant with Kerv, based in&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/mvpbuzzchat-355-with-melissa-hale/">#MVPbuzzChat 355 with Melissa Hale</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
<content:encoded><![CDATA[<p class=""><img decoding="async" loading="lazy" class="alignright" src="https://media.licdn.com/dms/image/v2/D4E03AQEGhjI2kT_aFg/profile-displayphoto-shrink_400_400/B4EZal0qIiHoAk-/0/1746538767544?e=2147483647&amp;v=beta&amp;t=p8iyDWCu3nojTRYn4Pffj5_k3lxRnIkN7AdMqEB6-m0" alt="Melissa Hale" width="149" height="149" />For Episode 355 of the <strong>#MVPbuzzChat</strong> <a href="https://youtube.com/playlist?list=PLUeUzxDasUMha9gPp8scMlGoqbw9U66pt&amp;feature=shared" target="_blank" rel="noopener">interview series</a>, I spoke with <a href="https://mvp.microsoft.com/en-US/MVP/profile/3dec4df2-c631-4c93-84a0-d9fcb5269d0a">Business Applications MVP</a><strong> Melissa Hale</strong> (<a href="https://www.linkedin.com/in/melissa-stephanie-hale/">/in/melissa-stephanie-hale/</a>), a Dynamics 365 Consultant with <a href="https://kerv.ai/">Kerv</a>, based in Liverpool, England. Melissa successfully transitioned from a career in molecular biology and parasitology research to becoming a leader in the Microsoft ecosystem. Drawing on her academic background and intensive software engineering training, she specializes in Power Platform development, SharePoint, and Canvas Apps to drive digital transformation. Melissa is deeply committed to ethical AI and community service, notably enhancing operations for the charity &#8220;Collaboration for Kids&#8221; through her technical expertise, and she focuses on delivering innovative, purpose-driven solutions for organizations across the public and private sectors.</p>
<p>If you would like to follow Melissa or reach out and connect with her, you can find her on <a href="https://www.linkedin.com/in/melissa-stephanie-hale/">LinkedIn</a>, <a href="https://sessionize.com/melissahale/">Sessionize</a>, or <a href="https://github.com/Mello245">GitHub</a>.</p>
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container">
<div class="video-container"><iframe loading="lazy" title="#MVPbuzzChat with Melissa Hale" width="500" height="281" src="https://www.youtube.com/embed/5i3_qL8vp9s?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<p>&nbsp;</p>
<p>Tune in every Monday for a new interview with a Microsoft MVP as part of the <a href="https://youtube.com/playlist?list=PLUeUzxDasUMha9gPp8scMlGoqbw9U66pt&amp;si=eLUT4J_igsOT-4KD" target="_blank" rel="noopener">#MVPbuzzChat series</a> and #MVPMonday. If you are a current (or former) MVP or RD and would like to participate in this series, please contact me!</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/mvpbuzzchat-355-with-melissa-hale/">#MVPbuzzChat 355 with Melissa Hale</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Blue Plate Special: Hot Hot Heat</title>
		<link>https://buckleyplanet.com/2026/04/blue-plate-special-hot-hot-heat/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Sun, 12 Apr 2026 01:35:31 +0000</pubDate>
				<category><![CDATA[Blue Plate Special]]></category>
		<category><![CDATA[Music]]></category>
		<category><![CDATA[Personal]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19014</guid>

					<description><![CDATA[<p>Hot Hot Heat emerged at the turn of the millennium like a flickering neon sign in a rain-slick alley—equal parts jittery urgency and dancefloor&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/blue-plate-special-hot-hot-heat/">Blue Plate Special: Hot Hot Heat</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
<content:encoded><![CDATA[<p><img decoding="async" loading="lazy" class="alignright" src="https://f4.bcbits.com/img/0000065616_25.jpg" alt="Music | Hot Hot Heat" width="243" height="158" /><a href="https://en.wikipedia.org/wiki/Hot_Hot_Heat">Hot Hot Heat</a> emerged at the turn of the millennium like a flickering neon sign in a rain-slick alley—equal parts jittery urgency and dancefloor shimmer. Formed in Victoria in 1999, the band’s early years felt like a collage of sharp angles and bright synth flashes, as if post-punk had been run through a kaleidoscope of thrift-store keyboards and late-night art school energy. Their breakout era—anchored by <em>Make Up the Breakdown</em>—painted scenes of nervous romance and urban disconnection in bold, staccato brushstrokes: jangling guitars like loose wires, vocals that ricocheted between urgency and vulnerability, and rhythms that suggested both a crowded basement show and a dance party teetering on chaos. Influenced by acts like The Cure and Elvis Costello and the Attractions, they translated new wave nostalgia into something restless and modern, helping define the early-2000s indie dance-punk surge.</p>
<p>As their career unfolded across five albums, the band’s sound stretched outward—less a straight line than a shifting mural, layering glossy pop instincts over their twitchy foundation. <em>Elevator</em> and <em>Happiness Ltd.</em> introduced brighter colors and sleeker textures, while <em>Future Breeds</em> leaned into experimentation, weaving disco pulses and electronic loops into their sonic architecture. Lineup changes and industry shifts softened their momentum in the 2010s, but even in quieter years, their aesthetic lingered: a world of blinking lights, awkward silhouettes, and emotional static captured in motion. Their 2016 self-titled release closed the initial chapter like a final, saturated frame before the lights went out, and their brief 2023 return—ending almost as quickly as it began—felt like a flash of that same old glow in a darkened room. Across their full arc, Hot Hot Heat created more than songs; they built a visual language of sound—kinetic, colorful, and slightly off-balance—that continues to echo through the indie landscape.</p>
<p>Some of my favorites from their catalog:</p>
<h3>No, Not Now &#8211; from the album <em>Make Up the Breakdown</em> (2002)</h3>
<div class="video-container"><iframe loading="lazy" title="Hot Hot Heat - No Not Now (Official Video)" width="500" height="281" src="https://www.youtube.com/embed/lAddEFBYO4g?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Middle of Nowhere &#8211; from the album <em>Elevator</em> (2005)</h3>
<div class="video-container"><iframe loading="lazy" title="Hot Hot Heat - Middle Of Nowhere (Video) (Standard Version)" width="500" height="375" src="https://www.youtube.com/embed/7Nk2iNjLYuk?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Harmonicas &amp; Tambourines &#8211; from the album <em>Happiness Ltd.</em> (2007)</h3>
<div class="video-container"><iframe loading="lazy" title="Hot Hot Heat - Harmonicas &amp; Tambourines (Video)" width="500" height="375" src="https://www.youtube.com/embed/0FNgThEm2E4?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Talk To Me, Dance with Me &#8211; from the album <em>Make Up the Breakdown</em> (2002)</h3>
<div class="video-container"><iframe loading="lazy" title="Hot Hot Heat - Talk To Me, Dance With Me [Remastered] (OFFICIAL VIDEO)" width="500" height="281" src="https://www.youtube.com/embed/S-Rmz-nOJn4?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Goodnight Goodnight &#8211; from the album <em>Elevator</em> (2005)</h3>
<div class="video-container"><iframe loading="lazy" title="Hot Hot Heat - Goodnight Goodnight (Video)" width="500" height="375" src="https://www.youtube.com/embed/Y-h98hFYusc?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Implosionatic &#8211; from the album <em>Future Breeds</em> (2010)</h3>
<div class="video-container"><iframe loading="lazy" title="Implosionatic" width="500" height="375" src="https://www.youtube.com/embed/hLUEiKnDF38?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Let Me In &#8211; from the album <em>Happiness Ltd.</em> (2007)</h3>
<div class="video-container"><iframe loading="lazy" title="Hot Hot Heat - Let Me In (Video)" width="500" height="375" src="https://www.youtube.com/embed/cwxR2_AtYtE?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Ladies and Gentlemen &#8211; from the album <em>Elevator</em> (2005)</h3>
<div class="video-container"><iframe loading="lazy" title="Ladies and Gentleman" width="500" height="375" src="https://www.youtube.com/embed/l6TR6_EaMf8?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>&nbsp;</p>
<h3>Magnitude &#8211; from the album <em>Hot Hot Heat</em> (2016)</h3>
<div class="video-container"><iframe loading="lazy" title="Hot Hot Heat - Magnitude" width="500" height="281" src="https://www.youtube.com/embed/3_gVB_4Gs3M?feature=oembed&#038;wmode=opaque" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
<p>The post <a href="https://buckleyplanet.com/2026/04/blue-plate-special-hot-hot-heat/">Blue Plate Special: Hot Hot Heat</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Entry-Level Problem Nobody in AI Wants to Talk About</title>
		<link>https://buckleyplanet.com/2026/04/the-entry-level-problem-nobody-in-ai-wants-to-talk-about/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 11:00:13 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[Change Management]]></category>
		<category><![CDATA[Copilot]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Future of Work]]></category>
		<category><![CDATA[AI adoption]]></category>
		<category><![CDATA[AI Training]]></category>
		<category><![CDATA[end user adoption]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19039</guid>

					<description><![CDATA[<p>Over the past two years, I have facilitated AI at Work workshops for organizations across a range of industries, including financial services, healthcare, manufacturing,&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/the-entry-level-problem-nobody-in-ai-wants-to-talk-about/">The Entry-Level Problem Nobody in AI Wants to Talk About</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Over the past two years, I have facilitated AI at Work workshops for organizations across a range of industries, including financial services, healthcare, manufacturing, logistics, professional services, and the public sector. The audiences vary. The questions vary. But one pattern shows up in almost every session, regardless of the room.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">When I ask participants to think about which tasks in their current role they&#8217;d most like AI to handle, the answers cluster quickly around the same category: the routine, repetitive, time-consuming work that fills the margins of their day. Data entry. Report generation. First-pass research. Meeting summaries. Basic customer inquiries. The stuff they do because someone has to, not because it requires their full capability.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That&#8217;s a reasonable and honest answer. And it points directly at the problem nobody in the AI conversation wants to sit with for too long.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That work — the routine, repetitive, entry-level work that experienced professionals are eager to hand off — is the same work that has historically served as the on-ramp to a career. It is how junior professionals learn the underlying mechanics of their field before they are trusted with higher-stakes responsibilities. It is how organizations build the talent pipeline that eventually becomes their senior leadership.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">And with the growth of AI, this repetitive yet career-developing work is disappearing.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What the Data Says</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Deloitte&#8217;s <a href="https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html">2026 State of AI in the Enterprise report</a> puts numbers to what many practitioners already feel on the ground. Within one year, 36% of surveyed companies expect at least 10% of their jobs to be fully automated. Looking out three years, that figure rises to 82%. The report specifically calls out entry-level roles, such as data entry, reconciliation, and first-level customer support, as the first targets for automation.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">What makes this finding particularly sharp is what Deloitte says next. These are often the starting point for longer careers. Organizations will likely need to develop alternate pathways for professional advancement, ensuring that employees have expertise that includes foundational processes.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That sentence is doing a lot of work. It acknowledges a serious structural problem while leaving the solution almost entirely undefined. And that gap between acknowledgment and action is exactly where most organizations are sitting right now.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The external data reinforces the concern. Entry-level hiring at the 15 largest tech firms fell 25% from 2023 to 2024, according to <a href="https://spectrum.ieee.org/ai-effect-entry-level-jobs">research from SignalFire</a>. Entry-level job postings dropped 15% year over year in 2025. The <a href="https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/">World Economic Forum&#8217;s Future of Jobs Report 2025</a> found that 40% of employers expect to reduce their workforce in areas where AI can automate tasks. And 49% of US Gen Z job seekers now believe AI has reduced the value of their college education in the job market.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">These are not fringe statistics. They reflect a structural shift that is already in motion.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What I See in the Workshop Room</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The data matters, but data alone doesn&#8217;t capture what this actually looks like from where I stand. So let me tell you what I see in the room.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">In introductory AI workshops, the participants who engage most enthusiastically with AI tools are almost always the more experienced professionals. They grasp quickly how to redirect the time they save toward higher-value work. A senior analyst who used to spend half her week pulling and formatting data can now spend that time on interpretation and recommendations. A project manager who used to draft status reports from scratch can now review and refine an AI-generated draft in a fraction of the time. In fact, interacting with two Senior PMs in a Copilot Cowork demo, the presentation hadn&#8217;t even finished before we were talking about how these tools could automate a number of daily and weekly workloads. The productivity gains are real and visible, and participants leave these sessions genuinely energized.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The junior participants — the analysts two years into their career, the coordinators fresh out of school, the associates still learning how their organization actually works — tend to engage differently. Some are enthusiastic. But a meaningful number are quietly anxious in a way they rarely say out loud directly. They are doing the math. The tasks AI is best at are the tasks they were hired to do. And they don&#8217;t yet have the experience base to pivot into the higher-order work that AI is supposedly freeing everyone up for.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">In intermediate workshops, where participants have usually been using AI tools for six months to a year, a different pattern emerges. I start to see the gap between those who have used AI to accelerate their learning and those who have used it to avoid learning. The first group has built something: genuine understanding of their domain, developed through AI-augmented practice. The second group has a dependency: they can produce outputs, but they can&#8217;t always explain them, defend them, or adapt when something goes wrong.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That dependency is the long-term version of the entry-level problem. And it doesn&#8217;t resolve itself on its own.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Ladder Rung Nobody Is Replacing</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Here is the framing I keep coming back to, and that I think deserves more direct attention than the industry is currently giving it:</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Entry-level work has always served two functions simultaneously. The obvious function is output: getting necessary tasks done. <strong>The less obvious but arguably more important function is formation</strong>: building the foundational knowledge, professional judgment, and contextual understanding that makes a junior person capable of becoming a senior person.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">A loan officer who spent three years manually reviewing credit applications develops intuitions about risk that are difficult to acquire any other way. A financial analyst who built hundreds of models from scratch understands what can go wrong in ways that an analyst who only reviewed AI-generated models doesn&#8217;t. A customer service rep who handled ten thousand calls builds a model of human behavior and customer psychology that informs everything they do later in their career.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">As <a href="https://www.rezi.ai/posts/entry-level-jobs-and-ai-2026-report">one analysis put it</a>, we risk creating a generation of architects who have never laid a brick. The concern isn&#8217;t that AI will make junior work impossible to find. It&#8217;s that if the current generation of early-career professionals never grapples with the foundational challenges of their field because AI solves those challenges automatically, they may never develop the deep intuition and tacit knowledge required for senior roles.</p>
<p>Here&#8217;s a parallel problem that most people will understand: an overreliance on smartphones has created a population of people who no longer remember phone numbers and can&#8217;t get anywhere without turn-by-turn directions. If we don&#8217;t use these skills, we lose them.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Most organizations have not thought carefully about how they are going to replace that formation function. They are automating the output without preserving the learning. And in three to five years, the consequences of that choice will be visible in ways they aren&#8217;t today.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What the Deloitte Report Misses</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The Deloitte report flags the problem clearly and then moves past it relatively quickly, recommending that organizations develop <em>alternate</em> pathways for professional advancement. That recommendation is correct as far as it goes. But it understates how difficult that actually is to design and how few organizations are actively working on it.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The report&#8217;s talent strategy data is revealing. When asked how they are adjusting talent strategies because of AI, 53% of organizations said they are educating the broader workforce to raise AI fluency. Only 33% said they are redesigning career paths and career mobility strategies. Only 19% said they are changing the balance between full-time, contract, and gig workers.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">In other words, most organizations are teaching people to use AI tools without fundamentally rethinking the structure of how careers are built. <strong>They are adjusting for the output function of entry-level work without addressing the formation function</strong>. And those are not the same problem.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What Responsible Organizations Are Starting to Do</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I want to be careful not to present this as a problem without any constructive direction, because there are organizations approaching it thoughtfully. They are the minority, but the patterns are worth naming.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The most effective approaches I&#8217;ve seen share a few characteristics. They are deliberate about preserving learning experiences even when AI could handle the task more efficiently. A junior analyst might still be asked to build a model manually before being allowed to work with AI-generated versions, not because the manual version is more efficient, but because the manual process builds understanding that the AI-assisted process doesn&#8217;t.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">They invest in structured mentorship that is explicitly designed to transfer the tacit knowledge that used to move laterally through junior work. Senior professionals aren&#8217;t just reviewing AI outputs, but are explicitly narrating their reasoning, surfacing the judgment calls that don&#8217;t show up in the final product, and creating structured opportunities for junior staff to practice that reasoning themselves.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">They are rethinking what entry-level roles look like when routine tasks are automated. Rather than simply eliminating those roles, they are asking what remains — what the human layer of that work looks like when the AI handles the repetitive core — and building role definitions around that remainder. The job changes shape rather than disappearing.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">And they are honest with their junior staff about what is happening and why, rather than leaving people to figure it out through ambient anxiety. That honesty alone is rarer than it should be.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">A Word to the People Who Train People</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If you design training programs, manage learning and development, or facilitate AI workshops, this problem sits at the center of your work, whether you&#8217;ve named it that way or not. [<strong>Update</strong>: Check out <a href="https://buckleyplanet.com/2026/04/how-to-make-ai-learning-actually-stick/">my article on making change &#8220;stick.&#8221;</a>]</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The workshops I run increasingly have to hold two things simultaneously: On one hand, I&#8217;m helping participants use AI tools more effectively — building genuine capability with the technology, reducing friction, expanding what people can accomplish. On the other hand, I&#8217;m watching carefully for the dependency pattern, the place where efficiency tips over into atrophy, where people start producing outputs they don&#8217;t fully understand.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That balance is not something you can set and forget. It requires intentional curriculum design, regular reassessment, and a willingness to slow down the productivity narrative long enough to ask whether learning is actually happening alongside the output gains.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The organizations that get this right will have a meaningful advantage in five years — not just because their AI deployments will be more effective, but because their people will understand what the AI is doing well enough to catch it when it goes wrong, adapt when the tools change, and apply judgment in situations the AI wasn&#8217;t designed for.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The organizations that get it wrong will have efficient processes and a talent pipeline that is shallower than they realize. They won&#8217;t notice until they need someone to step into a senior role and discover that the formation work was never done.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Let&#8217;s Talk About Your Training Program</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I run introductory and intermediate AI at Work workshops for organizations navigating exactly these questions: how to build genuine AI capability without creating dependency, how to help experienced professionals leverage AI&#8217;s efficiency gains, and how to think about the junior talent pipeline in a world where the traditional on-ramp is changing shape.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If your organization is working through any of this, I&#8217;m happy to share my current syllabi and talk through what might fit your context. Whether you&#8217;re looking for a virtual session, an in-person workshop, or something in between, feel free to reach out directly.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">You can connect with me through <a href="https://www.linkedin.com/in/christianbuckley/">LinkedIn</a>. I&#8217;d rather have the conversation early than have you discover the gaps after the fact.</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/the-entry-level-problem-nobody-in-ai-wants-to-talk-about/">The Entry-Level Problem Nobody in AI Wants to Talk About</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>What Is an LLM, and How Do You Pick the Right One for Your Business?</title>
		<link>https://buckleyplanet.com/2026/04/what-is-an-llm-and-how-do-you-pick-the-right-one-for-your-business/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 15:25:19 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[LLM]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19031</guid>

					<description><![CDATA[<p>If you&#8217;ve spent any time in the tech world over the last couple of years, you&#8217;ve heard the term &#8220;LLM&#8221; thrown around constantly. Large&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/what-is-an-llm-and-how-do-you-pick-the-right-one-for-your-business/">What Is an LLM, and How Do You Pick the Right One for Your Business?</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If you&#8217;ve spent any time in the tech world over the last couple of years, you&#8217;ve heard the term &#8220;LLM&#8221; thrown around constantly. Large language models power ChatGPT, Claude, Gemini, and pretty much every AI assistant that&#8217;s become part of our daily vocabulary. But what actually <em>is</em> an LLM, and more importantly, how do you figure out which one is right for your specific business problem?</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Let me break it down in plain terms:</p>
<h3 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>The short version of what an LLM actually does</strong></h3>
<p>&nbsp;</p>
<p><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/What-an-LLM-actually-does.webp"><img decoding="async" loading="lazy" class="aligncenter wp-image-19034 " src="https://buckleyplanet.com/wp-content/uploads/2026/04/What-an-LLM-actually-does-e1776360811865-1024x191.webp" alt="what an LLM actually does" width="713" height="133" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/What-an-LLM-actually-does-e1776360811865-1024x191.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/What-an-LLM-actually-does-e1776360811865-300x56.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/What-an-LLM-actually-does-e1776360811865-768x144.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/What-an-LLM-actually-does-e1776360811865-520x97.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/What-an-LLM-actually-does-e1776360811865.webp 1536w" sizes="auto, (max-width: 713px) 100vw, 713px" /></a></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">At its core, a large language model is a neural network trained to predict the next word (or more precisely, the next &#8220;token&#8221;) in a sequence. Feed it billions of web pages, books, articles, and code, and it learns the statistical patterns of human language: syntax, semantics, context, even some reasoning.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">When you ask it a question, it&#8217;s not &#8220;looking up&#8221; an answer the way a search engine does. It&#8217;s generating a response one token at a time, based on what it has learned is most likely to follow. That&#8217;s why these models can write, summarize, explain, and even code, but it&#8217;s also why they can confidently say something wrong. The model doesn&#8217;t <em>know</em> what&#8217;s true; it knows what sounds like a plausible continuation of text.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That distinction matters enormously when you&#8217;re thinking about applying LLMs to business processes.</p>
<h3 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Pre-training vs. post-training: why this affects you</strong></h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/Pre-training-vs.-post-training-e1776360913488.webp"><img decoding="async" loading="lazy" class="alignright  wp-image-19035" src="https://buckleyplanet.com/wp-content/uploads/2026/04/Pre-training-vs.-post-training-e1776360913488-1024x751.webp" alt="Pre-training vs. post-training" width="374" height="274" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/Pre-training-vs.-post-training-e1776360913488-1024x751.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/Pre-training-vs.-post-training-e1776360913488-300x220.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/Pre-training-vs.-post-training-e1776360913488-768x563.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/Pre-training-vs.-post-training-e1776360913488-520x382.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/Pre-training-vs.-post-training-e1776360913488.webp 1288w" sizes="auto, (max-width: 374px) 100vw, 374px" /></a>Most LLMs go through two phases. First, pre-training: the model ingests a massive corpus of internet text and learns to predict language patterns. This is where the bulk of what the model &#8220;knows&#8221; comes from. Think GPT-3 or the base LLaMA models, which are powerful but not yet useful as a business tool.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The second phase is post-training, which is where the model gets shaped into an assistant. Through a combination of human feedback, preference ranking, and fine-tuning on curated examples, the model learns to follow instructions, stay helpful, and avoid harmful outputs. This is what separates ChatGPT from a raw language model.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Why does this matter for you? Because the post-training process reflects choices made by the model&#8217;s developers: what&#8217;s acceptable, what&#8217;s prioritized, what use cases it&#8217;s been optimized for. Two models might have similar raw capability but behave very differently in a production environment.</p>
<h3 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Picking the right LLM for your use case</strong></h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/picking-the-right-LLM-e1776361151899.webp"><img decoding="async" loading="lazy" class="alignright  wp-image-19036" src="https://buckleyplanet.com/wp-content/uploads/2026/04/picking-the-right-LLM-e1776361151899-300x231.webp" alt="picking the right LLM" width="374" height="288" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/picking-the-right-LLM-e1776361151899-300x231.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/picking-the-right-LLM-e1776361151899-768x590.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/picking-the-right-LLM-e1776361151899-520x400.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/picking-the-right-LLM-e1776361151899.webp 963w" sizes="auto, (max-width: 374px) 100vw, 374px" /></a>Here&#8217;s where a lot of organizations go wrong: they evaluate models by running one or two informal tests, pick a winner, and move on. That&#8217;s how you end up with a tool that performs beautifully in the demo and falls apart in production.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">A few things worth considering:</p>
<ul>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>Benchmark results are context-dependent.</em> The same model can score very differently across evaluation frameworks. A 63% accuracy on one benchmark can drop to 48% on another, not because the model changed, but because the evaluation method changed. Don&#8217;t put too much stock in a single leaderboard ranking.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>Longer outputs aren&#8217;t better outputs.</em> Both humans and AI evaluation tools have a documented bias toward longer responses. When evaluating models, make sure you&#8217;re scoring for accuracy and usefulness, not verbosity. Some of the best business-ready responses are concise and precise, not sprawling.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>Training data cutoffs matter.</em> If your use case involves recent events, current market data, or anything time-sensitive, you need a model with web access or recent training data. A model with a knowledge cutoff from a year ago can&#8217;t help you analyze last quarter&#8217;s trends.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>Understand the difference between general and fine-tuned models.</em> A general-purpose LLM like Claude or GPT-4 is remarkably capable out of the box. But for highly specialized domains like legal analysis, medical documentation, or financial compliance, you may get better results from a fine-tuned model trained specifically on that domain&#8217;s language and conventions.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>Cost and inference time are real considerations.</em> The biggest model isn&#8217;t always the right model. For tasks like summarization, classification, or drafting routine communications, smaller, faster, cheaper models often perform comparably to the frontier models. Right-sizing your model to the task can dramatically change your economics.</li>
</ul>
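<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">To make the right-sizing point concrete, here is a back-of-the-envelope sketch. The per-token prices and the workload are placeholder assumptions, not any vendor&#8217;s actual rate card; plug in current numbers from your own provider.</p>
<pre><code># Back-of-the-envelope model cost comparison. Prices are placeholder
# assumptions for illustration -- check your provider's current rates.
FRONTIER_PER_M_TOKENS = 10.00  # assumed dollars per million tokens
SMALL_PER_M_TOKENS = 0.50      # assumed dollars per million tokens

# Assumed workload: 2,000 summarization calls per day, ~1,500 tokens each.
monthly_tokens = 2_000 * 30 * 1_500

frontier_cost = monthly_tokens / 1_000_000 * FRONTIER_PER_M_TOKENS
small_cost = monthly_tokens / 1_000_000 * SMALL_PER_M_TOKENS
print(f"frontier: ${frontier_cost:,.0f}/month, small: ${small_cost:,.0f}/month")
# Under these assumptions: $900 vs. $45 per month -- a 20x gap for a
# task where a smaller model may perform comparably.
</code></pre>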
<h3 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>The practical bottom line</strong></h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">After watching this space evolve for several years, the organizations getting the most value from LLMs are the ones doing three things consistently:</p>
<ol>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Defining specific, measurable success criteria before evaluating any model</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Testing against real data from their own domain rather than generic demos</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Building feedback loops so they know when model performance degrades over time</li>
</ol>
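<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Here is a minimal sketch of what those three practices can look like in code. Everything in it is a placeholder: <code>call_model</code> stands in for whatever SDK or API client you actually use, and the test cases would come from your own domain data rather than these toy strings.</p>
<pre><code># A minimal, provider-agnostic evaluation harness. call_model is a
# hypothetical stand-in -- wire it to your actual SDK or API client.
def call_model(model_name, prompt):
    raise NotImplementedError("connect this to your provider's client")

# Success criteria defined up front, using examples from your own domain.
test_cases = [
    {"prompt": "Summarize this support ticket: ...", "must_include": ["refund"]},
    {"prompt": "Classify this email: ...", "must_include": ["billing"]},
]

def score(model_name):
    """Fraction of domain test cases whose output contains the required terms."""
    passed = 0
    for case in test_cases:
        answer = call_model(model_name, case["prompt"]).lower()
        if all(term in answer for term in case["must_include"]):
            passed += 1
    return passed / len(test_cases)

# Re-run the same suite on a schedule and compare scores over time --
# that comparison is your feedback loop for catching degradation.
</code></pre>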
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The model isn&#8217;t the whole answer. How you prompt it, what data you give it access to, and how you evaluate its outputs matter just as much. But starting with the right LLM, matched to your actual use case, is step one.</p>
<p>If you&#8217;re looking to improve your AI skills and get more out of the platform you use, take a look at my course on <a href="https://smarterconsulting.mykajabi.com/offers/KnyupKH4/checkout">Practical Prompt Engineering</a>.</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/what-is-an-llm-and-how-do-you-pick-the-right-one-for-your-business/">What Is an LLM, and How Do You Pick the Right One for Your Business?</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Agentic AI Readiness Problem</title>
		<link>https://buckleyplanet.com/2026/04/the-agentic-ai-readiness-problem/</link>
		
		<dc:creator><![CDATA[Christian Buckley]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 19:07:45 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[Copilot]]></category>
		<category><![CDATA[Future of Work]]></category>
		<category><![CDATA[Governance]]></category>
		<category><![CDATA[Agentic governance]]></category>
		<category><![CDATA[AI governance]]></category>
		<guid isPermaLink="false">https://buckleyplanet.com/?p=19025</guid>

					<description><![CDATA[<p>There is a scene playing out quietly inside organizations right now that should concern anyone responsible for technology strategy. An AI leader at a&#46;&#46;&#46;</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/the-agentic-ai-readiness-problem/">The Agentic AI Readiness Problem</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">There is a scene playing out quietly inside organizations right now that should concern anyone responsible for technology strategy. An AI leader at a large company goes looking for an inventory of the AI tools and models currently running in production. What they find — or more precisely, what they don&#8217;t find — is the problem. Models have been deployed without formal oversight. Agents are running without audit trails. Nobody has a complete picture of what the AI systems in their organization are actually doing.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_1.webp"><img decoding="async" loading="lazy" class="alignright wp-image-19026" src="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_1-300x200.webp" alt="The Agentic AI Readiness Problem" width="284" height="189" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_1-300x200.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_1-1024x683.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_1-768x512.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_1-520x347.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_1.webp 1536w" sizes="auto, (max-width: 284px) 100vw, 284px" /></a>Deloitte documented this exact scenario in their <a href="https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html">2026 State of AI in the Enterprise report</a>. It isn&#8217;t an edge case. It&#8217;s a pattern.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Microsoft&#8217;s own telemetry from November 2025 confirms the scale of what&#8217;s already in motion. According to <a href="https://www.microsoft.com/en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/">Microsoft&#8217;s Cyber Pulse report</a>, 80% of Fortune 500 companies are already running active AI agents built with Copilot Studio or the Microsoft Agent Builder. These aren&#8217;t experiments sitting in a sandbox. They are agents in production, accessing data, triggering workflows, and making decisions at scale.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Now layer in what Deloitte&#8217;s survey of 3,235 enterprise leaders found: nearly three in four organizations plan to deploy agentic AI within two years. Today, 23% are already using it at least moderately. And only 21% of organizations report having a mature governance model for autonomous agents.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Do that math slowly. The technology is accelerating. The governance isn&#8217;t.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What Makes Agentic AI Different, and Why It Matters for Governance</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">For the past several years, the AI governance conversation has been mostly about outputs. Large language models generate text, images, code, and recommendations. The risk profile is real but bounded: a model says something wrong, something biased, something it shouldn&#8217;t. Humans review the output and decide what to do with it.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Agentic AI breaks that model entirely.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">An AI agent doesn&#8217;t just generate a recommendation and wait. It sets goals, reasons through multi-step tasks, accesses tools and systems, and takes action — often without a human in the loop at any individual step. The Deloitte report describes the shift clearly: agentic AI transforms AI from a source of information and insights into a system that performs. And that distinction changes everything about how governance needs to work.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Consider what this looks like in practice. A traditional AI assistant might draft an email response to a customer complaint. An AI agent might read the complaint, check the inventory system, process the refund, update the CRM record, and send the confirmation email — all autonomously, all within its granted permissions, all without a human clicking anything. The outcome might be exactly right. Or it might not be. And by the time you know which, the action has already been taken.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Microsoft&#8217;s Chief Marketing Officer for AI at Work, Jared Spataro, put it plainly in a recent blog post (<a class="underline underline underline-offset-2 decoration-1 decoration-current/40 hover:decoration-current focus:decoration-current" href="https://aibusiness.com/agentic-ai/microsoft-recommits-to-ai-agents">https://aibusiness.com/agentic-ai/microsoft-recommits-to-ai-agents</a>): &#8220;The speed of agent development and proliferation tells us customers see value, but without guardrails the pace of adoption turns into blind spots, diminished ROI and real security risk.&#8221; That is not language you typically hear from a vendor in launch mode. It reflects something more honest: even the companies building these tools are concerned about the governance gap.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Governance Gap Is Not a Surprise</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I want to be direct about something: the governance gap in agentic AI is not a surprise. It is the predictable consequence of how enterprise technology adoption has always worked, applied to a technology where the stakes of getting it wrong are higher than usual.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">New technology arrives. Early adopters deploy it. The deployment accelerates faster than the organizational infrastructure to manage it. At some point — sometimes after an incident, sometimes just as the scale becomes undeniable — organizations scramble to build the governance frameworks that should have been built first.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I&#8217;ve watched this cycle play out with SharePoint, with cloud migration, with mobile device management, and with social media. The technology always moves faster than the policy. The difference with agentic AI is that the technology isn&#8217;t just faster — it&#8217;s acting. A SharePoint site that lacks governance produces messy information architecture. An AI agent that lacks governance can make purchases, send communications, modify data, and trigger downstream workflows. The blast radius is categorically different.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/tech-forward/state-of-ai-trust-in-2026-shifting-to-the-agentic-era">McKinsey&#8217;s 2026 AI Trust Maturity Survey</a> found that while overall AI maturity scores are improving, governance and agentic AI controls lag behind data and technology capabilities across every region they studied. They describe it as a globally consistent governance gap. Gartner&#8217;s prediction is even starker: more than 40% of agentic AI projects will be canceled by 2027, largely because organizations are deploying agents faster than they can control, explain, or audit them.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The organizations that get caught by this aren&#8217;t reckless. They&#8217;re organizations that did exactly what organizations do: they moved fast on the deployment side and left the governance conversation for later. Later is arriving.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Three Governance Problems Nobody Wants to Talk About</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_3-problems.webp"><img decoding="async" loading="lazy" class="alignright wp-image-19027" src="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_3-problems-300x200.webp" alt="The Agentic AI Readiness Problem - the Three Problems" width="285" height="190" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_3-problems-300x200.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_3-problems-1024x683.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_3-problems-768x512.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_3-problems-520x347.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_3-problems.webp 1536w" sizes="auto, (max-width: 285px) 100vw, 285px" /></a>The Deloitte data identifies data privacy and security as the top AI risk concern at 73%, followed by legal, IP, and regulatory compliance at 50%, and governance capabilities and oversight at 46%. These numbers are telling — not because of what they include, but because of what they reveal about the gap between concern and action.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Organizations know these risks exist. Most haven&#8217;t built the infrastructure to manage them. Here&#8217;s where the specific failures tend to show up:</p>
<ol>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>The accountability problem.</strong> With traditional AI systems, accountability is relatively clear. A human made a decision with AI assistance, and the human is accountable for the outcome. With agentic AI, the chain gets murky fast. When an autonomous agent makes a decision that triggers a sequence of actions and one of those actions produces a bad outcome, who is accountable? The person who deployed the agent? The team that set its parameters? The vendor whose model is running underneath?Organizations need to answer this question before they deploy, not after something goes wrong. The answer has to be embedded in the governance framework itself: clear definitions of which decisions agents can make independently, which require human approval, and who is responsible for the outcome in either case. Most organizations haven&#8217;t had this conversation at sufficient depth.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>The visibility problem.</strong> You cannot govern what you cannot see. Agentic AI systems that operate across multiple tools, databases, and APIs create audit trails that are difficult to reconstruct after the fact. Microsoft&#8217;s own Cyber Pulse report is direct about this risk: agents can inherit permissions, access sensitive information, and generate outputs at scale — sometimes entirely outside the visibility of IT and security teams. They describe this as shadow AI, a more dangerous evolution of the shadow IT problem that IT departments have been managing for years.In December 2025, OWASP published the <a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/">Top 10 for Agentic Applications</a>, the first formal taxonomy of risks specific to autonomous AI agents. The list includes goal hijacking, tool misuse, identity abuse, memory poisoning, cascading failures, and rogue agents. The scenario they describe for tool misuse is instructive: an enterprise AI assistant with legitimate access to email, calendar, and CRM is compromised through a malicious instruction embedded in a routine email. The agent follows the hidden directive — accessing sensitive data, exfiltrating it via calendar events — while providing a benign response to the user. Standard data loss prevention tools don&#8217;t flag it because nothing anomalous happened at the network level. The agent did exactly what it was authorized to do. The problem was that nobody was watching what it was actually doing at the task level.</li>
<li class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>The scope creep problem.</strong> AI agents are typically deployed with a defined scope: a set of tasks they are authorized to perform and a set of systems they are authorized to access. In practice, that scope expands. Agents get granted additional permissions incrementally. New integrations get added. Edge cases arise that require broader access. Over time, the gap between what an agent is theoretically scoped to do and what it actually has access to do widens — and nobody has a complete picture of where that line is.This is the least discussed governance risk in agentic AI deployments, and possibly the most dangerous. An agent operating at the edge of its intended scope is an agent operating outside its governance framework, regardless of what the governance documentation says.</li>
</ol>
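<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Scope creep in particular lends itself to a simple, mechanical check: compare the permissions an agent was documented to need against what it has actually been granted, and surface the difference. The permission names and inventory shape below are illustrative assumptions for the sketch, not any vendor&#8217;s schema.</p>
<pre><code># Illustrative scope-drift check: documented scope vs. granted access.
# Permission names and structures are assumptions for this sketch.

DOCUMENTED_SCOPE = {
    "refund-agent": {"orders.read", "refunds.write", "email.send"},
}

# What the agent has accumulated through months of one-off grants.
GRANTED = {
    "refund-agent": {"orders.read", "refunds.write", "email.send",
                     "crm.write", "calendar.write", "files.read"},
}

def scope_drift(agent_id: str) -> set[str]:
    """Return permissions held beyond the agent's documented scope."""
    return GRANTED.get(agent_id, set()) - DOCUMENTED_SCOPE.get(agent_id, set())

for agent_id in GRANTED:
    if extra := scope_drift(agent_id):
        # Each hit is an agent operating outside its governance
        # framework, whatever the documentation says.
        print(f"{agent_id} exceeds documented scope: {sorted(extra)}")
</code></pre>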
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">What Microsoft Is Building, and What It Means for Your Organization</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">To its credit, Microsoft has moved from describing the governance problem to shipping tooling designed to address it. The stack is now named and defined, and organizations in the Microsoft ecosystem have more governance infrastructure available to them than most realize.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><a href="https://www.microsoft.com/en-us/security/blog/2026/03/09/secure-agentic-ai-for-your-frontier-transformation/"><strong>Agent 365</strong></a> goes generally available on May 1, 2026, at $15 per user per month. It is designed specifically as a centralized control plane for agents, giving IT, security, and business teams visibility into which agents are running across the enterprise, what they are doing, who has access to them, and what security risks exist. It integrates with Microsoft Purview for data security and compliance, and Microsoft Entra for agent identity management. For organizations already in the Microsoft 365 ecosystem, this is the governance layer that should be running before the next Copilot Studio agent ships.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><a href="https://opensource.microsoft.com/blog/2026/04/02/introducing-the-agent-governance-toolkit-open-source-runtime-security-for-ai-agents/"><strong>The Microsoft Agent Governance Toolkit</strong></a> is a newer and less widely discussed release, published just two weeks ago as an open-source project under the Microsoft organization with an MIT license. It is the first toolkit designed to address all ten OWASP agentic AI risks with deterministic, sub-millisecond policy enforcement, covering goal hijacking, tool misuse, identity abuse, supply chain risks, code execution, and memory poisoning. For organizations with technical teams building custom agents, this toolkit provides the runtime security governance layer that most deployments are currently missing.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]" style="padding-left: 40px;"><strong>Microsoft Purview and Entra</strong> round out the governance stack, providing real-time data security and compliance monitoring and agent identity management, respectively. Microsoft has also named the Foundry Control Plane as the governance layer for organizations building on Azure AI Foundry.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><a href="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_Microsoft-solutions.webp"><img decoding="async" loading="lazy" class="alignright  wp-image-19029" src="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_Microsoft-solutions-300x200.webp" alt="The Agentic AI Readiness Problem_Microsoft solutions" width="281" height="187" srcset="https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_Microsoft-solutions-300x200.webp 300w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_Microsoft-solutions-1024x683.webp 1024w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_Microsoft-solutions-768x512.webp 768w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_Microsoft-solutions-520x347.webp 520w, https://buckleyplanet.com/wp-content/uploads/2026/04/The-Agentic-AI-Readiness-Problem_Microsoft-solutions.webp 1536w" sizes="auto, (max-width: 281px) 100vw, 281px" /></a>The broader picture here is significant. Microsoft was named a Leader in the <a href="https://www.microsoft.com/en-us/security/blog/2026/01/14/microsoft-named-a-leader-in-idc-marketscape-for-unified-ai-governance-platforms/">2025-2026 IDC MarketScape for Worldwide Unified AI Governance Platforms</a>, and the company is now positioning governance not as a feature but as a foundational architecture. As one analyst put it in a <a href="https://www.uctoday.com/collaboration/microsoft-governance-ai-security-enterprise-analysis/">post-Ignite 2025 assessment</a>: organizations will be judged not on what they promise but on what they can show — audit-ready, risk-controlled, automated agents operating within governed estates.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The tooling exists. The question is whether organizations are using it.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Governance as Prerequisite, Not Afterthought</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Here is the argument I&#8217;d make to any technology leader preparing to expand their agentic AI deployment: governance is not a compliance exercise you complete before you can move forward. It is the prerequisite for moving forward at scale.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The Deloitte report makes this point explicitly. Organizations where senior leadership actively shapes AI governance achieve significantly greater business value than those delegating it to technical teams. Companies seeing the most success with agentic AI are taking a measured approach: starting with lower-risk use cases, building governance capabilities, and scaling deliberately. The ones rushing to deploy agents widely before establishing these foundations are the ones showing up in Gartner&#8217;s 40% cancellation projection.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">What does governance as a prerequisite actually look like in practice? A few things that are non-negotiable before any meaningful agentic deployment.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">A complete inventory of what&#8217;s running. You cannot govern an AI landscape you haven&#8217;t mapped. Before expanding agentic deployments, organizations need a clear picture of every agent currently active, every system it has access to, and every action it is authorized to take. If that picture doesn&#8217;t exist, building it is step one. For Microsoft organizations, Agent 365 and the Power Platform admin center are the starting points for this inventory.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Defined autonomy boundaries. Not every decision an agent makes carries the same risk. A scheduling agent rescheduling a meeting operates in a different risk tier than a procurement agent committing to a vendor contract. Organizations need explicit, documented thresholds — which decisions agents make independently, which trigger a human review, and which require explicit approval before action. These boundaries need to be enforced technically, not just documented in a policy.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Cross-functional governance ownership. The Deloitte report is clear that delegating AI governance entirely to technical teams is a predictor of lower business value. Effective governance requires technology, legal, compliance, and business unit leadership at the table — not as a committee that meets quarterly, but as an active structure with defined responsibilities and real authority. The conversation about what an agent is allowed to do is not purely technical.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Real-time monitoring, not periodic audits. Agentic AI systems change over time. Their access expands. Their behavior evolves. Static audits that check governance compliance at a point in time are insufficient for systems that operate continuously. Organizations need monitoring infrastructure that tracks agent behavior in real time, flags anomalies when agents act outside their expected patterns, and creates audit trails that are reviewable without reconstructing a sequence of events after the fact. The Microsoft 2026 Release Wave 1 updates to Power Platform specifically add real-time risk assessment in Copilot Studio and AI-powered governance agents that automate tenant monitoring and remediation — capabilities that weren&#8217;t available six months ago.</p>
<h3 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Honest Assessment</h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Twenty-one percent of organizations have mature governance for autonomous agents. Seventy-four percent plan to be using agentic AI at least moderately within two years. That is a large number of organizations about to deploy a technology they don&#8217;t fully know how to govern.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That&#8217;s not a reason to stop. Agentic AI is genuinely transformative, and the organizations that figure out how to deploy it responsibly will have meaningful advantages over those that don&#8217;t. But responsible deployment requires being honest about where the gaps are — and the governance gap in agentic AI is significant, documented, and not closing as fast as the deployment pace is accelerating.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">For organizations operating in the Microsoft ecosystem specifically, the timing is actually favorable. Agent 365 goes GA on May 1. The open-source Agent Governance Toolkit is available now. Purview and Entra have agent-specific governance capabilities shipping through the spring. The infrastructure to govern agentic AI responsibly within Microsoft 365 exists in a way it didn&#8217;t a year ago.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">What doesn&#8217;t exist is the organizational will to use it before something goes wrong rather than after. That&#8217;s the conversation worth having right now — before the next agent ships, not after it&#8217;s already running in production without anyone watching.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The question worth asking in your organization right now is not &#8220;how quickly can we deploy agents?&#8221; It&#8217;s &#8220;do we know what our agents are doing, do we have the right people accountable for their actions, and do we have the visibility to know when something goes wrong before it becomes a serious problem?&#8221;</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If the answer to any of those is no — or even &#8220;we&#8217;re working on it&#8221; — the governance conversation needs to happen before the next deployment does.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Act serious. Treat it seriously. Get serious results. And this time, make sure someone is watching.</p>
<p>The post <a href="https://buckleyplanet.com/2026/04/the-agentic-ai-readiness-problem/">The Agentic AI Readiness Problem</a> appeared first on <a href="https://buckleyplanet.com">buckleyPLANET</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
