<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ProjectCrunch &#8211; Management, Technology, and Beyond</title>
	<atom:link href="http://projectcrunch.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://projectcrunch.com</link>
	<description>Management, Technology, and Beyond</description>
	<lastBuildDate>Sat, 11 Apr 2026 18:02:46 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://projectcrunch.com/wp-content/uploads/2021/08/projectcrunch.png</url>
	<title>ProjectCrunch &#8211; Management, Technology, and Beyond</title>
	<link>https://projectcrunch.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Feature-Based Project Tracking: How to Regain Control in Distressed MtO Projects</title>
		<link>https://projectcrunch.com/feature-based-project-tracking-how-to-regain-control-in-distressed-mto-projects/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Sun, 22 Mar 2026 19:19:28 +0000</pubDate>
				<category><![CDATA[Management]]></category>
		<category><![CDATA[CORE SPICE]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3721</guid>

					<description><![CDATA[Made-to-Order (MtO) projects are fundamentally different from R&#38;D. A customer defines the requirements. The scope is contractually fixed. The lifecycle follows a V-model (at least in regulatory-relevant projects) with formal verification at every level of <a class="mh-excerpt-more" href="https://projectcrunch.com/feature-based-project-tracking-how-to-regain-control-in-distressed-mto-projects/" title="Feature-Based Project Tracking: How to Regain Control in Distressed MtO Projects">Read...</a>]]></description>
										<content:encoded><![CDATA[
<p>Made-to-Order (MtO) projects are fundamentally different from R&amp;D. A customer defines the requirements. The scope is contractually fixed. The lifecycle follows a V-model (at least in regulatory-relevant projects) with formal verification at every level of V. In MtO projects, the team must deliver a specific product—not a prototype or a proof of concept. It is usually a production-ready system that meets safety, security, and compliance standards.</p>



<p>This expectation applies to automotive, but equally to aviation, medical devices, railway systems, and any domain where complex, safety-relevant systems are built to customer specification.</p>



<p>When MtO projects get into trouble, the symptoms are remarkably consistent:</p>



<ul class="wp-block-list">
<li><strong>Work-item explosion. </strong>A single customer requirement spawns dozens of sub-tasks across disciplines — system requirements, architecture, software requirements, software design, implementation, integration, and verification. Responsibilities are often scattered across different engineers and tools. A project that started with 50 customer requirements now has 1,200 work items, and nobody can tell which customer feature is actually “done.”</li>



<li><strong>Silo thinking. </strong>System engineers don’t talk to software engineers. Software managers don’t talk to project leads. Line managers are randomly involved in the project work. The test team discovers what was built only when integration starts. Suppliers deliver their subsystems in isolation and claim everything is “on track.” The customer is kept at arm’s length. Each group optimizes for its own deliverables, not for the product. “I am not responsible for that” becomes a common refrain.</li>



<li><strong>Lack of sense of urgency. </strong>Once established, time buffers are consumed by other activities. The planning fallacy—well documented by Kahneman and Tversky—leads to optimistic schedules that drift, leaving deadlines impossible to meet.</li>



<li><strong>The “trust me” illusion. </strong>Management asks: “Are we on track?” The team answers: “Yes, we’ve got this.” That is a pointless ritual. No team or supplier will ever volunteer “No, we are failing.” Status must be <strong>measured</strong>, not <strong>asked</strong> for. Verbal assurances are not data. If control cannot be demonstrated with hard data, it does not exist.</li>
</ul>



<p>The consequence: project status becomes opaque, customer escalations multiply, and team morale collapses. The project is “in trouble,” and nobody noticed until it was too late.</p>



<h2 class="wp-block-heading">The Feature-Based Approach: What Is a Feature?</h2>



<p>The definition of project scope can be structured in numerous ways (see WBS structures defined by the Project Management Institute, PMI). In recent years, many customers in the automotive industry have adopted the concept of a “feature” to define project scope.</p>



<p>While there are countless ways to define what a “feature” is, for our purposes, a feature is a well-defined chunk of customer-relevant scope. It is a deliverable slice of value that the customer or a standard demands, and whose completion can be objectively verified.</p>



<p>Features can be functional or non-functional:</p>



<ul class="wp-block-list">
<li><strong>Functional: </strong>for example, “Active Steering Safety Manager,” “Bootloader Update Mechanism,” “CAN Communication Stack.”</li>



<li><strong>Non-functional: </strong>for example: “Startup Time &lt; 200ms,” “ASIL-D Coverage,” “OBD Compliance,” “Cybersecurity Compliance Certification.”</li>
</ul>



<p>The key criterion: a feature represents a meaningful unit of delivery. It aggregates all related work across the V-model—requirements, architecture, implementation, integration, and verification—regardless of whether the underlying work is system-level, software-level, or cross-disciplinary.</p>



<h2 class="wp-block-heading">One Feature, One Owner</h2>



<p>Feature ownership is a proven way to structure a project around technical expectations. Each feature has exactly one <strong>feature owner</strong> — a person responsible from inception to final verification. The feature owner is not “just software” or “just systems.” Feature owners own the outcome across disciplines—from left to right in the V. That directly implements the CORE SPICE principle of end-to-end responsibility: one person, one feature, from start to finish. This does not mean the feature owner must do all the work. Rather, the feature approach creates a matrix of responsibilities, and feature owners must coordinate with one another to prevent redundancies. A feature owner usually does not know every detail across the V, and does not need to; the approach is strictly merit-based, with the feature team following the most direct path to delivery. The result is a clear end-to-end view that closes the responsibility gap.</p>



<p>This is fundamentally different from tracking at the work-item level, where a requirement passes through five or six different hands, each responsible for only their discipline’s slice. In the feature-based model, the handover points — where things typically get stuck or lost — are eliminated.</p>



<h2 class="wp-block-heading">Advantages of the Feature-Driven Approach</h2>



<p>The feature-driven approach helps structure MtO projects systematically.</p>



<ul class="wp-block-list">
<li><strong>It makes status measurable. </strong>Stakeholders see “Feature X: done” or “Feature X: blocked on integration test.” Not “247 work items, 63% closed.” The burndown speaks for itself — no more “trust me.”</li>



<li><strong>It forces prioritization. </strong>Features can be ranked, sequenced into releases, and traded off against deadlines. You cannot meaningfully prioritize 1,200 low-level work items, but you can prioritize 80 features.</li>
</ul>



<p><strong>The practical setup: </strong>The feature list is derived from customer requirements and applicable standards. In a typical complex MtO project, this results in 50 to 250 features. Each feature is mapped to a release. Status is tracked daily at the feature level—not at the sub-task level.</p>
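<p>As an illustration only, the practical setup above could be modeled with a minimal feature record. The field names, status values, and helper below are assumptions for this sketch, not part of any prescribed tooling:</p>

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in progress"
    BLOCKED = "blocked"
    DONE = "done"          # verified, not merely implemented

@dataclass
class Feature:
    name: str              # e.g. "OBD Compliance"
    owner: str             # exactly one feature owner, end to end
    release: str           # each feature is mapped to a release
    status: Status = Status.OPEN

features = [
    Feature("CAN Communication Stack", "J. Doe", "R1", Status.DONE),
    Feature("OBD Compliance", "A. Smith", "R1", Status.BLOCKED),
]

# Daily status at the feature level: "Feature X: blocked",
# not "63% of 1,200 work items closed".
blocked = [f.name for f in features if f.status is Status.BLOCKED]
print(blocked)  # ['OBD Compliance']
```

<p>The point of the sketch is the granularity: one record per feature, one owner per record, and a status that answers the stakeholder question directly.</p>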



<h2 class="wp-block-heading">Radical Transparency: No Silos, No Exceptions</h2>



<p>Feature-based tracking only works if the entire operation is 100% transparent to everyone on the project team. This is not optional. It is a precondition.</p>



<p>“Everyone” means exactly that:</p>



<ul class="wp-block-list">
<li><strong>The core team: </strong>system engineers, software engineers, test engineers, integration leads — everyone sees every feature, every status, every blocker.</li>



<li><strong>Suppliers: </strong>If a supplier delivers custom-built systems or software components, they are part of the team. They participate in the daily Sync. They see the burndown. They report on the same features, in the same tool, with the same status definitions, and a clear “definition of done.” A supplier that delivers features in isolation and shows up at integration with “surprises” is a risk.</li>



<li><strong>The customer: </strong>The customer should see the feature status and the burndown. Hiding problems from the customer does not make them disappear—it makes the escalation worse when they inevitably surface.</li>
</ul>



<h2 class="wp-block-heading">No Information Asymmetry</h2>



<p>In distressed projects, information asymmetry is a root cause of failure. When the supplier knows something the project lead does not, when the test team sees a problem that the customer has not been told about, when a feature owner is stuck but does not want to admit it—these are the moments where projects silently slide into crisis.</p>



<p>The feature chart, the burndown chart, and the daily Sync must be the single source of truth. If it is not visible to everybody, it does not exist. If a supplier’s feature is red, everyone knows it is red—the supplier, the project lead, and the customer. That is not confrontation. That is professionalism.</p>



<h2 class="wp-block-heading">Risk Minimization Through Tight Tracking</h2>



<p>Traditional <strong>risk management</strong> tends to be bureaucratic and reactive: risk registers, probability/impact matrices, and quarterly reviews that are largely pointless and wasteful. The risks sit in a spreadsheet that nobody reads until the steering committee convenes.</p>



<p><strong>Risk minimization</strong>, on the other hand, is the opposite: the goal is to make risks irrelevant by delivering early, testing often, and closing gaps daily. It is proactive and embedded in the daily workflow. This essential aspect is articulated as CORE SPICE Principle #7.</p>



<h2 class="wp-block-heading">The Burndown Baseline</h2>



<p>Using a burndown (or, alternatively, burn-up) chart is a well-proven risk-reduction, stakeholder-reporting, and project-tracking strategy. When properly set up, it offers near-real-time visibility into release progress, detects delays, and builds trust in the project delivery timeline.</p>



<p>At the start of a release (or a turnaround), the team establishes a baseline: the total number of items that must be completed for the release to ship. This includes features and critical bugs. During the release planning session, critical defects are prioritized alongside features — because a release is not “done” when all features are implemented. It is done when all features are verified and all critical defects are resolved.</p>



<p>The baseline is not a straight line. Real projects follow an S-curve: slow at the start (ramp-up, architecture, design), steep in the middle (implementation at peak velocity), and tapering at the end (integration, verification, final fixes). A straight-line baseline is a textbook fiction that misleads the team into thinking they are behind when they are still ramping up. The S-curve reflects how value is actually delivered.</p>



<p>From that point, the team tracks daily how many items remain versus how many should remain based on the baseline. The burndown chart makes the answer to “are we on track?” visible to everyone, every day, without anyone having to ask.</p>
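<p>A minimal sketch of such a baseline, assuming a logistic function as the S-curve (one common choice; no particular formula is prescribed here, and the steepness parameter is an assumption):</p>

```python
import math

def s_curve_baseline(total_items: int, total_days: int, steepness: float = 8.0):
    """Planned items remaining per day along a logistic S-curve:
    slow ramp-up, steep middle, tapering end."""
    # Logistic values at t=0 and t=1, used to rescale the curve
    # so completion runs exactly from 0% to 100%.
    lo = 1 / (1 + math.exp(steepness * 0.5))
    hi = 1 / (1 + math.exp(-steepness * 0.5))
    baseline = []
    for day in range(total_days + 1):
        t = day / total_days                                # normalized time 0..1
        done = 1 / (1 + math.exp(-steepness * (t - 0.5)))   # raw logistic value
        frac = (done - lo) / (hi - lo)                      # completion fraction 0..1
        baseline.append(round(total_items * (1 - frac)))    # planned items remaining
    return baseline

# Example: 80 features + critical bugs over a 50-working-day release.
plan = s_curve_baseline(80, 50)
# The daily question "are we on track?" becomes: actual_remaining - plan[day].
```

<p>Comparing the actual remaining count against <code>plan[day]</code> each morning gives the gap the team discusses, without anyone having to ask for status.</p>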


<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><a href="https://projectcrunch.com/wp-content/uploads/2026/03/FeatureBurnDown-1.png"><img fetchpriority="high" decoding="async" width="1024" height="564" src="https://projectcrunch.com/wp-content/uploads/2026/03/FeatureBurnDown-1-1024x564.png" alt="" class="wp-image-3730" style="aspect-ratio:1.8156303826539228;width:719px;height:auto" srcset="https://projectcrunch.com/wp-content/uploads/2026/03/FeatureBurnDown-1-1024x564.png 1024w, https://projectcrunch.com/wp-content/uploads/2026/03/FeatureBurnDown-1-300x165.png 300w, https://projectcrunch.com/wp-content/uploads/2026/03/FeatureBurnDown-1-768x423.png 768w, https://projectcrunch.com/wp-content/uploads/2026/03/FeatureBurnDown-1.png 1083w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></figure>
</div>


<p><em>Figure 1: Feature + critical bug burndown. The baseline (dashed S-curve) shows the plan; the actual line (solid) shows reality. Both start at the same point. The actual line falls behind the planned S-curve through W1–W5. After corrective measures at W5, velocity increases, and the team converges back to the planned end state.</em></p>



<p>The chart illustrates a typical pattern: the first weeks show sluggish progress (the team is still reorganizing, silos are being broken down), followed by acceleration once the feature-based approach takes hold. The gap between baseline and actual at any point is the conversation starter: not “are we on track?” but “what do we need to close this gap by next week?”</p>



<h2 class="wp-block-heading">Advantages of Feature Burndown Charts</h2>



<p>Burndown charts offer many advantages:</p>



<ul class="wp-block-list">
<li>They offer daily visibility, which means a daily opportunity to intervene—detect, assess, and plan specific corrective actions.</li>



<li>They help detect small deviations before they compound.</li>



<li>They prevent “surprises” at milestone reviews.</li>



<li>They help maintain the “sense of urgency.” If the line is flat, the team sees it. If the line is steep, the team sees that too.</li>
</ul>



<h2 class="wp-block-heading">The Daily Sync: The Heartbeat of the Turnaround</h2>



<h4 class="wp-block-heading">Purpose</h4>



<p>A “Sync” (also known as a “standup”) is a 15-minute daily check-in. Syncs are not status meetings or reporting ceremonies. They are coordination sessions focused on feature flow and blockers.</p>



<h4 class="wp-block-heading">Format</h4>



<ul class="wp-block-list">
<li>Which features have moved since yesterday?</li>



<li>Which features are blocked?</li>



<li>What is needed to unblock the stuck features or bugs?</li>
</ul>



<p>What matters is whether the feature or bug is closer to “done.” The Sync is attended by feature owners, the project lead, leading architects, verification leads, and the Team Capability Coach (TCC). Suppliers participate on equal terms.</p>



<h4 class="wp-block-heading">What Makes This Different from a Scrum Sync</h4>



<p>The unfortunate experience with “real-life Scrum” is that Scrum tends to be—ironically enough—a heavyweight, cadence-driven, inflexible instrument facilitated by a “scrum master” who often lacks the authority to ensure flawless execution of feature implementation.</p>



<p>As opposed to Scrum, the CORE SPICE approach proposes a release-based, incremental strategy:</p>



<p><strong>No sprints. </strong>MtO projects plan releases, not sprints. This is a Kanban-style, release-based cadence.</p>



<p><strong>No theater. </strong>No “what I did yesterday / what I’ll do today” rituals. The burndown chart is the visual anchor: everyone sees the same picture, every day.</p>



<p><strong>Suppliers in the room. </strong>A supplier that delivers features for this release participates in the Sync like any other team member. No separate “supplier sync” behind closed doors.</p>



<p>The TCC role can be summarized as follows:</p>



<p><strong>Challenge the delay: </strong>“This feature hasn’t moved in three days. What is the real blocker?”</p>



<p><strong>Facilitate organizational positive attitude: </strong>Help the team resolve cross-functional dependencies on the spot.</p>



<p><strong>Sense of urgency:</strong> Maintain urgency without creating panic. The TCC ensures the team stays focused without burning out.</p>



<h2 class="wp-block-heading">Anti-Patterns</h2>



<p>The “daily sync” must be crisp, data-driven, and purposeful. The following fallacies should be prevented:</p>



<ul class="wp-block-list">
<li>Turning the Sync into a 45-minute problem-solving session. Whenever needed, dedicated ad hoc working groups must be <strong>spun off</strong> after the meeting. A good practice is to block time right after the Sync, so that open questions raised there can be worked out immediately in a small expert group.</li>



<li>Reporting up instead of coordinating. All features and bugs should already be updated by the feature owners <em>before</em> the Sync.</li>



<li>Skipping days because “nothing changed”—the cadence is the discipline.</li>
</ul>



<h2 class="wp-block-heading">The Psychology of “Closing Features”: The Dopamine Effect</h2>



<p>Feature-based tracking is not just a visibility and progress control mechanism. It is a motivation mechanism.</p>



<p>Every closed feature is a visible, undeniable achievement. The feature owner and the entire team can see it on the burndown chart: one more item moved to “Done.” It triggers a psychological reward—a dopamine response that reinforces the behavior that elicited it.</p>



<p>Feature and bug closure is fundamentally different from closing low-level sub-tasks. Nobody celebrates completing one of twelve software design reviews. But when “OBD Compliance” moves to “Done,” the team knows that a real, customer-visible chunk of work is finished. The effect compounds: each closed feature raises confidence and energy for the next one.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><a href="https://projectcrunch.com/wp-content/uploads/2026/03/MotivationalCycle.png"><img decoding="async" width="750" height="750" src="https://projectcrunch.com/wp-content/uploads/2026/03/MotivationalCycle.png" alt="" class="wp-image-3732" style="width:572px;height:auto" srcset="https://projectcrunch.com/wp-content/uploads/2026/03/MotivationalCycle.png 750w, https://projectcrunch.com/wp-content/uploads/2026/03/MotivationalCycle-300x300.png 300w, https://projectcrunch.com/wp-content/uploads/2026/03/MotivationalCycle-150x150.png 150w, https://projectcrunch.com/wp-content/uploads/2026/03/MotivationalCycle-70x70.png 70w" sizes="(max-width: 750px) 100vw, 750px" /></a></figure>
</div>


<p><em>Figure 2: The positive feedback loop. Delivering a feature leads to recognition, triggering a psychological reward that fuels motivation, which drives the next delivery.</em></p>



<h2 class="wp-block-heading">Recognition Without Ceremony</h2>



<p>Feature closures should be acknowledged in the daily Sync—briefly, factually, but visibly. It is paying respect to the feature owner, who often had to invest a lot of “blood, sweat, and tears” to deliver the feature on time. The team that delivers should be recognized. Not with extensive celebrations, but with a simple acknowledgment—something along the lines of “Feature X is done. Well done, [name].”</p>



<p>That creates a culture where finishing things is valued—not just starting them. Over time, the burndown chart itself becomes a source of team pride: a visual record of what has been accomplished.</p>



<h2 class="wp-block-heading">Why Celebrating Feature Closure Matters</h2>



<p>Distressed teams are often frustrated or even demoralized. They have been in “crisis mode” for a long time, sometimes for months, working long hours with no visible sense of progress. The open-item count goes up. The backlog grows. Nobody feels like they are winning.</p>



<p>Feature-based tracking breaks the monolith into achievable milestones. Each closed feature is proof that progress is real. The positive psychological feedback loop—deliver, recognition, dopamine, motivation, deliver more—is the antidote to the vicious cycle of despair that distressed projects often fall into.</p>



<h2 class="wp-block-heading">CORE SPICE Measures</h2>



<p>The feature-based tracking approach does not work in isolation. It requires a foundation of five CORE SPICE measures already in place. For detailed descriptions, see <a href="https://projectcrunch.com/core-spice-coaching-concept/">“CORE SPICE Coaching Concept”</a>.</p>



<ul class="wp-block-list">
<li><strong>No task left behind. </strong>Every identified risk or gap becomes an owned task. If you create an issue, you will eventually deal with its outcome—this negative feedback loop prevents backlog mushrooming.</li>



<li><strong>Maintain the sense of urgency. </strong>Every MtO turnaround project is a task force. The TCC ensures high urgency is upheld as long as substantial risks remain unmitigated.</li>



<li><strong>End-to-end responsibility. </strong>The feature owner concept directly implements this: one person responsible from inception to final verification, cross-functional, not discipline-bound.</li>



<li><strong>Constantly assess the team. </strong>The project lead and TCC monitor whether everyone contributes. Ineffective team members must be swiftly replaced—keeping them demotivates the rest.</li>



<li><strong>Automate everything. </strong>Feature tracking itself should be automated: status pulled from the issue management system, burndown charts generated without manual effort. With LLMs gaining traction, it should be a no-brainer.</li>
</ul>
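<p>As a hedged sketch of the “automate everything” measure, the function below turns an issue-tracker export into the daily remaining count that feeds the burndown chart. The payload shape (a list of items with a close date or <code>None</code>) and field names are assumptions; a real setup would pull this from the team’s actual issue management system:</p>

```python
from datetime import date

def remaining_per_day(issues: list[dict], days: list[date]) -> list[int]:
    """For each day, count features/critical bugs not yet closed.
    Each issue is assumed to carry a 'closed' field: a close date or None."""
    counts = []
    for day in days:
        open_count = sum(
            1 for i in issues
            if i["closed"] is None or i["closed"] > day  # still open on that day
        )
        counts.append(open_count)
    return counts

# Hypothetical export: two features and one critical bug.
issues = [
    {"key": "FEAT-1", "closed": date(2026, 3, 2)},
    {"key": "FEAT-2", "closed": None},
    {"key": "BUG-7",  "closed": date(2026, 3, 4)},
]
days = [date(2026, 3, 1), date(2026, 3, 3), date(2026, 3, 5)]
print(remaining_per_day(issues, days))  # [3, 2, 1]
```

<p>Regenerating this count automatically every day removes the manual reporting step entirely: the chart is always current, and nobody has to be asked.</p>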



<h2 class="wp-block-heading">Putting It All Together</h2>



<p>The elements described in this article are not independent techniques to be adopted piecemeal. They form a system. When all are in place, it creates a self-reinforcing cycle:</p>



<ul class="wp-block-list">
<li>Feature-based tracking provides <strong>visibility.</strong></li>



<li>Radical transparency provides <strong>trust.</strong></li>



<li>The burndown baseline (features + critical bugs) provides <strong>accountability.</strong></li>



<li>The daily Sync provides <strong>cadence.</strong></li>



<li>Feature closures provide <strong>motivation.</strong></li>



<li>CORE SPICE measures provide the <strong>cultural foundation.</strong></li>
</ul>



<p><strong>The cycle: Visibility → Urgency → Action → Progress → Recognition → Motivation → More progress</strong></p>



<p>Feature-based tracking is not a methodology. It is a pragmatic tool that makes existing methodologies work in distressed MtO environments. The key insight is to track what the customer or the standard demands — features, both functional and non-functional — not what the process generates. Include critical bugs to reflect reality, not just the plan. Make everything transparent to everyone — the core team, the suppliers, and the customer.</p>



<p><strong>The “trust me, I have this under control” era is over. The burndown is the answer. If the line is flat, there is no control. If the line is steep, the team is winning—and everyone can see it.</strong></p>



<h3 class="wp-block-heading">References</h3>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<ul class="wp-block-list">
<li><strong>The Right Genes for Your Project</strong> — MtO vs. R&amp;D project typology: <a href="https://projectcrunch.com/the-right-genes-for-your-project/">projectcrunch.com/the-right-genes-for-your-project/</a></li>
</ul>



<ul class="wp-block-list">
<li><strong>Unlock Efficiency with CORE SPICE</strong> — The 12 CORE SPICE principles: <a href="https://projectcrunch.com/unlock-efficiency-with-core-spice/">projectcrunch.com/unlock-efficiency-with-core-spice/</a></li>
</ul>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>LLMs Are the New Yahoo: Why the Agentic AI Implosion Is Coming — And Who Will Survive It</title>
		<link>https://projectcrunch.com/llms-are-the-new-yahoo-why-the-agentic-ai-implosion-is-coming-and-who-will-survive-it/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Thu, 26 Feb 2026 22:38:21 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Strategy]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3714</guid>

					<description><![CDATA[Last week, Anthropic CEO Dario Amodei said we might be “6–12 months away from models doing all of what software engineers do end-to-end.” <a class="mh-excerpt-more" href="https://projectcrunch.com/llms-are-the-new-yahoo-why-the-agentic-ai-implosion-is-coming-and-who-will-survive-it/" title="LLMs Are the New Yahoo: Why the Agentic AI Implosion Is Coming — And Who Will Survive It">Read...</a>]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-audio"><audio controls src="https://projectcrunch.com/wp-content/uploads/2026/02/LLMs-Are-the-New-Yahoo-1.mp3"></audio></figure>



<h4 class="wp-block-heading">Last week, Anthropic CEO Dario Amodei said we might be “6–12 months away from models doing all of what software engineers do end-to-end.”</h4>



<p>Think about it: If that’s true — if agentic AI can really do everything a software engineer does — then replicating Anthropic itself is just a prompt away. Anyone could build Cowork in their basement. Why would you pay a $60 billion company for something you can bootleg with their own tools?</p>



<p><strong>Here is the thing: That’s the paradox that should keep every AI investor up at night.</strong></p>



<p>Either agentic AI is as powerful as the pitch decks claim — in which case the companies selling it are commoditizing themselves. Or it’s not — in which case the trillion-dollar valuations are built on fantasy.</p>



<p>You cannot have it both ways.</p>



<h2 class="wp-block-heading"><strong>The Yahoo Parallel Nobody Wants to Hear</strong></h2>



<p>In 1999, Yahoo was the internet. Yahoo’s market cap reached $125 billion. Every investor, analyst, and journalist agreed: Yahoo&nbsp;<em>was</em>&nbsp;the future. It had the users, the brand, the traffic, and the portal. The world literally ran on Yahoo.</p>



<p>Then the infrastructure underneath it — search, email, hosting — got commoditized. Cheaper. Better. Open. Google ate search. Gmail ate Yahoo Mail. WordPress ate Yahoo GeoCities. The “platform” everyone thought was an essential game-changer turned out to be a thin wrapper over generic technology.</p>



<p>By 2016, Verizon picked up Yahoo’s remains for $4.8 billion — a 96% discount from its peak.</p>



<p>Now replace “Yahoo” with “OpenAI.” Replace “portal” with “agentic AI platform.” Replace “search getting commoditized” with “LLMs getting commoditized.”</p>



<p>It is not the same pattern—but it rhymes.</p>



<p>Similar to Yahoo decades ago, OpenAI had a massive head start. ChatGPT was the fastest-growing consumer app in history. Sam Altman was on every magazine cover. The moat looked enormous.</p>



<p>Then DeepSeek showed you can train a frontier model for a fraction of the cost. Llama went open-source. NVIDIA stock collapsed by 17% in a single day. Claude matched GPT on most benchmarks. Gemini caught up. Mistral emerged. Dozens of open-weight models flooded the market. Every quarter, the performance gap between models shrinks while the cost per token collapses.</p>



<p><strong>LLMs are converging toward a commodity faster than anyone predicted.</strong>&nbsp;The model layer — the very thing these companies are built on — is approaching marginal cost, just as search did in the early 2000s.</p>



<h2 class="wp-block-heading"><strong>The Commoditization Paradox of Agentic AI</strong></h2>



<p>Here’s the part that truly breaks the god-like AI narrative.</p>



<p>The current scare story goes like this:&nbsp;<em>Agentic AI will eat all software. Jira is dead. Salesforce is dead. Every SaaS tool will be replaced by an AI agent that just does the work.</em></p>



<p>Sounds terrifying — until you ask one simple question:</p>



<h2 class="wp-block-heading"><strong>Agentic AI is software, too.</strong></h2>



<p>Every “agent” is fundamentally the same thing: an LLM connected to tools via APIs, wrapped in orchestration logic, with a user interface on top. That’s it. There is no deep, proprietary magic. There is no secret sauce. The MCP (Model Context Protocol) and similar standards are making tool integration plug-and-play. The models themselves are interchangeable.</p>



<p>If Anthropic’s Cowork can automate software development, then by definition, someone can use that exact same capability to build a Cowork competitor over a long weekend. The tools to displace the disruptor&nbsp;<em>are the disruptor itself</em>.</p>



<p>And no: it is not an abstract, theoretical argument. We’ve already seen it happen. OpenClaw — a solo developer project — replicated most of what the big AI labs were pitching as their next billion-dollar product. OpenAI didn’t acquire the technology. They didn’t buy the company. They hired the&nbsp;<strong>guy</strong>. Because the technology was trivially replicable. The human judgment behind it was not.</p>



<h2 class="wp-block-heading"><strong>The One Person Who Already Figured This Out</strong></h2>



<p>While Sam Altman is chasing a $500 billion IPO for a company that sells commodity software, and Dario Amodei is telling the world his agents will replace all engineers (thereby making his own product worthless — see above), one person has quietly made the move that reveals he understands everything in this article.</p>



<p>Elon Musk.</p>



<p>On February 2, 2026, SpaceX acquired xAI in a $1.25 trillion all-stock merger—the largest in history. SpaceX is valued at $1 trillion. xAI at $250 billion. On paper, this looks like another Musk ego trip. In reality, it’s the most strategically coherent move in the entire AI industry.</p>



<p>Here’s why:&nbsp;<strong>Musk is the only AI player who understood that AI alone is worth nothing.</strong></p>



<p>But think about what SpaceX actually owns. Reusable rockets that no competitor has replicated at scale. Starlink — 9,000+ satellites in orbit, 9 million subscribers, and billions in defense contracts with NASA and the Department of Defense. A literal company town in Texas. $15 billion in revenue and $8 billion in profit. These are physical, hard-to-replicate assets that took over two decades of engineering, explosions, and near-bankruptcies to build.</p>



<p>xAI’s Grok, on the other hand? A chatbot. A good one, sure — but fundamentally the same commodity as GPT, Claude, Gemini, and the rest. By itself, Grok is heading toward the same zero-margin future as every other LLM.</p>



<p>But Grok&nbsp;<em>bolted onto</em>&nbsp;SpaceX’s rocket infrastructure, Starlink’s global network, and planned orbital data centers? That’s a vertically integrated stack that no pure-play AI company can touch. OpenAI can’t launch satellites. Anthropic doesn’t have rockets. Perplexity doesn’t own a communications network.</p>



<p>Musk is not betting on AI.&nbsp;<strong>He’s betting on the things AI cannot replace</strong>—and then using AI as an add-on, not the foundation of his tech empire. That’s the opposite of what OpenAI and Anthropic are doing.</p>



<p>The irony is thick. The man the tech press loves to mock may be the only AI CEO who has actually internalized the logic of LLM commoditization. Everyone else is building castles on sand — premium-priced software layers that are racing to zero. Musk is building on physical infrastructure: rockets, satellites, and a distribution network that can’t be “prompted into existence.”</p>



<p>Many of Musk’s ideas are still science fiction, like the orbital data center. Radiation, cooling, launch costs, the sheer audacity of it. But the strategic direction is unmistakably correct. Even if the space data centers never materialize, SpaceX + Starlink + defense contracts is a $1 trillion hardware business. xAI is just a meager add-on.</p>



<h2 class="wp-block-heading">Burry Is Early — But He’s Not Wrong (And Not Entirely Right, Either)</h2>



<p>Michael Burry — the “Big Short” investor who famously predicted the 2008 housing collapse — holds roughly $1.1 billion in notional put options against Nvidia and Palantir. He’s also been shorting Oracle and publishing detailed analyses of how hyperscalers inflate their earnings by stretching GPU depreciation from 3 years to 6, potentially overstating earnings by $176 billion between 2026 and 2028.</p>
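<p>The depreciation argument is simple arithmetic: spreading the same GPU cost over more years shrinks the annual expense on the income statement, and every dollar of expense that disappears reappears as reported pre-tax earnings. A minimal sketch with a hypothetical $100 billion GPU fleet (the figure is illustrative, not from Burry’s filings):</p>

```python
# Straight-line depreciation: annual expense = cost / assumed useful life.
# Stretching the assumed life of the same fleet from 3 to 6 years halves
# the yearly expense -- the "missing" expense shows up as earnings instead.
# The $100B capex figure below is purely illustrative.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation expense per year."""
    return cost / useful_life_years

gpu_capex = 100e9  # hypothetical GPU fleet cost

expense_3yr = annual_depreciation(gpu_capex, 3)  # ~$33.3B per year
expense_6yr = annual_depreciation(gpu_capex, 6)  # ~$16.7B per year

# Yearly earnings "boost" from the longer schedule:
boost = expense_3yr - expense_6yr
print(f"Annual expense at 3y: ${expense_3yr / 1e9:.1f}B")
print(f"Annual expense at 6y: ${expense_6yr / 1e9:.1f}B")
print(f"Pre-tax earnings overstated by: ${boost / 1e9:.1f}B per year")
```

<p>The hardware doesn’t wear out any slower because the spreadsheet says so — which is exactly Burry’s point.</p>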



<p>The market laughed at him initially — just like in 2007. As of February 2026, his Palantir puts are up 35%. Oracle has fallen 51% from its Q3 2025 peak. The broader S&amp;P Software &amp; Services Index has dropped 19% in a single month. Burry’s thesis is starting to print.</p>



<p>However, his Nvidia bet hasn’t paid off yet: the chips are still selling, demand is still real, and at ~24x forward earnings, NVDA isn’t priced like a bubble. Burry himself admitted his NVDA bet is “the most concentrated way to express a bearish view on the AI trade” — a sector bet, not a company bet.</p>



<p>I think Burry sees the disease correctly, but is aiming at the wrong organ.</p>



<p><strong>Where Burry is right:</strong>&nbsp;The AI investment cycle is overheated. Trillion-dollar capex commitments for data centers look eerily similar to the fiber-optic boom of 2000, where less than 5% of US telecoms capacity was ever used. Depreciation accounting is masking real costs. Many pure-play AI companies will implode. Palantir at 200x earnings was never going to hold. Oracle’s AI pivot was always more PowerPoint than product.</p>



<p><strong>Where Burry is potentially wrong:</strong>&nbsp;He’s shorting&nbsp;<em>infrastructure</em>&nbsp;(Nvidia, the picks-and-shovels provider) when history suggests the infrastructure layer is often the last to fall — and sometimes doesn’t fall at all. During the Gold Rush, Levi Strauss got rich. During the dot-com crash, Cisco got hammered but survived to become a $200+ billion company today. The server farms that powered the “useless” dot-com companies became the backbone of cloud computing.</p>



<p>Here’s the deeper irony: Musk just showed the market exactly where the real value is — physical infrastructure, vertical integration, things that can’t be cloned with a prompt. Burry is betting against the AI bubble, and he’s right about the bubble. But the optimal short isn’t Nvidia (which sells real hardware to real customers). The optimal short is the pure-software layer — the OpenAIs, the Anthropics, the Palantirs — whose valuations depend on maintaining pricing power in a market heading toward commodity.</p>



<p>Burry may be losing money on his Nvidia puts while being philosophically correct.</p>



<h2 class="wp-block-heading"><strong>The Three Layers of AI Value — And Where It Goes to Zero</strong></h2>



<p>To understand who survives, think of the AI stack in three layers:</p>



<p><strong>Layer 1: The Model (LLMs):</strong>&nbsp;This is heading to a commodity. GPT, Claude, Gemini, Llama, DeepSeek, Mistral — the performance gaps are narrowing every quarter. Open-weight models are closing the gap with proprietary ones. The cost per token is in free fall. Within 2–3 years, the model itself will be like electricity: essential, ubiquitous, and worth pennies on the original dollar.</p>



<p>Companies at risk: OpenAI (targeting a $500B+ IPO), Anthropic ($350B valuation for… a chatbot and some agents), Cohere, AI21, and anyone whose primary value proposition is “we have a good model.” Musk understood this, which is exactly why he bolted xAI onto SpaceX instead of trying to IPO Grok as a standalone company.</p>



<p><strong>Layer 2: The Agent Wrapper.</strong>&nbsp;This is already a commodity. Cowork, Operator, Devin, and their dozen clones — these are LLM + API + orchestration + UI. There is no defensible moat in wiring a model to a set of tools. Any competent engineering team can (and will) build equivalents. The OpenClaw story is proof: one developer matched what the big labs were pitching as their next billion-dollar product in a few weeks.</p>



<p>Companies at risk: Any startup whose pitch is “we built an agent that does XYZ.” Venture capital in this space is in peak euphoria.</p>



<p><strong>Layer 3: Data, Distribution, and Infrastructure.</strong>&nbsp;This is where durable value lives. It splits into three sub-categories:</p>



<ul class="wp-block-list">
<li><strong>Irreplaceable data:</strong>&nbsp;Atlassian’s Teamwork Graph (100+ billion objects of institutional knowledge across 350,000 companies), Salesforce’s customer data, and Bloomberg’s financial data. The agent is replaceable; the data it operates on is not. This is the real moat.</li>



<li><strong>Infrastructure (picks and shovels):</strong>&nbsp;Nvidia (GPUs), Broadcom (custom ASICs/XPUs), TSMC (fabrication), the hyperscalers (AWS, Azure, GCP) — and, yes, SpaceX with its rockets, Starlink network, and orbital ambitions. Every AI company, regardless of which one wins, needs chips, power, connectivity, and cloud. This is the Levi Strauss play. It’s also the Musk play — and it’s why SpaceX at $1 trillion makes more strategic sense than OpenAI at $500 billion, even though OpenAI gets all the headlines.</li>



<li><strong>Distribution at enterprise scale:</strong>&nbsp;Companies embedded in mission-critical workflows with brutal switching costs — 80% of the Fortune 500 runs on Atlassian; virtually every enterprise runs on Microsoft. Ripping Jira out of a 10,000-seat deployment isn’t a weekend project. It’s a multi-year, multi-million-dollar nightmare.</li>
</ul>



<h2 class="wp-block-heading"><strong>Where Should the Smart Money Go?</strong></h2>



<p>If you believe — as I do — that the model and agent layers are heading toward commodity, the investment implications are clear:</p>



<p><strong>Avoid</strong>&nbsp;companies whose entire value proposition is “we have a good model” or “we built a cool agent.” That means extreme caution on OpenAI (if it IPOs), Anthropic, and the dozens of AI agent startups currently raising at absurd valuations. These are the Yahoo and Excite of this cycle.</p>



<p><strong>Be selective with infrastructure.</strong>&nbsp;NVIDIA is still printing money, but at some point, custom silicon from Google (TPUs), Amazon (Trainium/Inferentia), and Broadcom’s XPUs will erode margins. The question is timing, not direction. Short-term bull, long-term cautious.</p>



<p><strong>Favor the data and distribution moats.</strong>&nbsp;Companies like Atlassian — currently down 57% from its highs and trading at roughly 8x forward revenue — own something no agent can replicate: the institutional memory of hundreds of thousands of organizations. Their Teamwork Graph is not a feature. It’s a flywheel that gets more valuable as more agents connect to it (via MCP). Paradoxically, the rise of agentic AI may make Atlassian&nbsp;<em>more</em>&nbsp;valuable, not less, because the agents need the data layer to function.</p>



<p><strong>Don’t forget physical scarcity.</strong>&nbsp;One of the most underappreciated implications of AI commoditization is that software value compresses while hardware and energy value do not. Defense companies, energy infrastructure, semiconductor fabrication — these cannot be “prompted into existence.” Claude is not disrupting a Rheinmetall tank or a Siemens Energy turbine.</p>



<h2 class="wp-block-heading"><strong>The Endgame</strong></h2>



<p>Here’s what I think happens:</p>



<ol class="wp-block-list">
<li><strong>2026–2027:</strong>&nbsp;The AI hype peaks. More money pours into model companies and agent startups. Valuations get even more absurd. OpenAI targets a $500B+ IPO. Anthropic raises at $350B+. SpaceX/xAI goes public at $1.5 trillion — but unlike the others, it has $15 billion in revenue and $8 billion in profit from real hardware. Everyone believes this time is different.</li>



<li><strong>2027–2028:</strong>&nbsp;Reality bites. Model commoditization becomes undeniable. Open-weight models match proprietary ones on virtually every benchmark. Price-per-token approaches zero. Agent wrappers proliferate — there are 500 Cowork clones. Enterprise customers realize they don’t need to pay premium prices for what is essentially a commodity utility.</li>



<li><strong>2028–2029:</strong>&nbsp;The shakeout. Pure-play AI companies that couldn’t build real moats get acquired at massive discounts or shut down. The pattern of the dot-com bust repeats: the technology and the revolution were real, but 90% of the companies built on them were not.</li>



<li><strong>What survives:</strong>&nbsp;The infrastructure layer (Nvidia/Broadcom, though with compressed margins), the data moats (Atlassian, Salesforce), the hyperscalers (who will provide AI like they provide cloud today — as a utility), the vertically integrated hardware-AI plays (SpaceX/xAI, if the execution holds), and the physical-world companies that AI simply cannot commoditize.</li>
</ol>



<p>Michael Burry is betting on the crash. I think he’s right about the&nbsp;<em>what</em>, but potentially wrong about the&nbsp;<em>where</em>. The model layer will implode. The agent layer will commoditize. But the picks-and-shovels layer and the data-moat layer will survive — and in some cases, thrive.</p>



<p><strong>The winners of the AI revolution won’t be the companies building AI. They’ll be the companies that own what AI cannot replicate: data, trust, physical infrastructure, and the human judgment to use it all wisely.</strong>&nbsp;Musk seems to get it. Burry half-gets it. The rest of the market? Still chasing the Yahoo dream.</p>



<p>As I wrote in my earlier piece on the AI Abundance Trap: LLMs don’t eliminate work; they give us 10× speed to develop everything else. The competitive edge in the coming decade belongs to those who refuse to let fast AI make them dumber.</p>



<p>So: cultivate your critical thinking. Invest in what can’t be prompted into existence. And prepare for the implosion that even Sam Altman’s pitch deck can’t prevent.</p>



<p>The dot-com crash didn’t kill the internet. It killed the pretenders.</p>



<p><strong>The AI implosion won’t kill artificial intelligence. It will kill the Yahoos.</strong></p>



<p><strong>And if you want to know who survives? Look for the rockets, not the chatbots.</strong></p>



]]></content:encoded>
					
		
		<enclosure url="https://projectcrunch.com/wp-content/uploads/2026/02/LLMs-Are-the-New-Yahoo-1.mp3" length="19257189" type="audio/mpeg" />

			</item>
		<item>
		<title>The AI Abundance Trap: Trillion-Dollar Valuations, AI Job Scare—And How We Can Still Grow the Pie</title>
		<link>https://projectcrunch.com/the-ai-abundance-trap-trillion-dollar-valuations-ai-job-scare-and-how-we-can-still-grow-the-pie/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Sun, 22 Feb 2026 21:00:59 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Strategy]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3704</guid>

					<description><![CDATA[Last year, as music is my hobby, I spent an evening creating professional-sounding songs with Suno. They sounded great, and I felt really good about myself—until I realised that tens of thousands of people are <a class="mh-excerpt-more" href="https://projectcrunch.com/the-ai-abundance-trap-trillion-dollar-valuations-ai-job-scare-and-how-we-can-still-grow-the-pie/" title="The AI Abundance Trap: Trillion-Dollar Valuations, AI Job Scare—And How We Can Still Grow the Pie">Read...</a>]]></description>
										<content:encoded><![CDATA[
<p>Last year, as music is my hobby, I spent an evening creating professional-sounding songs with <a href="https://suno.com/">Suno</a>. They sounded great, and I felt really good about myself—until I realised that tens of thousands of people are doing the exact same thing every day, and their Suno-creations sound brilliant. Suddenly, a product that used to take months, real talent, and real money is now worth next to nothing.</p>



<p>I’ve been thinking about this a lot lately: if someone using AI can deliver the exact same quality of work in just a few days that used to take months, how should that work be valued? Do we still pay the old rate, or is the entire pricing model broken?</p>



<p>That simple question exposes a quiet, open flaw in the entire AI narrative: what happens when intelligence itself becomes abundant and cheap?</p>



<h2 class="wp-block-heading">Are LLMs Good Enough?</h2>



<p>LLMs are continuously improving, but they remain fundamentally fast-thinking pattern matchers — exactly as Daniel Kahneman describes in his book Thinking, Fast and Slow. In it, he distinguishes two modes of human thinking: System 1 (“fast thinking”—quick, intuitive, pattern-matching) and System 2 (“slow thinking”—deliberate, logical reasoning required for complex, high-stakes work).</p>



<p>Current LLMs are pure System 1 machines. They simply predict the next token based on the previous ones. That’s why they still hallucinate at rates of 10–20% across many real-world tasks. In that sense, they are not “intelligent” in the human meaning of the word.</p>
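<p>The “predict the next token” mechanism can be made concrete with a toy bigram model — a deliberately crude sketch of my own, nothing like a production LLM, but it shows the System-1 character of the step: emit the statistically most frequent continuation, with no reasoning anywhere in the loop.</p>

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then always emit the most frequent observed continuation.
corpus = "the model predicts the next token and the next token only".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Greedy System-1 step: most frequent observed continuation."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "next" (seen twice, vs once for "model")
```

<p>Scale this pattern-completion step up by twelve orders of magnitude and you get fluency — but the step itself never becomes deliberation.</p>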



<p>For many routine tasks that do not require a predetermined outcome quality, this is often sufficient. But for anything that truly matters—tax advice, legal contracts, or safety- and security-critical automotive development—the risk is simply too high. You can outsource the first draft to an LLM, but thorough human verification and validation (true System 2 thinking) remain indispensable.</p>



<h2 class="wp-block-heading">“Free&#8221;—at a Price</h2>



<p>In that sense, for many tasks, current LLMs are already “good enough.” The real question is: what is such cheap content actually worth?</p>



<p>When output becomes infinite and near-free, the old pricing model collapses. “Agentic” AI like Claude Cowork can now develop complete software for pennies. Yet here is the bizarre paradox: pure software companies like Anthropic have valuations in the tens of billions, even though they are selling the very tools that will commoditize the software layer itself.</p>



<p>As a lateral example, SaaS (Software-as-a-Service) is being commoditized as we speak — the easy, promptable layers are turning into near-zero-cost commodities. If anyone can recreate something like OpenClaw in their basement, why would companies continue paying premium prices for what is quickly becoming a utility?</p>



<p>The trillion-dollar pitch decks assumed AI would capture huge rents from automated labour. Instead, raw intelligence itself is heading toward full commoditization.</p>



<p>But the problem runs deeper than just economics. Our heavy reliance on these fast-thinking systems is already creating a more subtle but serious issue: cognitive offloading. Recent studies, including a 2025 MIT Media Lab EEG experiment, show that users who lean heavily on LLMs exhibit significantly reduced brain engagement, lower critical thinking, and measurable “cognitive debt” over time.</p>



<p>In other words, while we happily offload more and more work to LLMs—even as they still hallucinate left and right—the users themselves are beginning to lose the ability to spot those hallucinations. That is not a good sign for the future of an LLM-driven AI industry.</p>



<h2 class="wp-block-heading">Surviving the LLM Implosion</h2>



<p>Despite all the shortcomings of current LLMs, not everything will be devoured by agentic AI. Many LLM-powered tasks already appear “good enough” in the sense that they can be completely automated, but we must focus on what survives commoditization: proprietary data, customer relationships, distribution, personal brand, and—most importantly—the irreplaceable human inspiration.</p>



<p>However, the ability to make educated decisions will become even more important as automation progresses rapidly. The decisive competitive factor for the next decade will be Effective Critical Systems Thinking (<a href="https://projectcrunch.com/ecst/" data-type="post" data-id="2656">ECST</a>). This slow, deliberate, System-2-level reasoning turns cheap AI from a crutch into a 10× multiplier. Companies and indie builders who deliberately cultivate ECST will pull ahead, while those who just prompt-and-pray fall behind.</p>



<p>In addition, some software tools are unlikely to become commoditized anytime soon. Certain infrastructure layers will remain extremely valuable. For instance, the Atlassian platforms (e.g., JIRA) that guarantee data persistence, compliance, auditability, and deep integration cannot be easily replicated with a prompt. Software that protects the high-trust environment—the rule of law, honest integration, open inquiry, and long-term value creation—will remain in every company&#8217;s war chest.</p>



<p>Otherwise, software in general becomes a “commodity”: relatively easy to develop, maintain, and extend at low cost. Systems development, on the other hand—products that, in addition to software, require custom hardware and mechanical parts—will remain in the “scarcity” camp: not commoditizable, expensive, and labor-intensive.</p>



<p>Thinking longer term, when fusion energy finally arrives (see my earlier piece on the megatrend of cheap, clean energy <a href="https://projectcrunch.com/megatrend-cheap-clean-energy/" data-type="post" data-id="746">here</a>), the whole game changes again: energy becomes nearly free, supercharging the abundance for those who kept their thinking sharp. Once this day arrives (likely not before 2030), all bets will be off anyway, because with sufficient energy, iterating everything (including physical infrastructure) until the result is satisfactory will be a non-issue.</p>



<h2 class="wp-block-heading">Keep Calm and Carry On</h2>



<p>Most of the hype money is still betting on raw LLM models, even as they are fast approaching their own commoditization.</p>



<p>AI is not approaching the mythical AGI anytime soon. Serious analysis shows the productivity miracle is smaller and slower than pitched, especially while LLMs remain unreliable System 1 fast thinkers. In other words, true AGI will remain out of reach as long as we rely on today’s “System 1” software.</p>



<p>While some fear that humans will be eliminated and that AI will do everything, this fear is understandable but misplaced. LLMs produce cheap content, not accountability. For many years to come, clients will always need a human “throat to choke” when millions are on the line. The real danger is not replacement — it’s becoming so dependent on LLMs that we lose our own deep thinking ability.</p>



<h2 class="wp-block-heading">Let’s Grow the Pie</h2>



<p>The “great commoditization” of software (including LLMs) is a revolution—and, as the saying goes, revolutions often devour their children. Many currently hyped companies will disappear and be remembered only by the same people who still remember the “Boo.com” disaster. That said, this revolution is real, and the trillion-dollar AI fairy tale has reached a scale that is becoming “too big to fail.” The often-cited comparison with the dot-com crash should not be taken lightly—the current AI hype may indeed end in a similar crash. Once the dust settles, we will likely be surprised by what emerges from the chaos.</p>



<p>In the meantime, the fear that the economic pie will shrink and leave millions living on a “universal basic income” can come true—but only if we as human beings refuse to adapt. In that case, the near future will bring a tumultuous transition to the “brave new world.”</p>



<p>On the other hand, this transition doesn’t have to be as painful as some assume. The potential horrors of “everyone gets fired by the AI” rest on a fixed-pie assumption: that work would shrink, and the rest of us would have to fight over the same slice. In my view, that’s a horrible misconception. There will be many changes in the workforce, as the mostly boring “box checkers” and bureaucrats may be sent packing; however, most of us won’t miss them anyway. Instead, the remaining productive engineers and scientists will gain AI superpowers, thereby steeply increasing economic output (a.k.a. “added value”).</p>



<p>In other words, instead of being overly anxious that jobs are supposedly being destroyed, let’s grow the pie.</p>



<p>LLMs don’t just eliminate work; they give us 10× speed to develop everything else—including fusion reactors, new materials, and better medicine. The real competitive edge in the coming decade will belong to those who refuse to let fast AI make them dumber. Cultivate Effective Critical Systems Thinking. Protect open inquiry. Build on solid ground.</p>



<p>For indie builders, consultants, and companies worldwide, this is liberating: we never needed to rent our future from Big Tech anyway. The real game is building sovereign, honest, long-term things while the technology gets cheaper every month.</p>



<p>That’s what technology has always been about—and it’s why I’m genuinely optimistic about the decade ahead.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>References</strong></p>



<ul class="wp-block-list">
<li>Daniel Kahneman, <em>Thinking, Fast and Slow</em>, Farrar, Straus and Giroux, 2011. ISBN 978-0374533557</li>



<li>Kosmyna et al., “Your Brain on ChatGPT” (MIT Media Lab, June 2025) — <a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/" target="_blank" rel="noreferrer noopener">https://www.media.mit.edu/publications/your-brain-on-chatgpt/</a></li>



<li>Acemoglu, Daron. “The Simple Macroeconomics of AI” (NBER Working Paper 32487, 2024) — <a href="https://www.nber.org/papers/w32487" target="_blank" rel="noreferrer noopener">https://www.nber.org/papers/w32487</a></li>



<li>Roman Mildner, “Megatrend: Cheap Clean Energy” (projectcrunch.com) — <a href="https://projectcrunch.com/megatrend-cheap-clean-energy/" target="_blank" rel="noreferrer noopener">https://projectcrunch.com/megatrend-cheap-clean-energy/</a></li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Why OpenAI Had to Hire the Solo Dev Behind OpenClaw – And Why That Kills the “Agentic AI Will Replace Everyone” Fantasy</title>
		<link>https://projectcrunch.com/why-openai-had-to-hire-the-solo-dev-behind-openclaw-and-why-that-kills-the-agentic-ai-will-replace-everyone-fantasy/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Sat, 21 Feb 2026 12:19:45 +0000</pubDate>
				<category><![CDATA[Crunch Time]]></category>
		<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3695</guid>

					<description><![CDATA[Last weekend, Sam Altman did something that should make every AI hype merchant pause. He hired Peter Steinberger. Not the company. Not the tech. The guy. The one-man band who, in a few weeks in <a class="mh-excerpt-more" href="https://projectcrunch.com/why-openai-had-to-hire-the-solo-dev-behind-openclaw-and-why-that-kills-the-agentic-ai-will-replace-everyone-fantasy/" title="Why OpenAI Had to Hire the Solo Dev Behind OpenClaw – And Why That Kills the “Agentic AI Will Replace Everyone” Fantasy">Read...</a>]]></description>
										<content:encoded><![CDATA[
<p>Last weekend, Sam Altman did something that should make every AI hype merchant pause.</p>



<p>He hired Peter Steinberger.</p>



<p>Not the company. Not the tech. The <em>guy</em>. The one-man band who, in a few weeks in January 2026, built OpenClaw — the open-source personal agent that actually works on real laptops, not just demos.</p>



<p>It lives in WhatsApp, Telegram, and Slack. Clears inboxes, books flights, runs shell commands, controls your browser, manages your calendar — all while remembering everything as plain Markdown files on <em>your</em> disk. Proactive, local, no cloud &#8220;hostage.&#8221; 100k+ GitHub stars in weeks.</p>



<p>Then OpenAI hired Peter, who is now driving the “next-generation personal agents” at the lab. </p>



<p>Here’s the part the VC pitch decks don’t want you to see:</p>



<p><strong><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">If the models were as god-like as claimed, they wouldn’t have needed to hire <em>anyone</em>.</mark></strong></p>



<p>They could have prompted Claude, &#8220;Computer Cowork,&#8221; or their own agents: &#8220;Clone OpenClaw, production-grade, fix the edge cases, security, memory, reliability loops.&#8221; Weekend project. Cheap commodity. Done.</p>



<p><strong>But they didn’t.</strong> They bought the human brain that turned unreliable models into something useful.</p>



<p>That’s the dirty secret still true in February 2026: hallucinations and drift remain a critical bug—and no, it&#8217;s not a &#8220;feature.&#8221; It is a BUG, period. Long-running agents break on tiny changes, need rock-solid sandboxing, and demand a level of taste that no prompt reliably delivers. Even OpenAI looked at one skilled builder’s work and said, &#8220;We need him.&#8221;</p>



<p>This is why the “AI → deliver” fantasy collapses. Real workflow is still “AI → validate → repeat → deliver.” Speed is real, but the mythical-magical replacement? Nope, it remains a fantasy.</p>
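<p>The “AI → validate → repeat → deliver” loop is easy to sketch as control flow. Here <code>generate</code> stands in for any LLM call and <code>validate</code> for the human-defined System-2 check (tests, review, taste); both are placeholders of my own, shown only to make the loop concrete:</p>

```python
def generate(prompt: str, attempt: int) -> str:
    # Placeholder for an LLM call; deterministic toy so the sketch runs.
    return f"draft {attempt} for: {prompt}"

def validate(draft: str) -> bool:
    # Placeholder for the human/System-2 check (tests, review, taste).
    return "3" in draft  # toy criterion: only the third draft passes

def ai_then_validate(prompt: str, max_attempts: int = 5) -> str:
    """AI -> validate -> repeat -> deliver: retry until the check passes."""
    for attempt in range(1, max_attempts + 1):
        draft = generate(prompt, attempt)
        if validate(draft):
            return draft  # deliver
    raise RuntimeError("no draft passed validation; escalate to a human")

print(ai_then_validate("clone OpenClaw"))  # -> "draft 3 for: clone OpenClaw"
```

<p>The speed gain lives inside the loop; the accountability lives in <code>validate</code> — and that part is still human.</p>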



<p>In our CORE SPICE framework, where we automate everything possible and SPEED is everything, this is the exact reality check we live by every day in automotive development and management: use the AI for velocity, keep the human taste and validation so you actually ship without breaking the product.</p>



<p>The economy will win in the long term, as it did after the Internet boom. But the companies that keep winning stock prices won’t be the ones promising to fire every developer. They’ll sell the shovels (chips, power, cooling), embed AI inside unbreakable moats (Microsoft, Google, Amazon), and rely on rare humans who make the unreliable reliable.</p>



<p>OpenClaw is the perfect case study. One guy with taste and grit shipped something useful. The world&#8217;s biggest lab still needed that guy.</p>



<p>So next time someone tells you agentic AI is about to replace every knowledge worker, ask them why OpenAI couldn’t replace Peter Steinberger with a weekend prompt.</p>



<p>The answer is staring us right in the face.</p>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How to Use CORE SPICE Approaches — Manually or with LLMs</title>
		<link>https://projectcrunch.com/how-to-use-core-spice-approaches-manually-or-with-llms/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Sun, 14 Dec 2025 19:14:01 +0000</pubDate>
				<category><![CDATA[Management]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[CORE SPICE]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3673</guid>

					<description><![CDATA[A practical way to define compliant projects without writing processes. <a class="mh-excerpt-more" href="https://projectcrunch.com/how-to-use-core-spice-approaches-manually-or-with-llms/" title="How to Use CORE SPICE Approaches — Manually or with LLMs">Read...</a>]]></description>
										<content:encoded><![CDATA[
<h3 class="wp-block-heading">A practical way to define compliant projects without writing processes</h3>



<p>CORE SPICE Approaches replace classical process writing with outcome-driven project definitions. They let teams kick off engineering activities quickly and ensure consistent, compliant teamwork while eliminating the heavyweight process documentation that nobody reads.</p>



<p>Each CORE SPICE Approach is a short, structured template. It is completed with the project team during the early project phase and serves as a living reference for how the project is actually executed. The goal is clarity, speed, and the avoidance of wasteful, redundant documentation.</p>



<p>The CORE SPICE Approaches are available as templates on the CORE SPICE GitHub repository (<strong><a href="https://github.com/CORE-SPICE/CORE_SPICE_Releases">LINK</a>)</strong>. They are free to use under the CORE SPICE Creative Commons license, which allows them to be used for any commercial application.</p>



<h2 class="wp-block-heading">What problem are we solving?</h2>



<p>Most automotive development organizations still rely on classical process documentation. These documents are usually large, slow to update, and expensive to maintain. Worse, they are often disconnected from how engineering teams actually work.</p>



<p>Three problems show up in almost every project:</p>



<ul class="wp-block-list">
<li>Process handbooks are rarely ready during project kickoff. Teams start working without shared rules. When the process finally arrives, it collides with reality and causes friction.</li>



<li>New hires and suppliers do not study hundreds of pages of process documentation. They learn by imitation and local habits. Long process descriptions quickly become irrelevant.</li>



<li>Automotive SPICE, functional safety, and cybersecurity requirements are often addressed only through assessment. At that point, process work becomes a bottleneck.</li>
</ul>



<p>Traditional “process tailoring” does not solve this. It still produces documents that are too abstract, too late, and too remote from the day-to-day engineering decisions.</p>



<h2 class="wp-block-heading">What are CORE SPICE Approaches?</h2>



<p>CORE SPICE Approaches are not process descriptions in the classical sense. They define:</p>



<ul class="wp-block-list">
<li>What outcomes must be achieved</li>



<li>Who is responsible</li>



<li>How the project intends to work</li>
</ul>



<p>They intentionally avoid exhaustive activity lists, redundant standard references, and academic completeness.</p>



<p>An Approach does not try to describe everything. It explains what matters for the project. Each Approach fits on a small number of pages. It is reviewed with the core team. After that, Approaches are updated as needed and can be shown directly to assessors and customers.</p>



<p>In short, an Approach is a strategy and decision aid, not a rulebook.</p>



<h2 class="wp-block-heading">Two ways to use CORE SPICE Approaches</h2>



<p>CORE SPICE Approaches can be used in two ways. Both are valid. The choice depends on timing, team maturity, and constraints.</p>



<p><strong>1. Manually (conventional approach)</strong></p>



<p><strong>2. LLM-accelerated</strong></p>



<p>In the former case, the approaches are used as inputs and manually elaborated, with each description completed line by line by a dedicated project role.</p>



<p>In the latter case, the CORE SPICE Approaches are used as input to LLMs, quickly generating an expanded document that can be reviewed and discussed with the team immediately.</p>



<h3 class="wp-block-heading">1. Manually</h3>



<p>In the manual approach, a designated role (e.g., Project Lead or Issue Lead—an optional role created for this example only) completes the CORE SPICE template line by line. The draft is reviewed with the core team. Open questions are clarified. Conflicts are resolved. After one or two iterations, the document is approved and becomes part of the project baseline.</p>



<p>This conventional approach makes sense when:</p>



<ul class="wp-block-list">
<li>The team needs deep alignment and discussion</li>



<li>The customer explicitly demands traditional documentation</li>



<li>There is enough time before development ramps up</li>
</ul>



<p>The downside is obvious: even a slim CORE SPICE Approach still takes time to complete. In real-world projects, manually creating a complete set of Approaches can take weeks or months.</p>



<h3 class="wp-block-heading">2. LLM-accelerated creation</h3>



<p>The second option uses large language models as drafting accelerators—in this case, ChatGPT 5.1.</p>



<p>The procedure is simple:</p>



<ul class="wp-block-list">
<li>The core team aligns on the key project intent, such as safety level, customer priorities, constraints, or team size.</li>



<li>The responsible role prepares a structured prompt. Inputs typically include:<ul class="wp-block-list"><li>Project context and goals</li><li>Architectural scope</li><li>Applicable standards (ASPICE, ISO 26262, ISO 21434)</li><li>Organizational constraints</li></ul></li>



<li>The LLM generates a complete first draft of the Approach.</li>



<li>The team reviews, corrects, and adapts the draft.</li>
</ul>
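As a rough illustration of the procedure above, here is a minimal sketch that assembles the agreed inputs into one structured drafting prompt. All field names and values are hypothetical, not part of the CORE SPICE templates.

```python
# Illustrative sketch only: assembling the inputs listed above into one
# structured drafting prompt. Field names and values are hypothetical,
# not part of the CORE SPICE templates.

def build_approach_prompt(context: dict) -> str:
    """Render the agreed project inputs into a drafting prompt."""
    standards = ", ".join(context["standards"])
    constraints = "\n".join(f"- {c}" for c in context["constraints"])
    return (
        f"Context: {context['goals']}\n"
        f"Architectural scope: {context['scope']}\n"
        f"Applicable standards: {standards}\n"
        f"Organizational constraints:\n{constraints}\n"
        "Task: Draft the Issue Management Approach from the CORE SPICE "
        "template. Propose, do not decide: flag every open decision "
        "for the responsible project role."
    )

prompt = build_approach_prompt({
    "goals": "Door Lock Control ECU, ASIL D, strong cybersecurity focus",
    "scope": "Single ECU, CAN interface, key-fob and button inputs",
    "standards": ["ASPICE 4.0", "ISO 26262", "ISO 21434"],
    "constraints": ["Atlassian JIRA is the mandated issue tool"],
})
print(prompt)
```

The explicit "propose, do not decide" instruction at the end is one way to keep decision authority with the project role, as discussed below.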



<p>An important point is that the LLM does not make decisions (which can be explicitly ensured by adding specific instructions to the original prompt). It only produces a structured proposal. The responsible project role remains fully accountable for completing the Approach.</p>



<p>In our experience, using LLMs reduces drafting time by 70–80%. It allows teams to discuss a concrete document immediately rather than debate abstract ideas.</p>



<h2 class="wp-block-heading"><strong>Example: LLM-driven Issue Management Approach</strong></h2>



<p>Let us consider an Issue Management Approach created for a hypothetical Door Lock Control ECU project. The project required:</p>



<ul class="wp-block-list">
<li>Handling defects, change requests, and project tasks</li>



<li>Full tool support via Atlassian JIRA</li>



<li>Strong focus on ASIL D and cybersecurity</li>



<li>Compliance with Automotive SPICE 4.0 without clutter</li>
</ul>



<p>In addition, a new role was introduced: the Issue Flow Manager, who reports directly to the Project Lead. Using an LLM, the Issue Management Approach template was expanded into a full project-specific document.</p>



<p>The detailed prompt for this approach is provided in the “Appendix” at the end of this article.</p>



<p>The result was a 14-page draft that included scope and objectives, defined roles and responsibilities, and explicit issue lifecycles for defects, change requests, and tasks.</p>



<p>The entire first version was created in one working day, including several rounds of review. Creating the same document manually would typically take several weeks. This does not mean the document was “finished.” In a real project setting, it will still require expert review. But the hard part—structuring and completeness—was already done. </p>



<p>Two additional iterations were needed to fine-tune the level of detail. Because the resulting workflows were purely textual, we used ChatGPT to generate UML-like state machine graphs with PlantUML (see <a href="https://www.plantuml.com">https://www.plantuml.com</a>), so the workflow visualizations were produced automatically. We stored the result in Git (see <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/Approaches_Demo/Issue%20Management%20Approach.docx">link in Git</a>).</p>
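The text-to-diagram step can be sketched as a small script that emits PlantUML source from a transition list. The defect states below are illustrative, not the exact lifecycle from the demo document.

```python
# Sketch of the text-to-diagram step: emitting PlantUML source for a
# hypothetical defect lifecycle. The states and transitions below are
# illustrative, not the exact lifecycle from the demo document.

TRANSITIONS = [
    ("New", "Analysis"),
    ("Analysis", "InWork"),
    ("InWork", "Verification"),
    ("Verification", "Closed"),
    ("Verification", "InWork"),  # verification failed -> rework
]

def to_plantuml(transitions):
    """Build a PlantUML state diagram from (source, target) pairs."""
    lines = ["@startuml", "[*] --> New"]
    lines += [f"{src} --> {dst}" for src, dst in transitions]
    lines += ["Closed --> [*]", "@enduml"]
    return "\n".join(lines)

print(to_plantuml(TRANSITIONS))
```

Rendering the printed text with the PlantUML tool yields a state machine graph of the kind stored in the demo repository.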



<h2 class="wp-block-heading"><strong>What about quality, accountability, and trust?</strong></h2>



<p>A common objection is that LLMs cannot be trusted to define processes. Indeed, LLMs can produce incorrect or inconsistent output if not validated and appropriately reviewed. Therefore, every generated draft must be reviewed, carefully validated by the project core team, and owned by the responsible project role. </p>



<ul class="wp-block-list">
<li>LLMs are not accountable. Project leads are.</li>



<li>LLMs do not approve documents. Teams do.</li>



<li>LLMs do not replace expertise. They amplify it.</li>



<li>Quality comes from review, not from typing speed.</li>
</ul>



<p>In fact, the LLM-based approach often improves quality because:</p>



<ul class="wp-block-list">
<li>Inconsistencies become visible earlier</li>



<li>Gaps are easier to spot in a complete draft</li>



<li>Changes can be applied consistently across documents</li>
</ul>



<p>The idea that process documents remain unchanged throughout a project is a fiction. Projects evolve. Requirements change. Teams change. LLMs make updating faster, safer, and more efficient, which helps avoid the demotivating “reworking from scratch.” CORE SPICE Approaches must therefore be easily modifiable, living documents: an Approach is not written once and archived. Only as living documents can Approaches serve as references for daily decisions.</p>



<p>This makes CORE SPICE fundamentally different from traditional process frameworks. The goal is not compliance “theater.” The goal is working clarity under real-world constraints.</p>



<h2 class="wp-block-heading"><strong>Conclusion</strong></h2>



<p>Modern systems development does not fail because teams lack standards. It fails because standards are interpreted in overly complex ways, applied too late, and implemented too literally.</p>



<p>CORE SPICE Approaches address this by:</p>



<ul class="wp-block-list">
<li>Defining intent early</li>



<li>Keeping documentation minimal but sufficient</li>



<li>Automating everything whenever possible</li>



<li>Aligning teams around outcomes, not activities</li>
</ul>



<p>Whether created manually or accelerated with LLMs, CORE SPICE Approaches enable teams to start nearly instantly. In automotive projects, development speed is not a luxury. It is a survival factor. CORE SPICE is not about writing better processes. It is about not needing to “write” processes at all.</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">Appendix</h2>



<p>[1] The Issue Management Approach demo draft: <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/Approaches_Demo/Issue%20Management%20Approach.docx">https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/Approaches_Demo/Issue%20Management%20Approach.docx</a></p>



<p>[2] The full prompt:</p>



<p><strong>Context:</strong><br>In the example project “Door Lock” (see <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO">https://github.com/CORE-SPICE/DOORLOCK_DEMO</a>), develop a full draft of the Issue Management Approach.</p>



<p>Use the “Value” sections only as a support narrative and do not include the “Value” chapter in the final result.</p>



<p>Introduce one additional role that is not part of the CORE SPICE Roles Catalog and that reports directly to the Project Lead.</p>



<p><strong>Assumptions</strong></p>



<ul class="wp-block-list">
<li>The issue management system covers defects, change requests, and project tasks.</li>



<li>Use Atlassian JIRA as the standard issue management tool.</li>



<li>If applicable, include other tools that support the issue management system.</li>



<li>Ensure compliance with the project’s goals regarding ASIL D and cybersecurity.</li>



<li>Ensure full compliance with ASPICE 4.0 in SUP.1, SUP.8, SUP.9, SUP.10, and MAN.3, without cluttering the Approach with explicit ASPICE references—just be compliant.</li>



<li>Include a lifecycle for each issue type.</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Can LLMs Automate Automotive Development?</title>
		<link>https://projectcrunch.com/can-llms-automate-automotive-development/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Sun, 02 Nov 2025 18:21:57 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3611</guid>

					<description><![CDATA[Large Language Models (LLMs) such as ChatGPT 5 and Grok 4 are becoming more capable and versatile. The real question is whether they can be used for serious work in automotive parts development. LLMs can <a class="mh-excerpt-more" href="https://projectcrunch.com/can-llms-automate-automotive-development/" title="Can LLMs Automate Automotive Development?">Read...</a>]]></description>
										<content:encoded><![CDATA[
<h4 class="wp-block-heading">Large Language Models (LLMs) such as ChatGPT 5 and Grok 4 are becoming more capable and versatile. The real question is whether they can be used for serious work in automotive parts development.</h4>






<p>LLMs can write, summarize, and even compose music. But automotive engineering demands more than creativity. It demands compliance, traceability, and precision. So the question is: Can LLMs generate compliant, traceable, and review-ready documentation that meets Automotive SPICE, ISO 26262, and ISO 21434 requirements?</p>



<p>To find out, we conducted a one-day experiment using LLMs to create an end-to-end draft for a Door Lock Control ECU.</p>



<h2 class="wp-block-heading">The LLM Experiment</h2>



<p>The goal was simple: to generate, within one day, a complete documentation draft for a small but safety-relevant subsystem—a car door lock controller.</p>



<p>The intention wasn’t to create production-ready data, but to evaluate how far AI could accelerate early V-Model phases—from requirements elicitation to testing and compliance documentation.</p>



<p>No external tools were used. No DOORS, no Integrity, no code generators. Just LLMs, text prompts, and office formats.</p>



<p>Because Volkswagen’s projects (with KGAS and Formel Q) are known for rigor, VW was chosen as the reference OEM. The work was time-boxed to one Saturday.</p>



<h2 class="wp-block-heading">Customer Requirements (SYS.1)</h2>



<p>Using ChatGPT 5.0 and Grok 4.0, I began with the customer requirements. No existing example was provided; everything was generated from scratch.</p>



<p>After several iterations, the core SYS.1 query became:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Develop a Door Lock Control ECU<br>Description: Controls electric door locks via key-fob signal or button input, with feedback.<br>Key points: Response &lt; 500 ms, fail-safe unlock, ASIL A classification, signal authentication.<br>Deliverable: “Lastenheft-like” specification including ~50 requirements compliant with VW practice and KLH Gelbband 2023.</p>
</blockquote>



<p>The resulting document contained 87 customer requirements, ready for trace-down to SYS.2.</p>



<figure class="wp-block-image size-large"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS1.png"><img decoding="async" width="1024" height="410" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS1-1024x410.png" alt="" class="wp-image-3614" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS1-1024x410.png 1024w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS1-300x120.png 300w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS1-768x308.png 768w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS1.png 1320w" sizes="(max-width: 1024px) 100vw, 1024px" /></a><figcaption class="wp-element-caption">Excerpt from the SYS.1 requirements</figcaption></figure>



<p>Full list: <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.1">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.1</a></p>



<h2 class="wp-block-heading">The Left-Side of V</h2>



<h3 class="wp-block-heading">System Specification (SYS.2)</h3>



<p>SYS.2 system requirements were derived from SYS.1.</p>



<figure class="wp-block-image size-large"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS2-scaled.png"><img loading="lazy" decoding="async" width="1024" height="390" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS2-1024x390.png" alt="" class="wp-image-3616" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS2-1024x390.png 1024w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS2-300x114.png 300w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS2-768x293.png 768w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS2-1536x585.png 1536w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS2-2048x781.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>It took several iterations to arrive at sufficiently analyzed system requirements, including the verification criteria.</p>



<p>Example of a requirement:</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>SysRS-079</td><td>System shall unlock all doors on valid crash signal (e.g., pulse&gt;5V for &gt;10ms), verifiable by signal injection.</td></tr><tr><td>Status</td><td>Approved</td></tr><tr><td>Derived from (customer requirement)</td><td>REQ-II-5.6</td></tr><tr><td>Safety Rating</td><td>ASIL A</td></tr><tr><td>Priority</td><td>High</td></tr><tr><td>Risk</td><td>High</td></tr><tr><td>Verification Method</td><td>Test</td></tr><tr><td>Test Level</td><td>System</td></tr><tr><td>Discipline</td><td>SYS/HW</td></tr><tr><td>Verification Criteria</td><td>Signal injection passed; 100% unlocks on pulse&gt;5V for &gt;10ms; no misses.</td></tr></tbody></table></figure>
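To make the verification criterion concrete, here is a minimal sketch of the pulse check behind SysRS-079 (signal above 5 V for more than 10 ms). The sampling period and all names are our own illustrative assumptions, not generated artifacts from the experiment.

```python
# Minimal sketch of the pulse criterion behind SysRS-079: a crash signal
# is valid when it stays above 5 V for more than 10 ms. The sampling
# period and all names here are illustrative assumptions.

def crash_pulse_detected(samples_v, sample_period_ms=1.0,
                         threshold_v=5.0, min_duration_ms=10.0):
    """Return True once the signal has been above threshold_v for
    longer than min_duration_ms (each sample covers one period)."""
    run_ms = 0.0
    for v in samples_v:
        run_ms = run_ms + sample_period_ms if v > threshold_v else 0.0
        if run_ms > min_duration_ms:
            return True
    return False

print(crash_pulse_detected([6.0] * 12))  # 12 ms above 5 V -> True
print(crash_pulse_detected([6.0] * 8))   # only 8 ms -> False
```

This is exactly the kind of check the "verifiable by signal injection" clause targets: inject a synthetic pulse and confirm the detection outcome.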



<p>See <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.2">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.2</a> for the complete document.</p>



<h3 class="wp-block-heading">System Architecture (SYS.3)</h3>



<p>SYS.3 (system architecture) was derived from SYS.2 and contained a textual architecture plus a simple LLM-generated block diagram (a separate query). Though basic, it demonstrated consistent traceability and structure typical for ASPICE-compliant work.</p>



<figure class="wp-block-image size-large is-resized"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS-3-Block-Diagram.png"><img loading="lazy" decoding="async" width="1024" height="683" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS-3-Block-Diagram-1024x683.png" alt="" class="wp-image-3618" style="width:518px;height:auto" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS-3-Block-Diagram-1024x683.png 1024w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS-3-Block-Diagram-300x200.png 300w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS-3-Block-Diagram-768x512.png 768w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS-3-Block-Diagram-675x450.png 675w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS-3-Block-Diagram.png 1536w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a><figcaption class="wp-element-caption">A simple system-level block diagram</figcaption></figure>



<p>The result was, at best, a glimpse of the architecture, but it gave at least a rough idea of the system architectural design. A full-blown architecture could be further refined using SysML (see SWE.2 examples).</p>



<p>SYS.3 full set: <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.3">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.3</a></p>



<h3 class="wp-block-heading">Software Requirements (SWE.1)</h3>



<p>33 SWE.1 software requirements were derived from SYS.2 and SYS.3, retaining the traceability from both levels.</p>



<figure class="wp-block-image size-large"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE.1-scaled.png"><img decoding="async" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE.1-1024x223.png" alt="" class="wp-image-3619"/></a></figure>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>SwRS-001</td><td>Software shall implement the finite-state machine with states Locked, Transition, Unlocked, handling retries and watchdog recovery.</td></tr><tr><td>Status</td><td>Approved</td></tr><tr><td>Trace from SYS.2</td><td>SysRS-061; SysRS-072; SysRS-093; SysRS-100; SysRS-114; SysRS-109; SysRS-110</td></tr><tr><td>Trace from SYS.3</td><td>ARC-STM-003; ARC-SCN-001/002/004</td></tr><tr><td>Category</td><td>Functional</td></tr><tr><td>Priority</td><td>High</td></tr><tr><td>Risk</td><td>Medium</td></tr><tr><td>Verification Method</td><td>SIL model test + HIL timing</td></tr><tr><td>Discipline</td><td>SW (meaning: no FuSa or Cybersecurity)</td></tr><tr><td>Verification Criteria (KGAS-compliant)</td><td>All transitions executed ≤500 ms; retries ≤3; on watchdog/reset → fail-safe unlock; 100% state/transition coverage</td></tr></tbody></table></figure>
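As a rough illustration of what SwRS-001 asks for, here is a minimal sketch of such a state machine with states Locked, Transition, and Unlocked, bounded retries, and a fail-safe unlock on watchdog reset. The event names and retry policy are illustrative assumptions, not the demo project's actual design.

```python
# Minimal sketch of the SwRS-001 state machine: Locked, Transition,
# Unlocked, bounded retries, and a fail-safe unlock on watchdog reset.
# Event names and the retry policy are illustrative assumptions.

class DoorLockFsm:
    MAX_RETRIES = 3

    def __init__(self):
        self.state = "Locked"
        self.retries = 0

    def on_event(self, event):
        if event == "watchdog_reset":
            # Fail-safe behaviour: recovery always ends in Unlocked.
            self.state, self.retries = "Unlocked", 0
        elif event == "unlock_request" and self.state == "Locked":
            self.state = "Transition"
        elif event == "actuation_done" and self.state == "Transition":
            self.state, self.retries = "Unlocked", 0
        elif event == "actuation_failed" and self.state == "Transition":
            self.retries += 1
            if self.retries > self.MAX_RETRIES:
                # Give up after the retry budget, in the safe state.
                self.state = "Unlocked"
        elif event == "lock_request" and self.state == "Unlocked":
            self.state = "Locked"
        return self.state
```

A structure like this also makes the verification criterion (100% state/transition coverage) directly testable.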



<p>Similar to SYS-levels, we had to iterate several times to achieve a more realistic level of granularity in the software requirements derived from the SYS.2/SYS.3 documents.</p>



<p>See <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.1">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.1</a> </p>



<h3 class="wp-block-heading">Software Architecture (SWE.2)</h3>






<p>The software architecture was derived from SWE.1, resulting in a textual architecture specification of the kind commonly used in ASPICE-compliant projects. ChatGPT automatically added the relevant aspects of the software architecture:</p>



<ul class="wp-block-list">
<li>SW components</li>



<li>SW interfaces</li>



<li>Dynamic aspects</li>



<li>State machines</li>



<li>SW data types</li>



<li>Traceability</li>



<li>Non-functional requirements elements (e.g., cybersecurity)</li>
</ul>



<p>In addition, LLMs proposed using PlantUML diagrams and generated them.</p>



<figure class="wp-block-image size-full is-resized"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE2-State-Machine.png"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE2-State-Machine.png" alt="" class="wp-image-3625" style="width:425px;height:auto" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE2-State-Machine.png 1024w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE2-State-Machine-300x300.png 300w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE2-State-Machine-150x150.png 150w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE2-State-Machine-768x768.png 768w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SWE2-State-Machine-70x70.png 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a><figcaption class="wp-element-caption">A state machine, generated by ChatGPT</figcaption></figure>



<p>(For the complete SWE.2 content, including the images and the PlantUML diagrams, see <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.2">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.2</a>.)</p>



<h3 class="wp-block-heading">SW Detailed Design (SWE.3)</h3>



<p>SWE.3 work products were created on two levels:</p>



<ul class="wp-block-list">
<li>Software detailed design</li>



<li>Software Units</li>
</ul>



<p>The <strong>detailed design</strong> came out very simplified, but we did not drill down further, as our intention—as throughout the experiment—was to demonstrate a proof-of-concept methodology. Even with a simple prompt, ChatGPT was able to generate a fairly comprehensive specification in one Excel workbook, including</p>



<ul class="wp-block-list">
<li>Module Units</li>



<li>API</li>



<li>Algorithms</li>



<li>Data dictionary</li>



<li>Error handling</li>



<li>Calibration</li>



<li>Unit test hooks</li>



<li>Traceability records</li>
</ul>



<p>For the very first iteration of the detailed design documentation, the result was pretty impressive.</p>



<figure class="wp-block-image size-full is-resized"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/DD03.png"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://projectcrunch.com/wp-content/uploads/2025/11/DD03.png" alt="" class="wp-image-3628" style="width:310px;height:auto" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/DD03.png 1024w, https://projectcrunch.com/wp-content/uploads/2025/11/DD03-300x300.png 300w, https://projectcrunch.com/wp-content/uploads/2025/11/DD03-150x150.png 150w, https://projectcrunch.com/wp-content/uploads/2025/11/DD03-768x768.png 768w, https://projectcrunch.com/wp-content/uploads/2025/11/DD03-70x70.png 70w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a><figcaption class="wp-element-caption">A generated detailed design sequence diagram</figcaption></figure>






<p>(see <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.3/DD">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.3/DD</a> for the full content).</p>



<p>The LLM also generated units as C source code.</p>



<p><strong>Software Units</strong> were traced to the software architecture (SWE.2) and the software requirements (SWE.1). The resulting example (C code) consisted of 34 units.</p>



<p>Example of a simple CAN adapter:</p>



<pre class="wp-block-code"><code>/* ==========================================================================
 * Unit: can_adp (SWU-011) - implementation
 * Trace: SWE.1 SwRS-010, SwRS-019, SwRS-025 | SWE.2 SWD-CMP-011
 * ========================================================================== */
#include "can_adp.h"
#include &lt;string.h&gt;

#define MAX_ID 0x7FFu /* placeholder allowed range */
static uint8_t s_last_seq_table&#91;16] = {0}; /* LUT by (id &amp; 0xF) */

static bool id_allowed(uint32_t id)
{
    /* Placeholder policy: standard ID range only */
    return (id &lt;= MAX_ID);
}

static bool crc_ok(const CANMsg_t* m)
{
    /* Placeholder CRC check stub: always true, to be replaced with real CRC */
    (void)m;
    return true;
}

bool CAN_Validate(const CANMsg_t* m)
{
    if (m == 0) return false;
    if (!id_allowed(m-&gt;id)) return false;
    if (m-&gt;dlc &gt; 64u) return false;
    if (!crc_ok(m)) return false;

    uint8_t* pseq = &amp;s_last_seq_table&#91;m-&gt;id &amp; 0xFu];
    uint8_t last = *pseq;
    if (m-&gt;seq == last) {
        return false; /* duplicate */
    }
    /* allow wrap-around; only reject if strictly older */
    if ((uint8_t)(m-&gt;seq - last) &gt; 200u) {
        return false;
    }
    *pseq = m-&gt;seq;
    return true;
}
</code></pre>



<p>Like the rest of the example, the code is exceedingly simplified, but it appears at least syntactically correct. </p>



<p>(The complete specification is located here: <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.3/Unit%20Construction">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.3/Unit%20Construction</a>)</p>



<h2 class="wp-block-heading">The Right Side of V</h2>



<h3 class="wp-block-heading">System Qualification Test (SYS.5)</h3>



<p>Derived from the system requirements (SYS.2), a complete set of system test cases was generated, based on the requirements and the verification criteria specified therein. </p>



<figure class="wp-block-image size-large"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5-scaled.png"><img loading="lazy" decoding="async" width="1024" height="402" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5-1024x402.png" alt="" class="wp-image-3642" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5-1024x402.png 1024w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5-300x118.png 300w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5-768x302.png 768w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5-1536x604.png 1536w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5-2048x805.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a><figcaption class="wp-element-caption">LLM-generated system qualification test cases</figcaption></figure>



<p>In this example (SYS-TC-007), the LLM generated the following values for the test case:</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td>SYS-TC-007</td><td>Crash Unlock Timing</td></tr><tr><td>Purpose</td><td>Force unlock on crash within time budget.</td></tr><tr><td>Priority</td><td>High</td></tr><tr><td>ASIL</td><td>A</td></tr><tr><td>Cybersecurity</td><td>No</td></tr><tr><td>Pre-condition</td><td>State=Locked; Crash_Line controllable; &#8211; Power Supply: programmable 0–16 V, ripple &lt;50 mV &#8211; DMM/ADC tap for VBAT &#8211; Oscilloscope (≥1 MS/s) on Motor_En, Motor_PWM, OCSense &#8211; CAN interface (FD-capable) logs @500k/2M, IDs per DBC &#8211; RF TX emulator with frame scripting &#8211; Digital IO to assert Crash_Line &#8211; Time sync via PPS or shared trigger</td></tr><tr><td>Test Steps</td><td>1) Scope CH1=Crash_Line, CH2=Motor_En. 2) Arm single-shot trigger on Crash_Line rising. 3) At T0, assert Crash_Line HIGH. 4) Measure Motor_En rising at T1; compute start latency T1-T0. 5) Verify completion and status (CAN 0x5A1) at T2/T3.</td></tr><tr><td>Expected Results</td><td>Unlock actuation starts quickly; completes; status reported.</td></tr><tr><td>Acceptance Criteria</td><td>Start latency ≤ 100 ms in 10/10 trials; status publish per normal (≤100 ms after completion).</td></tr><tr><td>Verification Method</td><td>Test</td></tr><tr><td>Environment</td><td>HIL/Vehicle</td></tr><tr><td>Status</td><td>Planned</td></tr><tr><td>Trace to SysRS</td><td>SYSRS-007 SYSRS-024</td></tr></tbody></table></figure>



<p>As a nice by-product, ChatGPT automatically generated a traceability record as a requirements test coverage metric:</p>



<figure class="wp-block-image size-full"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5_Coverage.png"><img loading="lazy" decoding="async" width="985" height="871" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5_Coverage.png" alt="" class="wp-image-3644" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5_Coverage.png 985w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5_Coverage-300x265.png 300w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SYS5_Coverage-768x679.png 768w" sizes="auto, (max-width: 985px) 100vw, 985px" /></a><figcaption class="wp-element-caption">Test coverage of SYS.2 requirements</figcaption></figure>



<p>(Complete system test catalog: <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.5">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SYS.5</a>)</p>



<h3 class="wp-block-heading"><strong>SYS.4, SWE.6, SWE.5, and SWE.4</strong></h3>



<p>The remaining test cases on the right side of the V have been derived from the respective levels (SYS.3, SWE.1, SWE.2, and SWE.3) in the same way.</p>



<p>See the remaining test catalogs:</p>



<ul class="wp-block-list">
<li>SWE.5 SW integration test: <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.5">https://github.com/CORE-SPICE/DOORLOCK_DEMO/tree/main/SWE.5</a></li>



<li>SWE.4 unit tests: <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/SWE.4/Door_Lock_Control_ECU_SWE4_Unit_Test_Catalogue.xlsx">https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/SWE.4/Door_Lock_Control_ECU_SWE4_Unit_Test_Catalogue.xlsx</a></li>
</ul>



<h2 class="wp-block-heading">Quality Assurance (SUP.1)</h2>



<p>Using Grok, we calculated a very simplified traceability coverage, which revealed a few gaps.</p>



<p>Expanding the report to include more complex traceability concepts appears realistic, but we did not delve into the traceability aspect any further.</p>



<p>After a complete iteration of the door lock system, we also audited the results using Grok 4.0 to identify consistency and traceability gaps, which suggested the potential to automate quality assurance.</p>



<p>(See <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/Door_Lock_Control_Traceability_Coverage.xlsx">https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/Door_Lock_Control_Traceability_Coverage.xlsx</a>)</p>
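<p>At its simplest, such a traceability-coverage check is a scan for artifacts that lack an upward trace. A minimal sketch, with made-up artifact IDs (a real project would export the link table from its ALM tool):</p>

```python
# Sketch: flag artifacts with a missing upward trace between adjacent V-levels.

def find_gaps(links):
    """links: child artifact ID -> parent artifact ID it traces to (None = gap)."""
    return sorted(child for child, parent in links.items() if parent is None)

links = {
    "SYS3-ARCH-02": "SYSRS-007",    # architecture element traces to a SYS.2 requirement
    "SWE1-REQ-11": "SYS3-ARCH-02",  # software requirement traces to the architecture
    "SWE1-REQ-12": None,            # gap: no upward trace recorded
}
print(find_gaps(links))  # ['SWE1-REQ-12']
```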



<figure class="wp-block-image size-large"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Traceability-Coverage.png"><img loading="lazy" decoding="async" width="1024" height="226" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Traceability-Coverage-1024x226.png" alt="" class="wp-image-3646" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Traceability-Coverage-1024x226.png 1024w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Traceability-Coverage-300x66.png 300w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Traceability-Coverage-768x169.png 768w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Traceability-Coverage.png 1450w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>(See <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/2025-10-13_Audit%20Report.docx">https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/2025-10-13_Audit%20Report.docx</a>)</p>



<p>Looking at the generated system specification from a quality perspective, we wondered which gaps and improvements would be necessary to enhance its quality. Using a single prompt, we generated a high-quality report.</p>



<figure class="wp-block-image size-large is-resized"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SUP1.png"><img loading="lazy" decoding="async" width="688" height="1024" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SUP1-688x1024.png" alt="" class="wp-image-3647" style="width:485px;height:auto" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SUP1-688x1024.png 688w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SUP1-201x300.png 201w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SUP1-768x1144.png 768w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-SUP1.png 936w" sizes="auto, (max-width: 688px) 100vw, 688px" /></a></figure>



<p>This is an excerpt of the audit findings. See <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/2025-10-13_Audit%20Report.docx">https://github.com/CORE-SPICE/DOORLOCK_DEMO/blob/main/2025-10-13_Audit%20Report.docx</a> for the complete document.</p>



<p>The result is, of course, very simplified and most likely incomplete. However, it demonstrated the potential of using LLM-generated documentation for quality assurance and compliance.</p>



<h2 class="wp-block-heading">Key Observations</h2>



<p>We were able to create a “zero draft” of the specification documents required by ASPICE at the SYS and SWE levels, including full traceability, in just one day. The results were impressive at first sight but overly superficial on closer inspection. However, we must not forget that the documentation was generated in a few hours of work; even such a simple but comprehensive draft would otherwise take weeks to achieve, at a cost of tens of thousands of dollars.</p>



<p>The best way to work with LLMs is to iterate on the models until the desired level of quality is achieved.</p>



<p>The central insight is that the future of automotive development will be hybrid, combining human expertise with AI-generated work products.</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>This experiment proved that LLMs can produce consistent, compliant, structured, and traceable documentation for an automotive subsystem in a single day. While far from replacing engineers, they can jump-start the development process and ensure a consistent baseline across the V-model.</p>



<p>Correctness, reasoning, and tool integration remain open challenges—but the trajectory is clear. With proper human oversight, AI will become a standard part of the automotive engineering toolkit, reshaping how projects start and how compliance is achieved.</p>



<p>The CORE SPICE approach captures this philosophy: Automate everything that can be automated—and let humans focus on what truly matters.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em><strong>Reference</strong></em></p>



<p>[1] All files created in this example: <a href="https://github.com/CORE-SPICE/DOORLOCK_DEMO">https://github.com/CORE-SPICE/DOORLOCK_DEMO</a></p>



<figure class="wp-block-image size-large is-resized"><a href="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Git-Repo.png"><img loading="lazy" decoding="async" width="492" height="1024" src="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Git-Repo-492x1024.png" alt="" class="wp-image-3649" style="width:268px;height:auto" srcset="https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Git-Repo-492x1024.png 492w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Git-Repo-144x300.png 144w, https://projectcrunch.com/wp-content/uploads/2025/11/Door-Lock-Git-Repo.png 678w" sizes="auto, (max-width: 492px) 100vw, 492px" /></a></figure>



<p>[2] VDA KLH: <a href="https://vda-qmc.de/wp-content/uploads/2023/11/KLH_Gelbband_2023_EN.pdf">https://vda-qmc.de/wp-content/uploads/2023/11/KLH_Gelbband_2023_EN.pdf</a></p>



<p>[3] KGAS 4.2 (not publicly available online)</p>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Car IT Reloaded has been released!</title>
		<link>https://projectcrunch.com/car-it-reloaded-has-been-released/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Fri, 24 Oct 2025 18:18:18 +0000</pubDate>
				<category><![CDATA[Management]]></category>
		<category><![CDATA[CORE SPICE]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3606</guid>

					<description><![CDATA[From the book back page: This book provides an overview of the many new features becoming a reality in connected cars. It covers everything from the integration of Google and Facebook to services that help <a class="mh-excerpt-more" href="https://projectcrunch.com/car-it-reloaded-has-been-released/" title="Car IT Reloaded has been released!">Read...</a>]]></description>
										<content:encoded><![CDATA[
<p>From the book back page:</p>



<p><em>This book provides an overview of the many new features becoming a reality in connected cars. It covers everything from the integration of Google and Facebook to services that help you find your parking spot, park your car via an app, or remotely close your sunroof when it&#8217;s raining.<br>The ultimate goal of this development is autonomous driving. The book includes current developments, implementation variants, and key challenges regarding safety and legal framework. It also provides information about the necessary quality standards in developing complex vehicle software-based systems.<br>Finally, the effects on the economy, society, and politics are described, with special consideration given to vehicle users, manufacturers, and suppliers.</em></p>



<p>Contents</p>



<ul class="wp-block-list">
<li>From Heritage to High-Tech: The Evolution of the Automotive Industry</li>



<li>Driving into the Future: Autonomous Vehicles</li>



<li>Digital Drive: The New Era of Connectivity</li>



<li>Quality in the Automotive Industry – From Product to Process</li>



<li>Know the Risks: Mastering Automotive Project Management</li>



<li>CORE SPICE</li>



<li>Strategic Roadmap: Shaping the Future of the Automotive Business</li>



<li>Automotive Perspectives: Resolving the Riddle</li>
</ul>



<p>Target audience</p>



<ul class="wp-block-list">
<li>Managers and specialists in software development, process quality, systems engineering, suppliers, and vehicle manufacturers</li>



<li>Drivers interested in the latest developments in automotive technology</li>
</ul>



<p>The authors<br><strong>Roman Mildner</strong>&nbsp;is a consultant, project manager, and author specializing in project organization and process quality in the automotive industry.</p>



<p><strong>Thomas Ziller</strong>&nbsp;is a project manager who shares his extensive knowledge as a lecturer at Heilbronn University of Applied Sciences.</p>



<p><strong>Franco Baiocchi</strong>&nbsp;is an Intacs&#8482;-certified Competent Assessor and project manager working as an independent consultant since 2021.</p>



<p>Enjoy!</p>



<p>Your book, Car IT Reloaded (get it <a href="https://www.amazon.de/Car-Reloaded-Disruption-Industry/dp/3658476907" data-type="page" data-id="1347">here</a> in Germany or <a href="https://www.amazon.com/Car-Reloaded-Disruption-Industry/dp/3658476907">here</a> in the US):</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>CORE SPICE Coaching Concept</title>
		<link>https://projectcrunch.com/core-spice-coaching-concept/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Sat, 30 Aug 2025 10:51:42 +0000</pubDate>
				<category><![CDATA[Management]]></category>
		<category><![CDATA[CORE SPICE]]></category>
		<category><![CDATA[Strategy]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3370</guid>

					<description><![CDATA[Refocusing from “compliance” to “project effectiveness”: When projects get stuck, CORE SPICE gets them moving. <a class="mh-excerpt-more" href="https://projectcrunch.com/core-spice-coaching-concept/" title="CORE SPICE Coaching Concept">Read...</a>]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-audio"><audio controls src="https://projectcrunch.com/wp-content/uploads/2025/08/CORE_SPICE_Coaching_Concept.mp3"></audio></figure>



<h4 class="wp-block-heading">Refocusing from “compliance” to “project effectiveness”: When projects get stuck, CORE SPICE gets them moving.</h4>



<p>There has been considerable pushback against relying on assessment models as a solution to automotive industry woes—a topic that has been generating buzz for some time. Condemning such standards has become a good source of clickbait, but there is a grain of truth in that continuous buzz. The German assessment model, Automotive SPICE (also known as ASPICE), is a VDA standard that has gained popularity as an alternative to the failed CMMI appraisal model. ASPICE is a helpful tool tied to the V-model, thus eliminating the need to invent an industry-specific project lifecycle (CMMI did not postulate any specific model like V). Its primary purpose is to assess suppliers&#8217; “worthiness” in terms of risk management, structured approach, and compliance with complex traceability requirements, so that the OEM can contract the most suitable supplier. Since the “software crisis” was proclaimed (see the 1968 NATO Conference [1]), capability models like CMM, CMMI, and ASPICE have played a role in addressing the challenge of software complexity in system parts development (see [2] for a broader context).</p>



<p>Assessment models can be used as an indicator of compliance, with the expectation that demanding specific compliance elements can be directly translated into efficient processes. However, the high number of failed ASPICE process improvement projects points to an efficiency gap. The root cause of this problem likely lies in the expectation that ASPICE could be “implemented” as a development process. Yet assessment models are not process templates. The fundamental challenge is that ASPICE does not provide a systemic resolution of its expectations. ASPICE is an analytical, highly rational approach to evaluating compliance. In contrast, designing an efficient, well-thought-out systems development process is a <em>systemic</em> task that requires a deep understanding of the organizational context and all reference processes described in ASPICE. Using assessment models as project improvement goals is a risky proposition. In fact, analytical assessments should not be the starting point for process development, as such a strategy is <em>rational </em>rather than <em>systemic</em>.</p>



<p><a href="https://thesystemsthinker.com/a-lifetime-of-systems-thinking" data-type="link" data-id="https://thesystemsthinker.com/a-lifetime-of-systems-thinking">Russell Ackoff</a>, renowned for his “systems thinking,” emphasized the distinction between analytical thinking (which breaks a system down into its parts and optimizes them individually) and systemic thinking (which focuses on the whole and the interactions between its parts), using the example of an automobile. In analytical thinking, one might assume that assembling the &#8220;best&#8221; individual components would result in a perfect whole—but Ackoff argued this is flawed because the system&#8217;s performance depends on how the parts interact, not just their separate qualities (a recording of the speech is available <a href="https://www.youtube.com/watch?v=EbLh7rZ3rhU">here</a>). In his remarks, Ackoff explains:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The NY Times reported that there are 457 automobiles available in the U.S. If you had engineers find the best part from each car, the best motor from say a Rolls Royce, the best transmission from a Mercedes, etc., and have engineers take the best parts from each car and instruct them to assemble them. Do we get the best automobile? Of course not, we don’t even get an automobile. Because the parts don’t fit together.</p>



</blockquote>



<p>He further explains:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Divide and conquer is the basic principle of Western management. Manage each department separately, and in turn, the whole will be run as well as possible. But this is <em>absolutely false</em>. (…) A system is not the sum of its parts; it’s the product of their interactions.</p>
</blockquote>



<h1 class="wp-block-heading">Assessment models are not process blueprints.</h1>



<p>ASPICE is the best V-based assessment model specifically tailored to the needs of the European automotive industry. It can provide valuable insights into weaknesses that help make product development run more smoothly and efficiently. It is, however, not a proper process “blueprint.” I have witnessed dozens of auto part suppliers “implementing ASPICE” literally. In ASPICE, there are several &#8220;processes,&#8221; including SYS.1, SYS.2, SWE.1, SUP.10, and others—all of which operate within their own siloed organizational structures when such a process blueprint is followed. BPs (base practices) are cleanly sorted in separate chapters. This often leads to team roles and activities that are redundant and poorly integrated into the development practice. For example:</p>



<ul class="wp-block-list">
<li>Once a process “MAN.3” is defined, the team tends to view MAN.3 as the job of the project lead, while the other project team members take a “hands-off” approach: “It’s MAN.3, so I am not responsible.”</li>



<li>If SUP.8 is viewed as a process, other aspects, such as release planning, continuous integration, traceability management, etc., become the responsibility of “SUP.8”—usually the Configuration Manager, creating a bottleneck within the project organization. For instance, creating a “baseline” becomes the proverbial fifth wheel: the Configuration Manager often must chase deliveries instead of collecting and assembling them into the upcoming release package.</li>



<li>Treating SYS.3 (System Architectural Design) as the architect&#8217;s isolated responsibility leads to designs that lack input from integration or test engineers, resulting in architectures that look good on paper while the practical value of the complex SysML model remains questionable (sometimes referred to as “ivory tower design”).</li>



<li>SUP.1 (Quality Assurance) is often implemented as a standalone process, embodied by the QA Manager alone. This typically results in a disconnect, where developers and engineers adopt a &#8220;throw it over the wall&#8221; mentality. Quality issues are deferred until SOP, where an assessment adds strain that increases the risk of losing key engineers and can even lead to health issues and the oft-dreaded “burnout.”</li>
</ul>






<p>Using ASPICE as a top-down process definition, rather than building a project development approach from the bottom up, typically creates an organizational liability: it becomes a detached entity within an engineering organization. Although it may not be obvious, <strong>ASPICE—while it abstractly emphasizes appropriate “resources” and mentions responsibilities and accountabilities—provides no guidance on a structured approach to effective team development</strong>. ASPICE features some 200 BPs but misses the forest for the trees: the fundamental intrinsic team motivation. A lack of personal “buy-in” from the team, which suffers from the inherent redundancy of “carbon copy” methodologies and a lack of focus on practical project contributions, is the Achilles&#8217; heel of ASPICE-based projects. Misaligned responsibilities and accountabilities can lead to a leadership vacuum and a “not-my-job” mentality.</p>



<p>The lack of “buy-in” cannot be fixed by imposing ASPICE compliance through recurring, formal assessments; it is the proverbial “spanking that continues until morale improves.” Well-educated and trained engineers know what needs to be done. They must be allowed to do so to maintain the team’s intrinsic motivation and ensure a well-understood purpose for the system under development.</p>



<h1 class="wp-block-heading">CORE SPICE&#8217;s effectiveness accelerators</h1>



<p>Since R&amp;D (research and development) projects have the flexibility to shape the project “triangle” of quality, speed, and scope (see, e.g., <a href="https://en.wikipedia.org/wiki/Project_management_triangle">here</a>), CORE SPICE primarily focuses on “Made-to-Order” (MtO) projects (see the article <a href="https://projectcrunch.com/the-right-genes-for-your-project/" data-type="post" data-id="1967">The Right Genes for Your Project</a> for the explanation of the project typology). CORE SPICE helps prepare a transition from R&amp;D to the MtO stage, but this transition is not the focus of this article. The CORE SPICE coaching approach emphasizes strong team involvement. The team is encouraged to take ownership of their work, a critical factor in the success of MtO projects. The Team Capability Coach (TCC) plays a central role in the CORE SPICE MtO framework, emphasizing a shared sense of urgency and a quality-focused mindset. Through hands-on coaching, CORE SPICE addresses the “buy-in” challenge directly, making quality and development speed everyone’s responsibility. By following CORE SPICE’s clear principles, teams stay focused and energized, which supports efficient and cohesive project delivery.</p>



<p><strong>The following measures for project speed and quality help the team to focus on effective project velocity:</strong></p>



<ol class="wp-block-list">
<li>“No task left behind”—own your task</li>



<li>Maintain the sense of urgency—there is no time like “now.”</li>



<li>Establish the principle of “end-to-end responsibility”</li>



<li>Constantly assess the team</li>



<li>Automate everything</li>
</ol>



<p><strong>Measure 1: “No Task Left Behind.”</strong><br>Whenever a project member identifies a critical risk (e.g., a missing requirement or a design flaw), it automatically becomes a task “owned” by the task creator until the issue is resolved or effectively assigned to the engineer who can address it more efficiently and effectively. This must always involve an explicit “handover” agreed upon bilaterally to avoid the “throwing the problem over the fence” syndrome. While risks must be taken in fast-paced automotive projects, those risks should be either resolved or reduced, so that such problems won’t “blow up in our faces” at a critical project phase.</p>



<p><em><strong>Corollary to Measure 1</strong>: Creating a task in the issue management system (e.g., JIRA) alone does not resolve the issue; instead, it adds to the overall workload—an ever-growing “backlog” of owner-less (or even entirely redundant or abandoned) tasks. It initiates a sequence of steps that must be tracked, managed, resolved, reviewed, and closed. Managing tasks in an issue management system is expensive; creating a “ticket” must carry a consequence for the ticket creator: monitoring and reviewing the final result. A good rule is to establish the “what goes around, comes around” principle: when users create new tasks, they will eventually have to deal with them. That negative cybernetic loop helps reduce the number of tickets that otherwise mushroom to an unmanageable dimension during the project lifecycle.</em></p>



<p><strong>Measure 2: Maintain the sense of urgency.</strong></p>



<p>The phenomenon of work that was initially planned optimistically but then fills out the time buffer and eventually causes schedule overruns is a pattern well-documented in the planning fallacy literature (see Kahneman and Tversky [3]).</p>



<figure class="wp-block-image size-full"><a href="https://projectcrunch.com/wp-content/uploads/2025/08/Planning-Anti-pattern.png"><img loading="lazy" decoding="async" width="796" height="301" src="https://projectcrunch.com/wp-content/uploads/2025/08/Planning-Anti-pattern.png" alt="" class="wp-image-3372" srcset="https://projectcrunch.com/wp-content/uploads/2025/08/Planning-Anti-pattern.png 796w, https://projectcrunch.com/wp-content/uploads/2025/08/Planning-Anti-pattern-300x113.png 300w, https://projectcrunch.com/wp-content/uploads/2025/08/Planning-Anti-pattern-768x290.png 768w" sizes="auto, (max-width: 796px) 100vw, 796px" /></a></figure>



<p>Once the “buffer” is established, it is primarily consumed by other activities. Thus, the task is being delayed, and when it is finally addressed, unexpected obstacles arise, which naturally delay the entire task and result in missing the original deadlines. </p>



<p>This conundrum cannot be resolved by “better planning”—it is simply unrealistic to have a waterfall planning approach where the project scope is a stable figure from project start to SOP. However, it cannot be resolved by just being “agile” either, which, in reality, frequently means a mindset in which the project scope is subject to arbitrary changes. This is never acceptable in the automotive business (see &#8220;Car IT Reloaded&#8221; [2], Chapters 5 and 6, for more details on this issue).</p>



<p>In our industry’s experience, delayed tasks are addressed by raising the level of urgency, usually facilitated by calling for a joint “surge,” also called “task force” or “bootcamp,” which enforces pragmatic measures to resolve late tasks. </p>



<p>CORE SPICE philosophy is that every MtO development project is already a “task force” without being called that. TCC’s task, among other responsibilities, is to ensure that this high level of urgency is upheld as long as substantial project risks remain unmitigated or poorly addressed, which usually means that it remains in this state until SOP.</p>



<p><strong>Measure 3: Establish the principle of “<em>end-to-end responsibility</em>”</strong></p>



<p>Team fragmentation and silo thinking are typical project anti-patterns, which are exacerbated by naively implemented ASPICE-induced processes. In such settings, different persons handle tasks throughout an implementation cycle. This might be inevitable or even desirable, as it is a good practice for a requirement not to be implemented and verified by the same engineer. However, it can also lead to an exponential multiplication and fragmentation of tasks in the issue management system (like JIRA). For instance, separate tasks may be created for customer requirements approval, systems-level requirements, and systems architecture, as well as further tasks to handle software requirements approval, software architecture approval, and software implementation. In extreme cases, different tasks are created and maintained by different engineers to identify, analyze, review, and approve the very same requirement. On the right side of the V, those tasks are often at least duplicated. Thus, for one customer requirement, dozens of tickets (tasks) are created, maintained, and resolved by multiple engineers, which poses “friction” costs that can lead to significant financial “waste”—not to mention the cost of demotivating the team due to the resulting “process management.”</p>



<p>To minimize the “handover” and reduce responsibility fragmentation in the project, a “<strong>feature-oriented</strong>” approach should be established. In such cases, a chunk of customer functionality is defined as a “feature.” A “<em>feature owner</em>” is a cross-functional person responsible for ensuring that a portion of customer requirements (“feature”) is managed by the same person from inception until its final verification (thus “end-to-end”). Each feature owner acts as a project lead for a specific portion of the functionality, e.g., “active steering safety manager,” and reports to the Project Lead within the project organization. This reduces responsibility fragmentation. For instance, in a feature-owner approach, it is not crucial whether an engineer is a “software” or a “system” engineer. Such an engineer is responsible for supporting the implementation of the feature across all disciplines, rather than focusing on “discipline-bound” tasks.</p>
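<p>The ticket arithmetic behind the feature-owner approach can be made concrete with a small sketch; the feature name and artifact IDs below are illustrative only:</p>

```python
# Sketch: per-artifact ticketing vs. one end-to-end feature ticket.

feature = {
    "name": "active steering safety manager",
    "owner": "feature owner A",  # one person, from inception to final verification
    "artifacts": {
        "SYS.2": ["SYSRS-101", "SYSRS-102"],
        "SYS.3": ["ARCH-31"],
        "SWE.1": ["SWREQ-210", "SWREQ-211"],
        "SWE.6": ["TC-501", "TC-502"],
    },
}

# Discipline-bound ticketing: one ticket per artifact (often more in practice)
per_artifact_tickets = sum(len(ids) for ids in feature["artifacts"].values())
feature_tickets = 1  # feature-oriented: one owned, end-to-end ticket
print(per_artifact_tickets, "->", feature_tickets)  # 7 -> 1
```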



<p><strong>Measure 4: Constantly assess the team</strong></p>



<p>This measure is as easy to grasp as it is “politically” difficult to enforce. The Project Lead and TCC must constantly monitor team activities to ensure everyone contributes to the project&#8217;s success. If the Project Lead and TCC agree that a project member proves to be ineffective in relation to the project&#8217;s mission, a swift replacement is necessary. Keeping unhappy and ineffective team members on the team demotivates the rest; thus, such a situation must be addressed immediately. A “pooling” principle, as described in our book &#8220;CAR IT Reloaded&#8221; ([2], Chapter 5) may be the organizational solution to this issue.</p>



<p><strong>Measure 5: Automate everything</strong></p>



<p>In every project, the team must perform repetitive tasks that, at least after the initial stage, become tedious, monotonous, and frustrating. Every project activity must therefore be automated as effectively as possible. Examples of such repetitive activities could be:</p>



<ul class="wp-block-list">
<li>Creating a peer review for a software component</li>



<li>Creating links between different elements across the V, like links between software components and software units or test cases.</li>



<li>Generating standardized project status reports</li>



<li>Compiling and updating traceability reports</li>



<li>Running routine regression tests</li>



<li>Compiling and archiving documentation for compliance audits</li>
</ul>



<p>These activities are often mechanical and intellectually unappealing to highly trained engineers. The “Project Tool Engineer” role (as described in CORE SPICE) must therefore be established from the very beginning of the project to automate such tasks and reduce the burden on the development team.</p>
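<p>As a small example of the kind of chore the Project Tool Engineer might script, the sketch below proposes links between software units and their unit tests using a naming convention. The convention (&#8220;test_&lt;unit&gt;.c&#8221;) is an assumption for illustration, not a CORE SPICE rule.</p>

```python
# Sketch: auto-propose unit <-> unit-test links by naming convention.

def propose_links(unit_files, test_files):
    """unit_files: e.g. ["motor_ctrl.c"]; test_files: e.g. ["test_motor_ctrl.c"].
    Returns (links, orphans): matched pairs and tests with no matching unit."""
    units = {f.rsplit(".", 1)[0] for f in unit_files}
    links, orphans = [], []
    for t in test_files:
        stem = t.rsplit(".", 1)[0]
        unit = stem[len("test_"):] if stem.startswith("test_") else stem
        (links if unit in units else orphans).append((t, unit))
    return links, orphans

links, orphans = propose_links(
    ["motor_ctrl.c", "crash_unlock.c"],
    ["test_motor_ctrl.c", "test_rf_decode.c"],
)
print(links)    # [('test_motor_ctrl.c', 'motor_ctrl')]
print(orphans)  # [('test_rf_decode.c', 'rf_decode')]
```

<p>Orphaned tests (and, symmetrically, untested units) can then be reported automatically instead of being chased by hand.</p>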



<p>These measures are a substantial part of the CORE SPICE mindset. Their advantage lies in acknowledging that automotive product development (particularly MtO projects) is inherently non-deterministic. The project “triangle” is exceedingly inflexible in our industry for several reasons (see our book, Car IT Reloaded [2], Chapters 4, 5, and 6). It can rarely be controlled by negotiating a modified project scope (as the customer seldom accepts it), and it cannot be tamed by enforcing dogmatic obedience to assessment models. Only a systemic approach like CORE SPICE addresses the essential aspects and challenges in complex automotive MtO projects (see the 12 CORE SPICE Principles and <a href="https://projectcrunch.com/ecst/" data-type="post" data-id="2656">ECST</a>—effective critical systems thinking—for details).</p>



<p>Applying these five coaching measures revitalizes a project, significantly increasing the likelihood of its success.</p>



<h1 class="wp-block-heading">Make it happen</h1>



<p>CORE SPICE is a straightforward coaching concept inspired by industry needs for compliance, such as ASPICE, ISO 26262, and ISO 21434. There are hundreds of other regulations and standard specifications that an automotive product development team is expected to follow. Most of them are contractual obligations that should be taken seriously—not for the “standard’s” sake, but for the safety and security of the product in development. </p>



<p>The automotive industry has little need for further and more complex regulations and mandatory standards. Time is running out as the electric vehicle breakthrough has intensified competition, demanding rapid innovation. CORE SPICE is a coaching concept that emphasizes the actions required to deliver high-quality products efficiently. Its measures enhance development velocity and product quality, supported by the critical role of the Team Capability Coach. We believe that only true leaders will retain a competitive edge in this fast-paced industry. Those who succeed will shape the future of automotive innovation, and CORE SPICE empowers them to lead that transformation.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>[1] NATO SOFTWARE ENGINEERING CONFERENCE 1968<br><a href="http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF">http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF</a></p>



<p>[2] Car IT Reloaded, 2025<br>Springer Vieweg Wiesbaden, <a href="https://link.springer.com/book/9783658476908">https://link.springer.com/book/9783658476908</a></p>



<p>[3] D. Kahneman &amp; A. Tversky, &#8220;Intuitive Prediction: Biases and Corrective Procedures,&#8221;<br>TIMS Studies in the Management Sciences</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>


<div class="wp-block-image">
<figure class="alignleft"><a href="https://www.linkedin.com/in/romanmildner/"><img decoding="async" src="https://projectcrunch.de/wp-content/uploads/2023/10/Hyde-Park-e1697533844778.png" alt="" class="wp-image-2533"/></a></figure>
</div>


<p>Let’s start a conversation on <strong><a href="https://www.linkedin.com/in/romanmildner/">LinkedIn</a></strong> or <strong><a href="https://twitter.com/RomanMildner">X.com</a></strong> (formerly Twitter).</p>



]]></content:encoded>
					
		
		<enclosure url="https://projectcrunch.com/wp-content/uploads/2025/08/CORE_SPICE_Coaching_Concept.mp3" length="18308062" type="audio/mpeg" />

			</item>
		<item>
		<title>AI Is Devouring the World? Not So Fast.</title>
		<link>https://projectcrunch.com/ai-is-devouring-the-world-not-so-fast/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Wed, 13 Aug 2025 21:35:41 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3359</guid>

					<description><![CDATA[Remember Marc Andreessen's line about software eating the world? Today, the “software” hype has been replaced by the new “AI” hype. <a class="mh-excerpt-more" href="https://projectcrunch.com/ai-is-devouring-the-world-not-so-fast/" title="AI Is Devouring the World? Not So Fast.">Read...</a>]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-audio"><audio controls src="https://projectcrunch.com/wp-content/uploads/2025/08/Is-Devouring-the-World-Not-So-Fast.mp3"></audio></figure>



<h4 class="wp-block-heading">Remember Marc Andreessen&#8217;s <a href="https://a16z.com/why-software-is-eating-the-world">line</a> about software eating the world? Today, the “software” hype has been replaced by the new “AI” hype.</h4>



<p>The amount of money being spent on AI solutions is genuinely <a href="https://ourworldindata.org/grapher/private-investment-in-artificial-intelligence">staggering</a>. Worldwide corporate investment in AI is estimated to have reached at least 150 billion US dollars annually, and the cumulative global investment has probably crossed US$1 trillion—and yet, it appears that all we get is fluffy cat videos and creative images. Popular generative AI tools such as ChatGPT can generate images, music, videos, charts, and more.</p>



<p>However, the results—while entertaining—are still limited to non-deterministic content: the outcomes are often surprising and creative, but they don’t have to be “correct.” Correctness, however, is what is desperately needed: only correct results are accepted in real-life corporate and legal settings. Sometimes, a single wrong figure in a presentation can ruin your day—or even your entire professional career. Especially in large, bureaucratic organizations, correctness is the be-all and end-all of daily business.</p>



<p>Before the generative AI boom, I assumed that genuine creativity—or at least what most of us accept as such—would be the very last victim of AI devouring the world. I was wrong. Quirky presentations, a catchy tune, or a viral meme? AI delivers, often better than a room full of brainstormers. Content generated by LLMs is fun, flashy—and arguably cost-effective (which is a euphemism for “cheap”). Artists, writers, marketers, and the like appear to be becoming disposable. Those roles could eventually disappear as generative AI pumps out endless variations on demand. It may be “cheap” (as argued <a href="https://projectcrunch.com/the-great-ai-blabbermouth/" data-type="post" data-id="3334">here</a>), but it can pass as “good enough” for a vast number of creative tasks.</p>



<p>Here is the thing: AI excels at creativity precisely because creativity usually doesn&#8217;t demand correctness (a.k.a. “accuracy”). Funny cat videos are always, well, funny and entertaining, but let’s take a closer look at a task that is on the opposite end of the creativity spectrum: a tax accountant. Everyone (well, most people who create any commercial value, at least) must pay taxes. In most developed countries, the tax code is overly complicated. A simple limited company (e.g., a German GmbH) must obey dozens of often confusing rules and regulations. A small business can&#8217;t run without an exceedingly expensive tax advisor. There are no shortcuts; any deviations from the strict tax code can cause trouble or even land one in jail.</p>



<p>The AI industry is admittedly struggling with “AI reasoning.” The reason is that generative AI is not intelligence; it is a brute-force extrapolation that can only approximate the data its LLMs were trained on. LLMs arguably offer nothing in terms of reasoning. They can smell like it, sound like it, walk like it, look like it—but they cannot actually reason, no matter how much data is thrown at a multi-layer neural network.</p>



<p>Tasks requiring &#8220;correctness” can be nearly impossible to solve with brute-force heuristics like LLMs. A disruptive innovation could involve combining LLMs with dedicated reasoning models. I am convinced that many companies and computer scientists are working on such hybrid “reasoning AI models,” but—at least as of this writing—there is no sign of any such innovation.</p>



<p>That’s a frustrating state of AI affairs: an AI tax advisor correct enough to replace the human consultant of even a small German GmbH is nowhere to be seen. All the tax advisors I have asked whether AI could replace them soon just laugh at me: “No way, not anytime soon” is their usual response.</p>



<p>We have to accept that generative AI is not intelligent at all. It is funny and helpful—but not smart in a human sense. The entire discussion about “AGI” (artificial general intelligence) seems to be a smoke screen for NVIDIA investors.</p>



<p>As long as AI is not correct, it cannot be called “intelligent.”</p>



<p>Alan Turing proposed that a “Turing Test” could demonstrate intelligence: an AI passes if it cannot be distinguished from a human being. My impression is that a generative AI agent (e.g., an AI phone agent) can already fool a human into thinking it is human. To me, that does not prove much. A real “Post-Turing Test” should require reasoning correctly, without the usual “hallucinations” from which all generative AI tools suffer.</p>



<p>A “correct” AI entity that can pass the “Post-Turing Test” and act indistinguishably from a trained and experienced tax advisor is currently not in sight. It may happen sometime around 2030. Until then, we will continue enjoying AI-powered cat videos and see how long big tech can sustain throwing billions and trillions of dollars at ever more complex LLMs.</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>


<div class="wp-block-image">
<figure class="alignleft"><a href="https://www.linkedin.com/in/romanmildner/"><img decoding="async" src="https://projectcrunch.de/wp-content/uploads/2023/10/Hyde-Park-e1697533844778.png" alt="" class="wp-image-2533"/></a></figure>
</div>


<p>Let’s start a conversation on <strong><a href="https://www.linkedin.com/in/romanmildner/">LinkedIn</a></strong> or <strong><a href="https://twitter.com/RomanMildner">X.com</a></strong> (formerly Twitter).</p>



]]></content:encoded>
					
		
		<enclosure url="https://projectcrunch.com/wp-content/uploads/2025/08/Is-Devouring-the-World-Not-So-Fast.mp3" length="5842133" type="audio/mpeg" />

			</item>
		<item>
		<title>The Great AI Blabbermouth</title>
		<link>https://projectcrunch.com/the-great-ai-blabbermouth/</link>
		
		<dc:creator><![CDATA[Roman Mildner]]></dc:creator>
		<pubDate>Mon, 07 Jul 2025 20:03:23 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Insights]]></category>
		<guid isPermaLink="false">https://projectcrunch.com/?p=3334</guid>

					<description><![CDATA[Artificial intelligence has exploded like an artificial H-bomb, leaving a mushroom cloud of hype in its wake. But here's the uncomfortable truth nobody wants to admit: What we call "artificial intelligence" is neither artificial nor intelligent. It's essentially a sophisticated human-made pattern recognition system with some degree of self-modification and a notorious inclination to hallucinate. <a class="mh-excerpt-more" href="https://projectcrunch.com/the-great-ai-blabbermouth/" title="The Great AI Blabbermouth">Read...</a>]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-audio"><audio controls src="https://projectcrunch.com/wp-content/uploads/2025/07/The-Great-AI-Blabbermouth.mp3"></audio></figure>



<p>Artificial intelligence has exploded like a virtual H-bomb, leaving a mushroom cloud of hype in its wake. But here&#8217;s the uncomfortable truth nobody wants to admit: What we call &#8220;artificial intelligence&#8221; is neither artificial nor intelligent. It&#8217;s essentially a sophisticated human-made pattern recognition system with some degree of self-modification and a notorious inclination to hallucinate.</p>



<h2 class="wp-block-heading"><strong>The Blabbermouth Revolution</strong></h2>



<p>Ever since ChatGPT became mainstream, everyone has started using it because it can produce endless streams of content at low cost. The problem with this new digital toy is that it is little more than a digital blabbermouth—the equivalent of that person at every party who can talk about anything but actually knows nothing. Current large language models are like such annoyingly eloquent individuals with pathologically loose tongues, capable of generating confident-sounding responses on virtually any topic—without any actual comprehension.</p>



<p>And yet, these increasingly popular systems have found their perfect niche. They excel at tasks that nobody really wanted to do in the first place—those &#8220;bullshit jobs&#8221; that anthropologist David Graeber wrote about. LLMs have become the miracle cure that helps you maintain your workplace sanity while doing essentially nothing productive. They&#8217;re the ultimate enablers of busywork, which helps create an endless stream of content that nobody actually wanted in the first place.</p>



<h2 class="wp-block-heading"><strong>The Economics of Fake Intelligence</strong></h2>



<p>Here&#8217;s where basic economics kicks in. Now that everyone is using these tools, the free market mechanism becomes apparent: since this kind of &#8220;AI&#8221; is cheap, the results are also cheap. If something is easy to get, it automatically becomes worthless. &#8220;It&#8217;s the economy, stupid&#8221; all over again.</p>



<p>Thus, no matter how much it is hyped and glorified as the supposedly greatest innovation since sliced bread, LLMs might become virtually useless after all. Easy come, easy go. Just think for a minute (and you can still get it right without asking your favorite chatbot): When everyone can generate a thousand-word essay in thirty seconds, what&#8217;s a thousand-word essay worth?</p>



<h2 class="wp-block-heading"><strong>For What It&#8217;s Worth</strong></h2>



<p>We might assume it&#8217;s easy to distinguish between LLM-generated and genuine human content. I think most of us still have this ability. However, there is a long-term, subtle effect that is likely to occur gradually, perhaps too slowly to spot it: We will likely witness a &#8220;convergence crisis.&#8221; The new generation is growing up with LLM-generated content. They could actually stop learning anything else because genuine content will be too expensive. To them, this artificial output is the &#8220;real thing&#8221;—the mental baseline for what writing, thinking, and creativity are supposed to look like.</p>



<p>The older generation, which can still distinguish artificial from authentic content, will diminish. Ironically, those who remember what genuine human thought and creativity looked like may become the proverbial one-eyed among the blind. They will retain a form of literacy that is becoming increasingly rare: the ability to recognize authentic human intelligence when they see it.</p>



<p>This &#8220;convergence&#8221; won&#8217;t happen overnight. It will likely take decades for LLMs to become sufficiently advanced to be truly indistinguishable from human output. But the cultural damage is already eroding the way we think and what we perceive as &#8220;normal.&#8221; We&#8217;re training an entire generation to accept mediocrity as excellence, to mistake pattern matching for thinking, and to value quantity over quality.</p>



<p>There&#8217;s no such thing as a free lunch. Cheap content—while nearly free—will have its price eventually. That price is the erosion of critical thinking.</p>



<h2 class="wp-block-heading"><strong>The Value of the Authentic</strong></h2>



<p>In this digital ocean of infinite, worthless content, only original ideas, authentic creativity, and genuine human insight will likely retain value. The irony of AI conquering the world is that, in creating a system that merely imitates human intelligence, we have made natural intelligence more precious than ever.</p>



<p>This kind of genuine, human-value-based approach may become scarce. The future belongs not to those who can generate the most content, but to those who can still think original thoughts. Those &#8220;old-fashioned&#8221; thinkers will still be able to recognize the difference between the real and the artificial. In a world drowning in digital blather, silence and authenticity become revolutionary acts.</p>



<p>The AI revolution promised to extend human intelligence. Instead, it may have created the ultimate test of whether we can preserve what makes us genuinely human in the first place.</p>






<hr class="wp-block-separator has-alpha-channel-opacity"/>


<div class="wp-block-image">
<figure class="alignleft"><a href="https://www.linkedin.com/in/romanmildner/"><img decoding="async" src="https://projectcrunch.de/wp-content/uploads/2023/10/Hyde-Park-e1697533844778.png" alt="" class="wp-image-2533"/></a></figure>
</div>


<p>Let’s start a conversation on <strong><a href="https://www.linkedin.com/in/romanmildner/">LinkedIn</a></strong> or <strong><a href="https://twitter.com/RomanMildner">X.com</a></strong> (formerly Twitter).</p>



]]></content:encoded>
					
		
		<enclosure url="https://projectcrunch.com/wp-content/uploads/2025/07/The-Great-AI-Blabbermouth.mp3" length="5664447" type="audio/mpeg" />

			</item>
	</channel>
</rss>
