<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:media="http://search.yahoo.com/mrss/" >

<channel>
	<title>Checkmarx</title>
	<atom:link href="https://checkmarx.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://checkmarx.com/</link>
	<description>The world runs on code. We secure it.</description>
	<lastBuildDate>Mon, 02 Mar 2026 11:58:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://checkmarx.com/wp-content/uploads/2024/06/cropped-cx_favicon-32x32.webp</url>
	<title>Checkmarx</title>
	<link>https://checkmarx.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Confident Developers Are the New Security Risk </title>
		<link>https://checkmarx.com/blog/confident-developers-are-the-new-security-risk/</link>
		
		<dc:creator><![CDATA[Frank Emery]]></dc:creator>
		<pubDate>Mon, 02 Mar 2026 11:58:41 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Application Security Trends & Insights]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI generated code]]></category>
		<category><![CDATA[AppSec]]></category>
		<category><![CDATA[developer assist]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=107381</guid>

					<description><![CDATA[Teams are shipping more code, faster than ever. It looks polished, it runs smoothly, and it works - so developers trust it. That's the problem.]]></description>
										<content:encoded><![CDATA[<p>AI coding tools have fundamentally changed how software gets built.&nbsp;</p>



<p>After attending OnPoint Ski &amp; Snowboard CyberCon 2026 and speaking with security and development leaders there, one theme stood out to me: teams are shipping more code, in more languages, across more projects than ever before. Features that used to take days now take minutes, and complex logic can be scaffolded from a single prompt. </p>



<p>The output is fast, it looks polished, and it runs smoothly.&nbsp;</p>



<p><em>And&nbsp;that’s&nbsp;exactly the problem.</em>&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">When Confidence Outpaces Security </h2>



<p>As developers rely more on AI tools, something subtle happens: the speed and quality of the output create <em>confidence</em>. The code looks clean, it compiles, it works as expected – so it gets trusted. </p>



<p>But AI models only predict what is likely to work; they don&#8217;t understand your threat model, and they can&#8217;t assess exploitability in your environment. AI-generated code can function perfectly and still introduce serious vulnerabilities. </p>
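<p>A minimal illustration of that gap (hypothetical code, not taken from any real incident): both functions below &#8220;work&#8221; and return the right rows for normal input, but the first interpolates untrusted input into a SQL string and is injectable, while the second binds parameters.</p>

```python
# Hypothetical sketch: this function "works" -- it runs and returns the
# expected rows for benign input -- yet it is vulnerable to SQL injection
# because it interpolates untrusted input into the query string.
import sqlite3

def find_user_unsafe(db: sqlite3.Connection, username: str):
    # Looks clean and passes a functional test, but a username like
    # "x' OR '1'='1" changes the meaning of the query entirely.
    return db.execute(
        f"SELECT id, username FROM users WHERE username = '{username}'"
    ).fetchall()

def find_user_safe(db: sqlite3.Connection, username: str):
    # Identical behavior for benign input; parameter binding removes the flaw.
    return db.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```

<p>Both versions satisfy a &#8220;does it return the user?&#8221; test, which is exactly why functional validation alone builds unearned confidence.</p>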



<p>This gap between what works and what’s secure is where risk builds exponentially.</p>



<p>This&nbsp;isn’t&nbsp;a criticism of developers. AI tools are powerful productivity accelerators, and teams&nbsp;absolutely should&nbsp;use them. But&nbsp;validating&nbsp;functionality is&nbsp;not the same as&nbsp;validating security. And right now, that distinction is getting blurred.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">More Code, Same Security Team</h2>



<p>This confidence issue&nbsp;isn’t&nbsp;happening in a&nbsp;vacuum;&nbsp;it’s&nbsp;the byproduct of broader organizational shifts.&nbsp;</p>



<p>The development lifecycle is shifting&nbsp;to be&nbsp;more agentic, more automated, and moving faster than ever. That means more code written with fewer reviewers, pull requests that are more frequent&nbsp;but also&nbsp;more complex, and an AppSec team expected to keep pace without any&nbsp;additional&nbsp;resources.&nbsp;&nbsp;</p>



<p>So,&nbsp;the backlog&nbsp;isn’t&nbsp;stabilizing –&nbsp;it’s&nbsp;growing.&nbsp;</p>



<p>I see this tension in organizations all the time. AppSec teams are expected to keep up with the speed of development while&nbsp;maintaining&nbsp;strong security standards. In practice, they&nbsp;can’t&nbsp;fully do both. Slowing down development usually&nbsp;isn’t&nbsp;an option, so security is expected to adapt.&nbsp;</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="616" height="411" src="https://checkmarx.com/wp-content/uploads/2026/03/image.png" alt="" class="wp-image-107382" srcset="https://checkmarx.com/wp-content/uploads/2026/03/image.png 616w, https://checkmarx.com/wp-content/uploads/2026/03/image-300x200.png 300w, https://checkmarx.com/wp-content/uploads/2026/03/image-400x267.png 400w" sizes="(max-width: 616px) 100vw, 616px" /></figure>



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">Development Is Now Human + AI </h2>



<p>Development is no longer purely human-led – but it&nbsp;isn’t&nbsp;exclusively&nbsp;AI-led either. It is now driven by developers working alongside AI.&nbsp;</p>



<p>AI is&nbsp;assisting, suggesting, generating, and accelerating, but humans are still making decisions and shipping code. The model has shifted from developers writing everything themselves to developers collaborating with AI systems throughout the process.&nbsp;</p>



<p>This shift significantly increases output. Teams are producing more features, services, and integrations at a much faster pace. But AI is optimized for speed and plausibility, not security. It can produce functional code, but not inherently secure code. </p>



<p>The speed AI delivers builds confidence and trust, but it also increases the likelihood of security gaps slipping through unnoticed – especially when developers are shipping code they didn’t write and don&#8217;t fully understand. We recently dug deeper into this trend in our <a href="https://checkmarx.com/ai-llm-tools-in-application-security/securing-code-no-one-actually-wrote-2/" target="_blank" rel="noreferrer noopener"><em>Don’t Trust the Code</em></a> paper, and you can read more about it <a href="https://checkmarx.com/resources/dont-trust-the-code/" target="_blank" rel="noreferrer noopener">here</a>.  </p>



<p>But these tools&nbsp;don’t&nbsp;just change how developers work – they also add new components to the software supply chain. Every&nbsp;model&nbsp;integration, MCP connection, and AI-assisted workflow becomes another potential entry point, and the environment is expanding faster than many security teams can track.&nbsp;</p>



<p>I&#8217;ve&nbsp;seen cases where thousands of AI coding assistant licenses were active before the Head of Security even knew they existed. And when organizations&nbsp;don’t&nbsp;know which AI tools are in use or how data is flowing, they&nbsp;can’t&nbsp;properly assess&nbsp;risk&nbsp;–&nbsp;and the attack surface grows,&nbsp;unnoticed.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">Security Has To Evolve </h2>



<p>One of my biggest takeaways is that if AI-driven productivity is the new baseline, security can&#8217;t operate the way it did five years ago – it must evolve across these three categories: </p>



<ol class="wp-block-list">
<li><strong>How we&nbsp;identify&nbsp;vulnerabilities in code is changing.</strong></li>



<li><strong>How we&nbsp;identify&nbsp;vulnerabilities in the tools we are using is changing.</strong></li>



<li>
<strong>How we address vulnerabilities is changing.</strong> </li>
</ol>



<p>Traditional scanners&nbsp;weren’t&nbsp;built for this environment. They struggle with modern languages and frameworks, generate noise, and&nbsp;can&#8217;t&nbsp;keep pace with modern CI/CD pipelines.&nbsp;&nbsp;</p>



<p>Meanwhile,&nbsp;AI is introducing new threat vectors:&nbsp;</p>



<ul class="wp-block-list">
<li>Generated logic that hasn’t been deeply reviewed</li>



<li>New dependencies</li>



<li>Expanded supply chain components</li>
</ul>



<p>Organizations still need every line of that code scanned quickly, with findings developers can&nbsp;actually act&nbsp;on.&nbsp;This is why&nbsp;we’re&nbsp;seeing the rise of agentic scanning approaches: hybrid engines that combine deterministic analysis with AI reasoning, LLM-powered workflows, and automated context-aware triage.&nbsp;</p>



<p>But securing the code is only&nbsp;half of&nbsp;the&nbsp;problem,&nbsp;we&nbsp;also need to secure the AI&nbsp;tools&nbsp;writing it. AI Bills of Materials&nbsp;(AI-BOMs)&nbsp;are&nbsp;emerging&nbsp;to&nbsp;provide visibility into where AI is being used, which models are connected, and how data flows through them. Securing the full AI stack is quickly becoming a core AppSec responsibility.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-5">From Backlog to Automation </h2>



<p>Detection alone&nbsp;won’t&nbsp;solve the scaling problem. The traditional&nbsp;identify&nbsp;– triage – remediate – verify cycle cannot be managed manually when code is growing exponentially. Without automation, quality declines and backlogs grow.&nbsp;</p>



<p>Agents become valuable when&nbsp;they’re&nbsp;embedded directly into the&nbsp;development&nbsp;lifecycle, especially in high-volume stages like triage, remediation, and verification. These are areas where automation can absorb the workload security teams&nbsp;can’t&nbsp;handle manually.&nbsp;</p>



<p>When agents&nbsp;operate&nbsp;within a defined AppSec strategy, they form the foundation for applications that can secure themselves, freeing teams to focus on policy and governance rather than reactive risk management.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-6">Securing at the Speed of Confidence </h2>



<p>The paradox is clear. AI increases output, which increases risk. At the same time, it increases confidence, and confident developers move faster, question less, and merge code more quickly.&nbsp;</p>



<p>But beneath that momentum, the gap between perceived security and actual security continues to widen.&nbsp;Since slowing down is not a realistic&nbsp;option, the only path forward is to secure software at the speed AI now sets.&nbsp;</p>



<p>Checkmarx&nbsp;is built for this shift.&nbsp;It&nbsp;combines deterministic scanning with&nbsp;AI-driven detection to give&nbsp;clear visibility into how AI&nbsp;is&nbsp;being used across&nbsp;development environments, while&nbsp;also automating&nbsp;remediation&nbsp;with&nbsp;tools&nbsp;like&nbsp;<a href="https://checkmarx.com/product/developer-assist/" target="_blank" rel="noreferrer noopener">Checkmarx&nbsp;Developer Assist</a>.&nbsp;&nbsp;</p>



<p>The result is security embedded directly into the development process – instead of tacked on at the end.&nbsp;</p>



<p>And the goal isn’t to reduce developer confidence – confidence is a good thing! The goal is to ensure that this confidence is earned, backed by real visibility and security controls that scale with the volume of code being produced. </p>



<p>At the end of the day,&nbsp;confident developers&nbsp;with&nbsp;guardrails in place move fast&nbsp;<em>and&nbsp;</em>stay secure. Confident developers without them just move fast.&nbsp;</p>]]></content:encoded>
					
		
		
		
		<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/03/image-150x150.png" />
		<media:content url="https://checkmarx.com/wp-content/uploads/2026/03/image.png" medium="image">
			<media:title type="html">image</media:title>
			<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/03/image-150x150.png" />
		</media:content>
	</item>
		<item>
		<title>AI Code Needs AI Security: Why Claude’s Announcement Signals a Bigger Shift </title>
		<link>https://checkmarx.com/blog/ai-code-needs-ai-security-why-claudes-announcement-signals-a-bigger-shift/</link>
		
		<dc:creator><![CDATA[Eran Kinsbruner]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 15:26:54 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Application Security Trends & Insights]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Checkmarx One]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Claude Code]]></category>
		<category><![CDATA[developer assist]]></category>
		<category><![CDATA[vibe coding]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=107161</guid>

					<description><![CDATA[Claude Code Security marks a shift toward AI-native vulnerability detection. Explore why AI-generated code demands enterprise-grade, agentic AppSec at scale.]]></description>
										<content:encoded><![CDATA[<p>Let’s&nbsp;start this article by&nbsp;stating&nbsp;that the launch of Claude Code Security is good news for the industry.&nbsp;</p>



<p>Not because it replaces traditional application security.&nbsp;</p>



<p>Not because it suddenly makes AI-generated code safe.&nbsp;</p>



<p>But because it validates something many security leaders already know: <strong>AI coding introduces new risks that require <a href="https://checkmarx.com/product/developer-assist/">AI-native, agentic application security</a>.</strong> </p>



<p>In an era where code is increasingly written by AI assistants, security cannot remain an afterthought bolted on after commit. If vulnerabilities are discovered only after the AI coding phase,&nbsp;it&#8217;s&nbsp;already too late. </p>



<p>Velocity and scale&nbsp;have&nbsp;increased, risk has compounded, and exposure scales faster than remediation.&nbsp;</p>



<p>Claude’s announcement acknowledges this reality. And&nbsp;that’s&nbsp;a positive step forward.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">
<strong>A New Way To Think About Detection</strong> </h2>



<p>At first glance, Claude Code Security and <a href="https://dev.checkmarx.com/" target="_blank" rel="noreferrer noopener">Checkmarx Developer Assist</a> may look similar. Both live in the IDE, both surface vulnerabilities, and both suggest fixes. </p>



<p>But the philosophies differ.&nbsp;</p>



<p><strong>Claude Code Security</strong>&nbsp;reasons about code the way a human security researcher might:&nbsp;mapping data flows, understanding&nbsp;component&nbsp;interactions, and&nbsp;identifying&nbsp;logical flaws that&nbsp;don’t&nbsp;match known signatures&nbsp;and predefined security rules. This reasoning-first approach allows it to uncover subtle, context-dependent vulnerabilities that traditional rule-based scanners often miss.&nbsp;</p>



<p>That’s meaningful progress. However, it is still in early preview and covers only a very specific angle of the entire Agentic AppSec lifecycle.</p>



<p><strong>Checkmarx&nbsp;Developer Assist</strong>, part of the broader&nbsp;Checkmarx&nbsp;Assist family&nbsp;and a complete Agentic AppSec platform, takes a complementary but enterprise-proven approach. It provides real-time feedback in the IDEs (Cursor, Windsurf,&nbsp;VSCode, AWS Kiro,&nbsp;JetBrains)&nbsp;across:&nbsp;</p>



<ul class="wp-block-list">
<li>SAST vulnerabilities</li>



<li>Open-source and malicious packages (SCA)&nbsp;</li>



<li>Secrets exposure</li>



<li>Infrastructure-as-Code (IaC) misconfigurations</li>



<li>Container security risks</li>
</ul>



<p>It is fast, comprehensive, and powered by years of curated security intelligence and proven rule libraries, built to operate at true enterprise scale. </p>



<p><strong>Unlike </strong>Claude Code Security, Checkmarx goes beyond pre-commit issue detection. With <strong>Safe Refactor</strong>, we validate that security fixes don&#8217;t introduce regressions, break dependencies, or disrupt the build, so remediation is secure and production-ready.</p>



<p>In simple terms: </p>



<p><strong>Claude Code Security is deep and novel.</strong> </p>



<p><strong>Developer Assist is broad, fast, and supply-chain aware.</strong>&nbsp;</p>



<p>Both&nbsp;matter,&nbsp;but&nbsp;they&nbsp;solve different layers of the problem.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">
<strong>Scope Matters in the AI Era</strong>&nbsp;</h2>



<p>Claude Code Security currently focuses on reasoning over the application code itself. But modern risk doesn’t live only in application logic. </p>



<p>It lives in:&nbsp;</p>



<ul class="wp-block-list">
<li>AI-generated dependencies&nbsp;</li>



<li>LLM models, MCPs, offensive agents, IDE extensions, SDKs, etc.</li>



<li>Malicious packages</li>



<li>Container images</li>



<li>Infrastructure misconfigurations&nbsp;</li>



<li>Exposed credentials&nbsp;</li>



<li>Policy violations across pipelines</li>



<li>Runtime environments</li>
</ul>



<p>AI coding doesn’t just produce insecure code – it accelerates insecure ecosystems. </p>



<p>And this is where a unified, enterprise-grade platform becomes critical. </p>



<p>Checkmarx One integrates with Developer Assist for broader capabilities including policy enforcement, compliance reporting, ASPM, and auditability, providing visibility across the entire AI supply chain, not just inside a single file in an IDE. </p>



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="800" height="800" src="https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22.webp" alt="Checkmarx Developer Assist in action" class="wp-image-107164" style="width:825px;height:auto" srcset="https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22.webp 800w, https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22-300x300.webp 300w, https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22-150x150.webp 150w, https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22-768x768.webp 768w, https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22-585x585.webp 585w" sizes="(max-width: 800px) 100vw, 800px" /></figure>



<p>In large enterprises, security needs to do more than catch clever bugs. It needs to enforce governance, consistency, and control across thousands of developers and millions of lines of code. </p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">
<strong>Remediation: Human-In-The-Loop, but at Scale</strong> </h2>



<p>Claude Code Security introduces an interesting concept:&nbsp;attempting&nbsp;to disprove its own findings before surfacing them. This aims to reduce false positives at the&nbsp;source, an&nbsp;important improvement over pushing noise downstream.&nbsp;</p>



<p>But accuracy in detection is only part of the equation. Even high-confidence findings create friction if remediation is slow or disconnected from the developer workflow. Developer Assist addresses this by using agentic AI remediation, initiating an AI session enriched with contextual intelligence and proprietary databases to generate safe, actionable fixes directly in the IDE. Developers can accept, refine, or interact with the agent to tailor the resolution. </p>



<p>The difference is operational scale and ecosystem integration.</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">
<strong>Enterprise Readiness Is Not an Afterthought</strong>&nbsp;</h2>



<p>Claude Code Security is currently in limited research preview.&nbsp;</p>



<p>Developer Assist, by contrast, is&nbsp;generally available&nbsp;and integrated natively into modern AI-powered IDEs. It is architected with enterprise data handling in mind,&nbsp;minimizing data exposure and ensuring sensitive source code&nbsp;remains&nbsp;protected.&nbsp;</p>



<p>For regulated industries, large enterprises, and global development organizations, these distinctions matter.&nbsp;</p>



<p>Innovation is exciting, but operational maturity is essential. </p>



<p>The Developer Assist agent described in this article is one of many agents that Checkmarx Assist offers. It joins the Triage and Remediation Assist agents, which operate at the post-commit phase of the agentic development lifecycle (ADLC), offering an agentic cleanup solution for any missed or ignored security true positives that slip into production. That second layer of defense, also part of the Checkmarx platform, ensures continuous, autonomous coverage of AI coding across large numbers of repositories and applications. </p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-5">
<strong>This&nbsp;Isn’t&nbsp;Replacement —&nbsp;It’s&nbsp;Evolution</strong>&nbsp;</h2>



<p>Market reactions often jump to “disruption.” But even financial analysts have noted that this is not a direct replacement scenario today.&nbsp;</p>



<p>The more honest framing is this:&nbsp;</p>



<p>Claude Code Security highlights a long-term shift in how vulnerabilities will be discovered,&nbsp;toward reasoning-based, AI-native analysis.&nbsp;</p>



<p>And that shift reinforces the broader truth: <strong>AI-generated code requires AI-native AppSec (agentic AppSec).</strong> </p>



<p>But AI reasoning alone does not replace enterprise-grade coverage across the supply chain, runtime, policy enforcement, compliance, and governance.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-6">
<strong>The Bigger Opportunity</strong>&nbsp;</h2>



<p>Claude’s move validates the future. It acknowledges that traditional static scanning models are not sufficient in an AI-driven development lifecycle. </p>



<p>What it does not yet deliver is a unified, enterprise-ready&nbsp;<a href="https://checkmarx.com/rsac-2026/" target="_blank" rel="noreferrer noopener">Agentic Application Security</a>&nbsp;platform spanning:&nbsp;</p>



<ul class="wp-block-list">
<li>IDE prevention</li>



<li>Post-commit triage and remediation&nbsp;</li>



<li>AI supply chain visibility&nbsp;</li>



<li>Runtime validation&nbsp;</li>



<li>Centralized governance and risk assessment&nbsp;</li>
</ul>



<p>That broader vision is where the real transformation lies.&nbsp;If&nbsp;you’re&nbsp;attending the RSAC conference, come by the&nbsp;Checkmarx&nbsp;booth to learn more about this platform.&nbsp;</p>



<p>The future of security&nbsp;isn’t&nbsp;defensive.&nbsp;</p>



<p>It’s embedded. It’s agentic. It’s platform-driven. </p>



<p>And, most importantly, it evolves as fast as the AI writing the code. </p>



<p>The era of AI coding has begun. Now AI-native AppSec must scale with it. </p>]]></content:encoded>
					
		
		
		
		<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22-150x150.webp" />
		<media:content url="https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22.webp" medium="image">
			<media:title type="html">Untitled design (22)</media:title>
			<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/02/Untitled-design-22-150x150.webp" />
		</media:content>
	</item>
		<item>
		<title>Reducing Noise With Contextual Risk Scoring: Why Critical Doesn’t Always Mean Urgent </title>
		<link>https://checkmarx.com/blog/reducing-noise-with-contextual-risk-scoring/</link>
		
		<dc:creator><![CDATA[Emma Datny]]></dc:creator>
		<pubDate>Sun, 22 Feb 2026 10:29:56 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Secure Coding Best Practices for Developers]]></category>
		<category><![CDATA[Threat Intelligence & Vulnerability Analysis]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[AppSec]]></category>
		<category><![CDATA[IDE Scanning]]></category>
		<category><![CDATA[Vulnerability Remediation]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=107057</guid>

					<description><![CDATA[AppSec teams aren’t failing to find risk in their applications, they’re overwhelmed by it. A constant flood of critical alerts, false positives, and disconnected security findings has created a severe signal‑to‑noise problem, making it nearly impossible to distinguish business risk from background static. Every commit now triggers a chain reaction of scans across SAST, SCA, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>AppSec teams aren’t failing to find risk in their applications, they’re overwhelmed by it. A constant flood of critical alerts, false positives, and disconnected security findings has created a severe signal‑to‑noise problem, making it nearly impossible to distinguish business risk from background static.</p>



<p>Every commit now triggers a chain reaction of scans across SAST, SCA, IaC, containers, APIs, secrets, and cloud infrastructure, with each producing its own findings, severity rating, and risk interpretation. And when everything appears critical, developers are left with no guidance on what to fix first. The introduction of AI coding sped everything up and propelled new risks almost overnight. While AI tools help teams ship faster, they also create more code, more components, and more attack surface – leading to more alerts and more noise.</p>



<p>The alert problem that existed before AI? It intensified. And when everything looks urgent, teams lose focus on the vulnerabilities that create business risk.</p>



<p>Developers can’t operate effectively when they’re constantly buried under alerts without prioritization or clarity. Because when they can’t distinguish between theoretical and real threats, critical vulnerabilities slip through unnoticed, exposure windows widen, and business risk increases.</p>



<p>This is exactly the outcome we need to prevent. Detecting vulnerabilities is easy; the real challenge is understanding which ones matter, why they matter, and what to fix first.</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">
<strong>The Noise Problem: Volume vs. Actionable Insights</strong>&nbsp;</h2>



<p>Noise isn’t just annoying, it’s dangerous. When teams are forced to sift through endless alerts, fatigue sets in and important issues get overlooked.</p>



<p>To make matters worse, these alerts rarely tell a coherent story. Each scanner operates independently, surfacing different symptoms of potentially related problems.</p>



<p>SAST may identify a potential injection risk, SCA may flag a critical CVE in a transitive dependency, and IaC may highlight risks in cloud configuration – all at the same time.</p>



<p>Individually, each issue appears “critical,” but without understanding how the vulnerabilities relate to each other and to real execution paths, AppSec teams are flying blind, leading to:</p>



<ul class="wp-block-list">
<li>Multiple tools reporting versions of the same underlying issue</li>



<li>High‑severity findings in code paths that cannot execute</li>



<li>Duplicate tickets routed to different teams</li>



<li>“Critical” vulnerabilities treated equally, regardless of real impact</li>
</ul>



<p>The problem isn’t the volume of alerts, but the absence of context. Raw vulnerability data means nothing without the intelligent insights to prioritize them. Because when every vulnerability is “urgent,” nothing actually is.</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">
<strong>Contextual Risk Scoring: What It Is, How It Works, and Why It Matters</strong>&nbsp;</h2>



<p>When teams understand a vulnerability’s real-world impact, they can stop chasing theoretical risks and instead fix the issues that matter most.</p>



<p>Instead of treating all “critical” tags equally, contextual risk scoring evaluates how a vulnerability behaves in your specific application and if it presents a realistic threat. This allows teams to move from severity‑driven triage to intelligent risk‑driven prioritization.</p>



<p>Contextual risk scoring takes the following into account:</p>



<p><strong>Exploitability</strong>: Is there a realistic attack path? Are exploit techniques known or emerging? Is the weakness commonly abused in the wild?</p>



<p><strong>Reachability</strong>: Is the vulnerable code path actually executed? Can untrusted input reach it? A flaw in unreachable or dead code may pose minimal risk despite its severity.</p>



<p><strong>Correlation</strong>: Do signals from multiple scanners converge on the same root issue? Correlation provides a deeper understanding of location, impact, and propagation across services.</p>



<p><strong>Business impact</strong>: How critical is the asset? Does it handle sensitive data? Is it externally exposed? Does it support a revenue‑generating or regulated function?</p>



<p>By combining these factors, contextual risk scoring aligns remediation with real exposure. This is how a “critical” issue in an unused library becomes low urgency, while a “medium” flaw in an internet-facing API becomes top priority. Severity alone can’t make that distinction, but contextual risk scoring can.</p>
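<p>As a rough sketch of how these factors might combine (the factor names, weights, and thresholds below are illustrative assumptions, not any vendor’s actual scoring model), a raw severity can be scaled by exploitability, reachability, and business impact:</p>

```python
# Illustrative sketch only: the weights here are hypothetical, not a real
# product's scoring model. The idea is that raw severity is scaled by
# context, so an unreachable "critical" can rank below a reachable "medium".
from dataclasses import dataclass

SEVERITY = {"low": 2.0, "medium": 5.0, "high": 7.5, "critical": 9.5}

@dataclass
class Finding:
    severity: str          # scanner-assigned severity label
    exploitable: bool      # realistic, known attack technique exists
    reachable: bool        # vulnerable path can execute with untrusted input
    internet_facing: bool  # asset exposure
    sensitive_data: bool   # handles PII, credentials, payments, etc.

def contextual_score(f: Finding) -> float:
    score = SEVERITY[f.severity]
    score *= 1.0 if f.exploitable else 0.5  # dampen theoretical issues
    score *= 1.0 if f.reachable else 0.1    # dead/unreachable code: near zero
    if f.internet_facing:
        score *= 1.5                        # exposure raises urgency
    if f.sensitive_data:
        score *= 1.3                        # business impact raises urgency
    return round(min(score, 10.0), 1)

# A "critical" CVE in an unreachable library vs. a "medium" flaw in a
# reachable, internet-facing API that handles PII:
unused_lib = Finding("critical", True, False, False, False)
exposed_api = Finding("medium", True, True, True, True)
```

<p>Under these toy weights, the unreachable “critical” in an unused library scores near zero while the reachable “medium” in an exposed, PII-handling API scores near the top – the inversion that contextual scoring exists to make.</p>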



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">Correlation Between Scanners: Full Context Requires Multiple Signals Working Together</h2>



<p>We need to get smarter about where we focus. Not every vulnerability is worth dropping everything for, and only teams that filter out the noise and focus on what really matters are able to stay ahead of risk.</p>



<p>Teams today rely on a variety of scanners, but no single engine provides complete risk context.</p>



<p>A dependency vulnerability flagged by SCA is just raw data until you know whether your application code calls the affected function. An exposed cloud configuration only becomes urgent when tied to the services and code running on that infrastructure.</p>



<p>Let’s look at an example:</p>



<p>SCA flags a critical CVE in a transitive dependency. On its own, it looks urgent. But the SAST scan shows no code path that calls the affected function, and runtime signals confirm the component isn&#8217;t loaded in production. Three scanners, three separate alerts – but when correlated, the actual risk is low. Meanwhile, a medium-severity SAST finding sits in an internet-facing API that handles PII, is reachable, and is exercised in production traffic. That “medium” instantly becomes the top priority.</p>
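<p>The correlation gate in that example can be sketched in a few lines (the parameter names are hypothetical, assuming each scanner reports its signal for the same component):</p>

```python
# Hypothetical sketch of cross-scanner correlation: an SCA "critical" is
# downgraded when SAST shows no call path into the component and runtime
# telemetry shows it is never loaded. Parameter names are illustrative.

def correlate(sca_severity: str, sast_call_path: bool, loaded_at_runtime: bool) -> str:
    """Collapse three scanner signals into one actionable priority."""
    if not sast_call_path and not loaded_at_runtime:
        return "low"         # flagged component is effectively dead weight
    if sast_call_path and loaded_at_runtime:
        return sca_severity  # all signals converge: trust the severity
    return "medium"          # partial evidence: worth a human look
```

<p>The point isn’t the specific rules – it’s that no single scanner has enough context to make this call on its own.</p>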



<p>That’s why correlation matters. It stitches together findings across code, dependencies, infrastructure, containers, and runtime environments – transforming disconnected alerts into a single, unified view of actual risk.</p>



<p><em>Without it, everything becomes noise.</em></p>



<p>The correlation of findings across SAST, SCA, IaC, API testing, runtime signals, container scans, and CI/CD metadata helps teams determine:</p>



<ul class="wp-block-list">
<li>When multiple alerts represent the same issue</li>



<li>Whether vulnerabilities propagate across microservices</li>



<li>If issues exist in deployed, production-facing assets</li>



<li>Which components introduce actual operational risk</li>



<li>True root causes that need to be fixed</li>
</ul>



<p>Correlation turns noise into intelligent, actionable signals. Instead of dozens of fragmented alerts, teams receive a single, contextualized insight that reflects the complete picture. This unified code‑to‑cloud intelligence closes visibility gaps,&nbsp;eliminates&nbsp;redundant triage, and enables smarter prioritization for faster, more efficient remediation.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">
<strong>Turning Contextual Insights&nbsp;Into&nbsp;Actionable Remediation</strong>&nbsp;</h2>



<p>Insight alone doesn’t reduce risk; action does. Risk reduction requires turning signals into fast, confident remediation. A vulnerability isn’t neutralized just because it’s been detected. It’s only eliminated when developers understand why it matters, where it originates, and how to fix it without wading through logs or deciphering cryptic scanner output.</p>



<p>This is where contextual risk intelligence stops being just a risk scoring exercise and becomes a practical remediation engine. When you combine exploitability, reachability, and cross‑scanner correlation, you give developers something they rarely get: findings they can trust. Instead of another generic “critical” label, they get true prioritization – and a clear explanation of why the issue is important, the exact code path, and where to remediate. And that clarity transforms how teams work.</p>



<p>Delivering these insights directly in the IDE is what makes them actionable. There’s no tool sprawl and no context switching. Developers don’t need to jump between dashboards or triage queues because the context comes to them, showing them precisely which part of the code needs attention.</p>



<p>Your AppSec stack doesn’t need more scanners or stricter thresholds; it just needs contextual intelligence. Contextual risk scoring cuts through the noise to surface genuine threats to your code. And when that intelligence reaches developers where they work, directly in their workflow, remediation becomes fast, confident, and focused.</p>



<p>The most effective teams aren&#8217;t the ones processing every alert; they’re the ones with enough context to confidently deprioritize most of them. When everything is labeled “critical,” protecting against true vulnerabilities requires the ability to distinguish real risk from noise.</p>
					
		
		
		
	</item>
		<item>
		<title>Securing Code No One Actually Wrote</title>
		<link>https://checkmarx.com/ai-llm-tools-in-application-security/securing-code-no-one-actually-wrote-2/</link>
		
		<dc:creator><![CDATA[Eran Kinsbruner]]></dc:creator>
		<pubDate>Wed, 18 Feb 2026 07:19:50 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Application Security Trends & Insights]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Checkmarx One]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[AI generated code]]></category>
		<category><![CDATA[AI Powered]]></category>
		<category><![CDATA[developer assist]]></category>
		<category><![CDATA[IDE Scanning]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=106710</guid>

					<description><![CDATA[If you don’t secure AI-generated code at the moment it's created, you've already missed the most effective opportunity to secure it. ]]></description>
										<content:encoded><![CDATA[<p>Your developers are accepting code they didn’t write and don’t fully understand. When vulnerabilities surface, no one knows why – or how to fix them.</p>



<p>Large language models (LLMs),&nbsp;coding agents, and AI-native IDEs are generating, completing, and refactoring the code that ships to production. In many organizations, AI sits at the center of software creation,&nbsp;determining&nbsp;what gets built and how quickly it reaches users.&nbsp;</p>



<p>Most teams see this as a productivity win. But AI-generated code doesn’t just accelerate development. It changes the scale of software creation and, with it, the scope of application risk.</p>



<p>Traditional AppSec&nbsp;tools were created with the&nbsp;assumption that&nbsp;humans wrote&nbsp;code&nbsp;and security reviewed it afterward.&nbsp;But when AI generates code continuously&nbsp;and&nbsp;autonomously, at a speed no traditional security process can keep up&nbsp;with,&nbsp;vulnerabilities spread long before a scanner ever runs. Risk is compounding while security struggles to catch up.&nbsp;</p>



<p>The reality is simple:&nbsp;<strong>if you&nbsp;don’t&nbsp;secure AI-generated code&nbsp;at the moment it&#8217;s created, you&#8217;ve already missed the most effective opportunity to secure it.&nbsp;</strong></p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">When No One Owns the Code&nbsp;</h2>



<p>For decades, application security depended on a clear chain of ownership: a developer wrote the code, understood its intent, and&nbsp;was responsible for&nbsp;fixing it when issues arose. This&nbsp;model assumed human authorship and accountability at every step. Today, that assumption no longer holds.&nbsp;</p>



<p>Instead of writing code line by line, developers increasingly prompt models, accept suggestions, and make light edits to AI-generated output. This dramatically accelerates delivery, but at the cost of context. Developers&nbsp;can’t&nbsp;fully explain why a piece of code exists, where it originated, or what assumptions it encodes.&nbsp;</p>



<p>This shift underpins what many now call “vibe coding”: a workflow optimized for speed, flow, and creativity. But as understanding erodes, so does security – because code that moves fast without clear intent is harder to reason about, review, and fix.</p>



<p>When no human&nbsp;truly understands&nbsp;or owns the code, accountability breaks down. And security programs built around human authorship are&nbsp;incompatible with this new reality.&nbsp;</p>



<figure class="wp-block-image size-full"><img decoding="async" width="936" height="261" src="https://checkmarx.com/wp-content/uploads/2026/02/image.png" alt="" class="wp-image-106711" srcset="https://checkmarx.com/wp-content/uploads/2026/02/image.png 936w, https://checkmarx.com/wp-content/uploads/2026/02/image-300x84.png 300w, https://checkmarx.com/wp-content/uploads/2026/02/image-768x214.png 768w, https://checkmarx.com/wp-content/uploads/2026/02/image-400x112.png 400w" sizes="(max-width: 936px) 100vw, 936px" /></figure>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">Welcome to the Agentic Development Lifecycle (ADLC)&nbsp;</h2>



<p>Modern development is no longer purely human-driven. In the Agentic Development Lifecycle (ADLC), humans and autonomous agents collaborate at machine speed, requiring trust in AI-generated code and guardrails that protect security without slowing delivery.&nbsp;</p>



<p>For now, humans&nbsp;remain&nbsp;in the loop. But as trust in AI grows, human involvement will naturally decrease, raising a critical question for security teams: what happens when the loop gets smaller?&nbsp;</p>



<p>As fewer human eyes review code and more decisions are made autonomously, traditional security assumptions break down. The idea that someone will “catch it later” becomes not just unrealistic, but dangerous.&nbsp;</p>



<p>Compounding this shift is a growing myth that AI produces clean, secure code by default.&nbsp;&nbsp;</p>



<p>The data tells&nbsp;a very different&nbsp;story.&nbsp;</p>



<p>Research from BaxBench shows that Claude 4 Sonnet generates insecure code in over 24% of tested scenarios. And a Stanford study found that developers using AI assistants wrote significantly less secure code than those without access to AI – but were more likely to believe their <em>code was secure</em>.</p>



<p>AI doesn’t eliminate risk; it industrializes it. Here’s what that looks like in practice:</p>



<ul class="wp-block-list">
<li>
<strong>Hallucinated logic</strong>: Code that compiles and passes tests but encodes incorrect assumptions or missing validation.&nbsp;</li>



<li>
<strong>Dependency amplification:</strong>&nbsp;AI-suggested packages introduced without awareness of provenance or exploit history.&nbsp;</li>



<li>
<strong>Insecure defaults at scale:</strong>&nbsp;AI reproduces insecure patterns faster than teams can review or correct them.&nbsp;</li>



<li>
<strong>Context loss:</strong>&nbsp;Generated code that&nbsp;diverges from&nbsp;internal standards because the AI lacks organizational context.&nbsp;</li>
</ul>
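<p>To make “insecure defaults at scale” concrete, here is one pattern assistants are known to reproduce when a developer hits a certificate error – illustrative only, using Python’s standard ssl module:</p>

```python
import ssl

# Insecure "fix" an assistant might suggest to silence a certificate
# error: it runs, passes happy-path tests, and silently permits
# man-in-the-middle attacks.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False
insecure_ctx.verify_mode = ssl.CERT_NONE   # accepts ANY certificate

# Secure version: keep default verification and fix the trust store
# (e.g. install the missing CA) instead of disabling checks.
secure_ctx = ssl.create_default_context()  # verifies hostname + chain

print(insecure_ctx.verify_mode == ssl.CERT_NONE)    # True
print(secure_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

<p>Both snippets compile and “work,” which is exactly why review alone tends to miss this class of issue.</p>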



<p>There’s&nbsp;a clear pattern. As AI usage increases,&nbsp;code is delivered more&nbsp;quickly,&nbsp;but&nbsp;issues are introduced at the same pace.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">The&nbsp;Software Supply Chain You&nbsp;Can’t&nbsp;See&nbsp;</h2>



<p>As AI becomes more embedded in development workflows, the software supply chain expands well beyond source code and open-source libraries. Today’s applications increasingly depend on foundation models, fine-tuned LLMs, coding agents, IDE extensions, MCP servers, prompts, embeddings, and configuration artifacts.</p>



<p>Each of these components introduces its own attack surface. Unlike traditional dependencies, many of them&nbsp;operate&nbsp;as black boxes, offering little visibility into how decisions are made or what assumptions are embedded.&nbsp;</p>



<p>This creates a fundamental challenge for security teams. You&nbsp;can’t&nbsp;protect what you&nbsp;can’t&nbsp;see, and without clear visibility into which AI components are active and how&nbsp;they’re&nbsp;used, organizations are left&nbsp;placing trust in systems they&nbsp;don’t&nbsp;fully understand.&nbsp;</p>



<p>This&nbsp;isn’t&nbsp;just a larger supply chain.&nbsp;It’s&nbsp;a less transparent one.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">Scanning After the Fact&nbsp;Doesn’t&nbsp;Work&nbsp;</h2>



<p>Despite these changes, many organizations still rely on post-commit scanning and downstream security gates. These approaches were designed for incremental development and human-paced review cycles – assumptions that no longer hold in AI-driven development.</p>



<p>When code is generated continuously and autonomously, security applied&nbsp;post-commit becomes reactive by definition.&nbsp;Findings arrive long after decisions were made, forcing developers to context-switch, rework AI-generated code they did not&nbsp;write, and interpret results that no longer reflect original intent.&nbsp;</p>



<p>At AI speed, reactive security&nbsp;quickly loses effectiveness.&nbsp;</p>



<p>In an AI-driven development model, the&nbsp;<strong>only reliable point of control is the moment code is created</strong>. Once AI-generated code is accepted and committed, risk has already propagated across repositories, pipelines, and services.&nbsp;</p>



<p>This requires a fundamental shift in how application security operates. Instead of scanning code after the fact, security must evaluate code, intent, and context in real time, operating at the same speed as the AI that generates the code.</p>



<p>In this model,&nbsp;<em>prevention&nbsp;</em>replaces&nbsp;<em>detection&nbsp;</em>as the primary&nbsp;objective.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-5">The IDE&nbsp;Is the New Perimeter&nbsp;</h2>



<p>As AI-native IDEs take on more work – writing code, choosing dependencies, making architectural decisions – they become the place where software decisions are made.&nbsp;This is where trust is&nbsp;built&nbsp;or&nbsp;broken.&nbsp;Every AI-assisted action can introduce risk, but security tools that run outside the IDE typically catch problems too late to matter.&nbsp;</p>



<p>Building security directly into the IDE allows teams to catch&nbsp;problems&nbsp;the moment code is written. Security becomes part of everyday development, not a separate step at the end.&nbsp;</p>



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="508" height="505" src="https://checkmarx.com/wp-content/uploads/2026/02/image-1.png" alt="" class="wp-image-106714" style="width:421px;height:auto" srcset="https://checkmarx.com/wp-content/uploads/2026/02/image-1.png 508w, https://checkmarx.com/wp-content/uploads/2026/02/image-1-300x298.png 300w, https://checkmarx.com/wp-content/uploads/2026/02/image-1-150x150.png 150w, https://checkmarx.com/wp-content/uploads/2026/02/image-1-302x300.png 302w" sizes="(max-width: 508px) 100vw, 508px" /></figure>



<p>That shift has&nbsp;a&nbsp;measurable&nbsp;impact.&nbsp;When security issues are prevented in real time and&nbsp;pre-commit, risky code is stopped before it ever exists. In&nbsp;fact, embedding security directly into the IDE&nbsp;<strong>eliminates&nbsp;</strong><strong>90% of security&nbsp;</strong><strong>rework</strong>. Most issues never enter the backlog, never fail CI, and never become production risks.&nbsp;</p>



<p>This isn’t about fixing problems faster; it’s about eliminating entire categories of work that only exist when vulnerabilities are discovered after the fact. Once issues slip past commit, developers are pulled into a familiar cycle:</p>



<ul class="wp-block-list">
<li>Context switching and rebuilding mental models&nbsp;</li>



<li>Debugging root causes in unfamiliar or AI-generated code&nbsp;</li>



<li>Fixing and refactoring under delivery pressure&nbsp;</li>



<li>Rerunning builds and waiting on CI pipelines&nbsp;</li>



<li>Back-and-forth PR comments and security reviews</li>
</ul>



<p>Catching issues early in the IDE removes that downstream work entirely. Problems are surfaced inline, explained in developer-friendly terms, and resolved while the code and context are still fresh.&nbsp;</p>



<p>Organizations that succeed will not be those that blindly trust AI-generated code, but those that recognize a harder truth:&nbsp;<strong>AI-generated code moves fast only when security moves with it.</strong>&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-6">Agentic AppSec in Practice&nbsp;</h2>



<p>Checkmarx&nbsp;Developer Assist was built for this exact&nbsp;shift. It embeds agentic application security directly inside the IDE,&nbsp;operating&nbsp;alongside AI-coding tools to&nbsp;detect&nbsp;risk and prevent vulnerabilities&nbsp;from&nbsp;the moment code is created.&nbsp;</p>



<p>By catching and fixing issues pre-commit,&nbsp;Checkmarx&nbsp;Developer Assist helps teams&nbsp;eliminate&nbsp;rework, reduce noise, and move at AI speed without&nbsp;sacrificing&nbsp;security.&nbsp;</p>



<p>If your security strategy still acts like your&nbsp;code is written by humans,&nbsp;it’s&nbsp;time to rethink&nbsp;your stack.&nbsp;</p>



<p>You can try&nbsp;<a href="https://checkmarx.dev/free-trial/">Checkmarx&nbsp;Developer Assist for <strong>free</strong> </a>and see what real-time, IDE-native AppSec looks like in practice.&nbsp;</p>]]></content:encoded>
					
		
		
		
		<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/02/image-150x150.png" />
		<media:content url="https://checkmarx.com/wp-content/uploads/2026/02/image.png" medium="image">
			<media:title type="html">image</media:title>
			<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/02/image-150x150.png" />
		</media:content>
		<media:content url="https://checkmarx.com/wp-content/uploads/2026/02/image-1.png" medium="image">
			<media:title type="html">image</media:title>
			<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/02/image-1-150x150.png" />
		</media:content>
	</item>
		<item>
		<title>Goodbye SDLC, Hello ADLC: How Will AppSec Adapt? </title>
		<link>https://checkmarx.com/ai-llm-tools-in-application-security/goodbye-sdlc-hello-adlc-how-will-appsec-adapt/</link>
		
		<dc:creator><![CDATA[Checkmarx Team]]></dc:creator>
		<pubDate>Mon, 26 Jan 2026 18:49:33 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Compliance & Secure SDLC Frameworks]]></category>
		<category><![CDATA[AI generated code]]></category>
		<category><![CDATA[AI in Engineering]]></category>
		<category><![CDATA[SDLC]]></category>
		<category><![CDATA[Secure SDLC]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=106518</guid>

					<description><![CDATA[Application security, as it exists today, was shaped by the Software Development Lifecycle.&#160; The SDLC assumed that code was written primarily by humans, progressed through recognizable phases, and paused naturally at points where review made sense.&#160;&#160; Security controls were layered onto those pauses&#160;&#8211;&#160;during pull requests, before releases, or after&#160;builds &#8211; because&#160;that’s where time existed to [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Application security, as it exists today, was shaped by the Software Development Lifecycle.&nbsp;</p>



<p>The SDLC assumed that code was written primarily by humans, progressed through recognizable phases, and paused naturally at points where review made sense.&nbsp;&nbsp;</p>



<p>Security controls were layered onto those pauses&nbsp;&#8211;&nbsp;during pull requests, before releases, or after&nbsp;builds &#8211; because&nbsp;that’s where time existed to apply them.&nbsp;</p>



<p>Those assumptions&nbsp;are becoming obsolete.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">
<strong>The SDLC Mental Model Is Breaking</strong>&nbsp;</h2>



<p>AI has changed how code comes into existence.&nbsp;An increasing number&nbsp;of modern codebases are now generated,&nbsp;modified, and refactored continuously, often&nbsp;without a clear distinction between “writing,” “fixing,” and “improving.”&nbsp;</p>



<p>The lifecycle no longer advances in steps&nbsp;or clear breaks.&nbsp;It loops.&nbsp;</p>



<p>Once that happens, many of the places where AppSec traditionally operated – stage gates, handoffs, centralized review queues – lose their effectiveness. They weren’t designed for continuous change, and they weren’t designed for machine-paced production.</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">
<strong>What ADLC Actually Describes</strong>&nbsp;</h2>



<p>The Agentic Development Lifecycle (ADLC) is the methodology emerging to describe this new reality.</p>



<p>In an ADLC environment, humans and AI systems work together to produce and evolve software continuously. Developers guide intent and direction, while AI systems generate, transform, and extend code at a rate that no longer maps cleanly to&nbsp;phases or milestones.&nbsp;</p>



<p>This changes the unit of work AppSec&nbsp;has to&nbsp;reason&nbsp;about:&nbsp;Instead of releases or pull requests, security&nbsp;has to&nbsp;contend with a constant stream of small, fast-moving changes.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">
<strong>Why Existing AppSec Models&nbsp;Struggle</strong>&nbsp;</h2>



<p>Most AppSec programs were built around interruption: stop here, scan there, review later. That approach assumes development can afford to wait.&nbsp;</p>



<p>In&nbsp;<a href="https://www.linkedin.com/pulse/welcome-aidlc-new-ai-native-lifecycle-software-eran-kinsbruner-jgc6e/" target="_blank" rel="noreferrer noopener">ADLC</a>, waiting becomes&nbsp;part of&nbsp;the risk.&nbsp;</p>



<p>Centralized security teams cannot manually review the volume of code produced by AI-assisted workflows, and stage-based tooling struggles to stay relevant when code is rewritten multiple times before it ever reaches a traditional checkpoint.&nbsp;</p>



<p>There’s&nbsp;also a growing false sense of safety around AI-assisted development.&nbsp;&nbsp;</p>



<p>Because AI-generated code often looks clean, idiomatic, and well-structured,&nbsp;it’s&nbsp;easy to assume it is safer than hand-written code.&nbsp;&nbsp;</p>



<p>In practice, it&nbsp;frequently&nbsp;reproduces insecure patterns, makes inconsistent trust assumptions, and introduces vulnerabilities that are harder to spot precisely because they appear reasonable.&nbsp;</p>



<p>The impact is felt on both sides of the organization: Security teams lose&nbsp;timely&nbsp;visibility and effective control as AI accelerates code creation beyond traditional review models.&nbsp;&nbsp;</p>



<p>At the same time, developers experience security as an after-the-fact&nbsp;interruption,&nbsp;flagging issues in code that&nbsp;has&nbsp;already changed.&nbsp;&nbsp;</p>



<p>ADLC exposes a fundamental mismatch: tools designed for sequential development cannot keep pace with AI-driven workflows without compromising either security or speed.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">
<strong>What AppSec Has to Become</strong>&nbsp;</h2>



<p>If development is continuous, security has to operate continuously as well.&nbsp;</p>



<p>That means security systems need to evaluate code as it is created and&nbsp;modified, not after the fact. They need to understand context&nbsp;&#8211;&nbsp;how a piece of code fits into a broader&nbsp;system&nbsp;&#8211;&nbsp;and&nbsp;they need to act without relying on human intervention for every decision.&nbsp;</p>



<p>This is where agentic AI becomes necessary rather than aspirational. Security systems need the ability to&nbsp;reason about&nbsp;changes, apply organizational policies automatically, and persist alongside development rather than responding to snapshots.&nbsp;</p>



<p>In practical terms, this pushes AppSec closer to where development decisions are&nbsp;made:&nbsp;inside the IDE and before changes are committed.&nbsp;It’s&nbsp;where&nbsp;developers’&nbsp;convenience&nbsp;and&nbsp;necessity&nbsp;intersect, because&nbsp;that’s where intent is expressed and where correction is still cheap.&nbsp;</p>
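<p>A crude way to picture enforcement at that point – far simpler than an agentic tool that reasons about context, and purely hypothetical – is a pre-commit check that inspects only the lines a change adds:</p>

```python
import re

# Toy denylist; a real agentic tool reasons about code context rather
# than matching regexes. Patterns and messages here are illustrative.
INSECURE_PATTERNS = {
    r"verify\s*=\s*False": "TLS verification disabled",
    r"\beval\s*\(": "eval() on dynamic input",
    r"(?i)password\s*=\s*[\"'][^\"']+[\"']": "hardcoded credential",
}

def check_diff(diff_text):
    """Return (added line, reason) pairs for risky lines in a unified diff."""
    hits = []
    for line in diff_text.splitlines():
        # Only inspect added lines; skip the '+++ file' header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern, reason in INSECURE_PATTERNS.items():
            if re.search(pattern, line):
                hits.append((line[1:].strip(), reason))
    return hits

# In a real hook this text would come from `git diff --cached`.
sample_diff = """\
+++ b/app.py
+resp = session.get(url, verify=False)
 unchanged_line()
+name = load_name()
"""

for code, reason in check_diff(sample_diff):
    print(f"blocked: {reason}: {code}")
```

<p>The point isn’t the regexes – it’s the placement: the check runs where intent is expressed and correction is still cheap.</p>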



<h2 class="wp-block-heading article-anchor" id="article-anchor-5">
<strong>The Developer Workflow Is Changing</strong>&nbsp;</h2>



<p>As AI takes on more of the mechanical aspects of coding, developers spend more time directing,&nbsp;validating, and integrating output. Security decisions increasingly happen implicitly, through what developers accept, reject, or&nbsp;modify.&nbsp;</p>



<p>Independent research such as the&nbsp;<a href="https://baxbench.com/" target="_blank" rel="noreferrer noopener">BaxBench benchmark</a>, which measures how well large language models generate backend applications that are both functionally correct and secure, shows a stark reality:&nbsp;&nbsp;</p>



<p>Even flagship models frequently produce code that appears to work yet still contains security vulnerabilities. In the BaxBench evaluation, many generated programs that passed functional tests still failed security checks when exposed to expert-designed exploits, indicating that correctness and security don’t automatically coincide in AI-generated outputs.</p>



<p>AppSec has to align with that reality.&nbsp;Guidance that arrives late or requires developers to context-switch will be ignored, regardless of policy. Guidance that arrives in-line, with enough context to be actionable, has a chance to influence outcomes at scale.&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img decoding="async" width="936" height="624" src="https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc.webp" alt="Appsec in SDLC vs. ADLC
" class="wp-image-106521" srcset="https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc.webp 936w, https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc-300x200.webp 300w, https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc-768x512.webp 768w, https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc-878x585.webp 878w, https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc-400x267.webp 400w" sizes="(max-width: 936px) 100vw, 936px" /><figcaption class="wp-element-caption">AppSec in SDLC vs. ADLC<br></figcaption></figure>
</div>


<p>This&nbsp;doesn’t&nbsp;eliminate&nbsp;governance. Organizational standards, risk tolerances, and compliance requirements still matter. What changes is how they are enforced: automatically and continuously, rather than episodically and manually.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-6">
<strong>Organizational&nbsp;Consequences</strong>&nbsp;</h2>



<p>In many organizations, this shift is already reshaping responsibility boundaries.&nbsp;AppSec capabilities are beginning to intersect more closely with platform engineering and emerging AI engineering teams, reflecting the fact that security, developer experience, and AI systems are now tightly coupled.&nbsp;</p>



<p>Security becomes less about approval and more about enablement,&nbsp;providing guardrails that&nbsp;operate&nbsp;at the same speed as development rather than trying to slow it down.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-7">
<strong>Closing</strong>&nbsp;</h2>



<p><strong>ADLC&nbsp;doesn’t&nbsp;leave much room for AppSec to catch up later.&nbsp;</strong>Code is produced continuously, changes compound quickly, and delayed feedback becomes indistinguishable from no feedback at all.&nbsp;</p>



<p>That reality forces a simple conclusion:&nbsp;<strong>security has to operate inside the development loop itself</strong>, aligned to how software is actually produced in an&nbsp;AI-driven lifecycle.&nbsp;</p>



<p><strong>Checkmarx.dev&nbsp;offers a view on what ADLC-oriented security looks like in practice, with&nbsp;Checkmarx&nbsp;Developer Assist&nbsp;&#8211;</strong>&nbsp;an agentic security linter that&nbsp;operates&nbsp;directly inside supported IDEs&nbsp;to evaluate risk as code is written &#8211; before commits, pipelines, or handoffs exist.&nbsp;&nbsp;</p>



<p>Developers and AI engineers can try it hands-on through a free trial in IDEs like VS Code, Cursor, Windsurf, and AWS Kiro.&nbsp;</p>



<p>If SDLC framed how AppSec worked for the last decade, ADLC will define what works next.&nbsp;</p>



<p><strong>Learn more and get your free trial at&nbsp;</strong><a href="https://checkmarx.dev/" target="_blank" rel="noreferrer noopener"><strong>https://checkmarx.dev</strong></a><strong></strong>&nbsp;</p>






<p><em>This article was originally <a href="https://www.linkedin.com/pulse/goodbye-sdlc-hello-adlc-how-appsec-adapt-checkmarx-hkkne/?trackingId=8U40d%2BjKQ8SdXULyWOl2mw%3D%3D">published</a> on Checkmarx&#8217;s LinkedIn Newsletter, &#8220;The Monthly Checkup&#8221;. </em></p>



]]></content:encoded>
					
		
		
		
		<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc-150x150.webp" />
		<media:content url="https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc.webp" medium="image">
			<media:title type="html">appsec adlc vs sdlc</media:title>
			<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/01/appsec-adlc-vs-sdlc-150x150.webp" />
		</media:content>
	</item>
		<item>
		<title>“Stranger Things” Happen With Today’s Software  </title>
		<link>https://checkmarx.com/ai-llm-tools-in-application-security/stranger-things-happen-with-todays-software/</link>
		
		<dc:creator><![CDATA[Shane McLaughlin]]></dc:creator>
		<pubDate>Tue, 20 Jan 2026 21:12:25 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Application Security Trends & Insights]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Checkmarx One]]></category>
		<category><![CDATA[Secure Coding Best Practices for Developers]]></category>
		<category><![CDATA[AI generated code]]></category>
		<category><![CDATA[AppSec]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=106377</guid>

					<description><![CDATA[Today’s software stack looks like the Upside Down: complex, unpredictable, and full of hidden risk. Developers are the new defenders. Discover how a developer-first application security approach, powered by agentic AI, helps secure code as it’s created.]]></description>
										<content:encoded><![CDATA[<p>For years, software development&nbsp;was&nbsp;more or less&nbsp;predictable. Developers built the stack. Security guarded the stack. Operations kept it standing long enough to make it to Friday.&nbsp;&nbsp;<br><br>Life&nbsp;seemed&nbsp;normal. Almost like the opening of&nbsp;<em>Stranger Things</em>,&nbsp;when the biggest worries of teenagers in&nbsp;the town of&nbsp;Hawkins&nbsp;were&nbsp;getting a date to the Prom or&nbsp;whether&nbsp;there’d&nbsp;be another&nbsp;<em>Friday&nbsp;the 13th</em>&nbsp;sequel.&nbsp;That ominous netherworld of chaos known as the “Upside Down” had yet to be opened.&nbsp;</p>



<p>But then things changed, like the moment Will Byers disappeared and Hawkins realized something was very wrong. The stack changed from a simple concept that could be sketched on a whiteboard into a sprawling ecosystem of interconnected services and unpredictable components. Applications pulled in open-source dependencies with their own release calendars. Temporary workarounds quietly hardened into permanent architecture.<br><br>Today’s network applications can be like the “Upside Down”: sprawling, unpredictable, and capable of chaos if left unchecked. For developers and AppSec professionals, this complexity is a force multiplier for risk.<br><br>Checkmarx may not be a hoodie-wearing, Eggo-waffle-loving, telekinetic superhero like Eleven (brilliantly portrayed by Millie Bobby Brown). But it <em>does</em> deliver something close to a sixth sense for software: <a href="https://checkmarx.com/product/application-security-platform/" target="_blank" rel="noreferrer noopener">a unified, scalable security platform</a> that spots hidden issues early and stops vulnerabilities from ever crossing into production.</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">
<strong>Welcome to the Secure Side of the Upside Down </strong> </h2>



<p>The boundary between “shipping software” and “securing software” is now mostly imaginary, like the line between “front end” and “back end” in a microservices repo with 47 folders named “shared.” Developers aren’t being dragged into security the way the Demogorgon dragged Barb into the Upside Down. They are choosing it.</p>



<p>In Stack Overflow’s 2025 Developer Survey, when developers were asked what would turn them off or cause them to reject a technology, the #1 deal-breaker was “<a href="https://survey.stackoverflow.co/2025/work" target="_blank" rel="noreferrer noopener">security or privacy concerns</a>.” It outranked pricing, usability, and even the availability of better alternatives.&nbsp;&nbsp;</p>



<p>That’s not a minor preference. It’s a P0-level ticket hitting the Jira workflow like a global outage. Security is no longer optional. It is the ultimate deal-breaker.</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2"><strong>The New Hawkins Heroes: Developers as Defenders</strong></h2>



<p>A transformational shift is happening in software development: those who write the code are becoming responsible for defending it. Developers are the new defenders. <br> <br>Developers sit at the epicenter of risk, seeing firsthand how vulnerabilities creep in through everyday actions: IDEs, pull requests, CI/CD pipelines, and production deploys. Like performance and reliability, security is a critical component that can make or break a product. </p>



<p><a href="https://checkmarx.com/learn/ai-security/building-trust-in-ai-powered-code-generation-a-guide-for-secure-adoption/" target="_blank" rel="noreferrer noopener">With at least 41% of code being generated by AI</a> (and climbing), that shift doesn’t just accelerate velocity. It multiplies complexity. More components, more integrations, more code paths, and more chances for something small to become something serious. Every new link in the chain is a potential point of failure.</p>



<p>The&nbsp;good news? Developers are stepping up.&nbsp;</p>



<p>In GitLab’s 2024 Global&nbsp;DevSecOps&nbsp;Report,&nbsp;<a href="https://2631050.fs1.hubspotusercontent-na1.net/hubfs/2631050/GitLab%202024%20Global%20DevSecOps%20Report.pdf" target="_blank" rel="noreferrer noopener">58% of respondents agreed to some degree that they are primarily responsible for application security</a>.&nbsp;A&nbsp;2025 survey of 200&nbsp;global&nbsp;CISOs&nbsp;sponsored by&nbsp;Checkmarx&nbsp;found that “well over half&#8230;&nbsp;(56%) said&nbsp;<a href="https://securityboulevard.com/2025/05/ciso-survey-surfaces-shift-in-application-security-responsibilities/" target="_blank" rel="noreferrer noopener">most of their development teams are fully integrated into application security programs</a>, with 41% soliciting feedback from developers to improve security processes.”&nbsp;</p>



<p>Two important takeaways hide&nbsp;in this research.&nbsp;</p>



<p>First, developers&nbsp;aren’t&nbsp;“taking on security” as a philosophical statement.&nbsp;They’re&nbsp;doing it as a practical&nbsp;necessity because&nbsp;that’s&nbsp;where the leverage is. Security that arrives late arrives expensive.&nbsp;&nbsp;</p>



<p>Second, the shift doesn’t mean security teams fade into history like Steve Harrington’s big hair and high-waisted jeans. It means the operating model changes.<br>&nbsp;<br>AppSec teams still own standards, governance, and threat expertise, but more of the day-to-day prevention now happens upstream in developers’ workflows, where agentic AI for application security empowers them to code fast and keep shipping without compromise.<br>&nbsp;<br>Of course, this doesn’t mean security teams have less to do. Prevention is now critical early in the development life cycle, but security teams still face a constant stream of threats and issues to keep them busy. This evolution frees them up to think more strategically and focus on challenges like organizational risk, governance, and compliance. (By the way, Checkmarx research found that only <a href="https://checkmarx.com/press-releases/checkmarx-one-achieves-unprecedented-enterprise-adoption/" target="_blank" rel="noreferrer noopener">18% of organizations have AI governance policies in place.</a> That’s data for another episode, er, blog <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f60a.png" alt="😊" class="wp-smiley" style="height: 1em; max-height: 1em;" />.)&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">
<strong>Vecna-Proof Your Releases</strong> </h2>



<p>So, if you want developers to be more in step with security, the question isn’t “how do we convince them?”&nbsp;</p>



<p>The question is: how do we make secure behavior the easiest behavior?&nbsp;</p>



<p>We’re entering an era where intelligent agents assist developers in real time, catching risks as code is written, offering context-aware fixes, and reducing remediation times from hours to seconds. The agentic future of application security is our top priority at Checkmarx, focusing on developer-first application security with a <a href="https://checkmarx.com/product/developer-assist/" target="_blank" rel="noreferrer noopener">family of agents</a> that bring real-time, context-aware prevention. <br> <br>The stack is complicated, but our mission is simple: secure code as it’s created, so teams move fast without adding risk. It’s the same instinct that kept the <em>Stranger Things</em> crew alive: don’t wander into the dark without a plan, and don’t ignore the weird noise just because you’re in a hurry. <br> <br>Spoiler alert: in the <em>Stranger Things</em> finale, Joyce Byers (the amazing Winona Ryder) delivers well-earned justice with an axe to eliminate the evil Vecna, ruler of the Upside Down. In today’s software world, developers are the defenders, and Checkmarx is the protection they can wield to eliminate vulnerabilities before they turn your stack upside down. <br><a href="https://checkmarx.com/report-future-of-appsec-2025/" target="_blank" rel="noreferrer noopener">Learn more about the future of Application Security.</a></p>]]></content:encoded>
					
		
		
		
	</item>
		<item>
		<title>AI Query Builder for SAST: Now Generally Available </title>
		<link>https://checkmarx.com/blog/ai-query-builder-for-sast-now-generally-available/</link>
		
		<dc:creator><![CDATA[Dudu Gil]]></dc:creator>
		<pubDate>Thu, 15 Jan 2026 12:25:34 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[SAST]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[AI generated code]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=106350</guid>

					<description><![CDATA[When we&#160;first introduced AI Query Builder&#160;in early 2023, AI-assisted development was just beginning to reshape how teams wrote code. Fast forward to 2026, and the landscape has&#160;completely&#160;transformed.&#160; Today, developers&#160;don’t&#160;just use AI to write&#160;code,&#160;they’re&#160;relying&#160;on&#160;AI coding assistants like GitHub Copilot, Cursor, and Windsurf&#160;to&#160;generate entire functions, suggest architecture patterns, and accelerate feature development.&#160;&#160; To support this shift, the&#160;security [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>When we&nbsp;<a href="https://checkmarx.com/blog/introducing-ai-query-builder-for-sast/" target="_blank" rel="noreferrer noopener">first introduced AI Query Builder</a>&nbsp;in early 2023, AI-assisted development was just beginning to reshape how teams wrote code. Fast forward to 2026, and the landscape has&nbsp;completely&nbsp;transformed.&nbsp;</p>



<p>Today, developers&nbsp;don’t&nbsp;just use AI to write&nbsp;code,&nbsp;they’re&nbsp;relying&nbsp;on&nbsp;AI coding assistants like GitHub Copilot, Cursor, and Windsurf&nbsp;to&nbsp;generate entire functions, suggest architecture patterns, and accelerate feature development.&nbsp;&nbsp;</p>



<p>To support this shift, the&nbsp;security industry&nbsp;must&nbsp;evolve&nbsp;– and&nbsp;Checkmarx&nbsp;is leading the way with new&nbsp;AI capabilities&nbsp;like&nbsp;Checkmarx&nbsp;One Assist&nbsp;and&nbsp;Checkmarx&nbsp;Developer Assist.&nbsp;These&nbsp;tools autonomously prevent and remediate vulnerabilities across the SDLC, catching&nbsp;issues&nbsp;pre-commit and enforcing security policies throughout pipelines.&nbsp;</p>



<p>While agentic AI can automate much of application security,&nbsp;there’s&nbsp;still&nbsp;a critical piece of security that teams need: the ability to customize SAST detection&nbsp;to&nbsp;the unique patterns, frameworks, and business logic&nbsp;of their&nbsp;applications.&nbsp;</p>



<p>That’s&nbsp;why AI Query Builder for SAST, now generally available, is more&nbsp;important&nbsp;than ever.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">
<strong>Why SAST Customization Matters More in 2026</strong>&nbsp;</h2>



<p>When developers use AI assistants to&nbsp;build&nbsp;applications,&nbsp;they end up&nbsp;creating&nbsp;highly&nbsp;diverse implementations across custom frameworks, internal libraries, and organization-specific patterns that&nbsp;standard&nbsp;security rules&nbsp;often miss.&nbsp;</p>



<p>Out-of-the-box SAST queries are built to detect common vulnerabilities in standard code patterns&nbsp; &#8211; they’re&nbsp;excellent&nbsp;at&nbsp;finding SQL&nbsp;injections&nbsp;in typical database calls or XSS in well-known frameworks. But your organization&nbsp;doesn’t&nbsp;just&nbsp;use&nbsp;standard patterns&nbsp;anymore.&nbsp;You&nbsp;have:&nbsp;</p>



<p><strong>Custom security frameworks</strong>&nbsp;that sanitize inputs in ways&nbsp;traditional&nbsp;SAST&nbsp;doesn’t&nbsp;recognize,&nbsp;leading to false positives that waste developer time&nbsp;</p>



<p><strong>Internal libraries</strong>&nbsp;that introduce organization-specific vulnerabilities that standard queries miss,&nbsp;creating&nbsp;coverage gaps</p>



<p><strong>Business-critical logic</strong> with unique security requirements that need tailored detection, such as industry-specific handling of PII patterns. When SAST tools aren’t tuned to this reality, two things happen:&nbsp;<br></p>



<ul class="wp-block-list">
<li>
<strong>False positives erode trust</strong>. Developers get flagged for using an approved sanitizer because SAST doesn’t know it exists. Too many false positives cause developers to ignore security findings.
</li>



<li>
<strong>Coverage gaps leave vulnerabilities</strong>.&nbsp;Your team built a custom authentication system, but your SAST can’t detect flaws in it, so vulnerabilities slip through to production.
</li>
</ul>



<p>The traditional solution to these problems was to hire SAST experts who understood query languages and security patterns and then wait for them to manually write custom detection rules. This solution doesn&#8217;t scale – especially when AI is accelerating development velocity and security teams are already stretched thin.</p>



<p>Something needed to change.</p>



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="669" height="847" src="https://checkmarx.com/wp-content/uploads/2026/01/image-1.png" alt="" class="wp-image-106351" style="width:362px;height:auto" srcset="https://checkmarx.com/wp-content/uploads/2026/01/image-1.png 669w, https://checkmarx.com/wp-content/uploads/2026/01/image-1-237x300.png 237w, https://checkmarx.com/wp-content/uploads/2026/01/image-1-462x585.png 462w" sizes="(max-width: 669px) 100vw, 669px" /></figure>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">
<strong>The Value: Making Security Expertise Scalable</strong>&nbsp;</h2>



<p>AI Query Builder removes the&nbsp;expertise&nbsp;barrier that has long limited SAST customization.&nbsp;</p>



<p><strong>For security teams</strong>, this means you can respond to threats at the speed your organization moves. When developers adopt a new internal framework, you don’t need to wait weeks for a query expert to write new detection rules. You describe what you want to detect, and AI generates the CxQL query in minutes. Want to iterate on it or test against your codebase? No problem – and you’ll still be able to deploy it the same day. </p>



<p><strong>For developers</strong>, this means <em>fewer </em>false positives and <em>more </em>relevant security guidance. When your security team can quickly fine-tune SAST to understand your code patterns (sanitizers, frameworks, security controls), the false positive flags stop. And the findings you do get are more likely to be real issues that need your attention. </p>



<p><strong>For AppSec managers</strong>, this means you can scale security coverage without scaling&nbsp;headcount. Your team&nbsp;doesn’t&nbsp;need&nbsp;deep&nbsp;CxQL&nbsp;expertise&nbsp;to create custom queries,&nbsp;enabling&nbsp;more people to contribute to security tuning&nbsp;and make&nbsp;your program more responsive and comprehensive.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">
<strong>Checkmarx&#8217;s&nbsp;AI Journey Since 2023</strong>&nbsp;</h2>



<p>When we&nbsp;launched&nbsp;early access&nbsp;to&nbsp;AI Query Builder&nbsp;in&nbsp;2023,&nbsp;we already saw how rapidly AI would transform security tooling&nbsp;–&nbsp;and&nbsp;since then,&nbsp;we’ve&nbsp;continued&nbsp;to push that AI innovation forward across our platform.&nbsp;&nbsp;</p>



<p>We introduced AI-powered remediation that automatically suggests fixes for vulnerabilities across SAST, SCA, secrets, and IaC. Our agentic AI capabilities, Checkmarx One Assist and Checkmarx Developer Assist, autonomously prevent and remediate security issues, from real-time protection in the IDE to continuous policy enforcement across CI/CD pipelines. AI Query Builder was ahead of its time. Today, in a world where AI touches every part of the development and security lifecycle, it’s exactly the right capability at exactly the right time.&nbsp;</p>



<p>While AI agents automate prevention and remediation, AI Query Builder lets security teams customize the intelligence behind that detection itself. It ensures that your SAST scans – whether triggered manually, in CI/CD, or as part of broader security workflows – understand your unique code patterns and security requirements.&nbsp;</p>



<p>Because AI in security isn’t just about automation; it’s about <strong>democratizing expertise</strong> – making it possible for more people on your team to do work that previously required specialized knowledge, enabling security to scale with development velocity instead of being perpetually behind.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">
<strong>How AI Query Builder Works</strong>&nbsp;</h2>



<p>AI Query Builder is&nbsp;built for SAST query creation.&nbsp;</p>



<h3 class="wp-block-heading">
<strong>The Basic Workflow</strong>&nbsp;</h3>



<p><strong>1. Describe the security concern in natural language</strong>&nbsp;</p>



<p>Instead of writing&nbsp;CxQL&nbsp;syntax, you describe what you want to detect. For example:&nbsp;</p>



<ul class="wp-block-list">
<li>&#8220;Detect SQL injection vulnerabilities in our custom&nbsp;DatabaseWrapper&nbsp;class when user input flows into the&nbsp;executeQuery&nbsp;method”&nbsp;</li>



<li>&#8220;Find authentication bypass risks when JWT tokens are validated using our internal TokenValidator library&#8221;</li>



<li>&#8220;Identify command injection when shell commands are constructed using string concatenation in our deployment scripts&#8221;&nbsp;</li>
</ul>
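<p>To make the first description concrete, here is a hedged sketch of the code pattern such a query would target. The <code>DatabaseWrapper</code> and <code>executeQuery</code> names come from the example description above and are illustrative, not a real API.</p>

```python
# Illustrative sketch of the pattern the first description targets.
# DatabaseWrapper / executeQuery are hypothetical names from the text.
class DatabaseWrapper:
    def executeQuery(self, sql: str) -> str:
        # Stand-in for a real database call; returns the SQL it would run.
        return sql

def find_user(db: DatabaseWrapper, username: str) -> str:
    # Vulnerable: user input concatenated straight into the SQL text --
    # exactly the source-to-sink flow the custom query should flag.
    return db.executeQuery("SELECT * FROM users WHERE name = '" + username + "'")

def find_user_safe(db: DatabaseWrapper, username: str) -> str:
    # Safe variant: a parameterized query keeps input out of the SQL string
    # (parameters bound separately), so a well-tuned rule should stay quiet.
    return db.executeQuery("SELECT * FROM users WHERE name = %s")
```

<p>A query generated from that description should flag <code>find_user</code> while leaving <code>find_user_safe</code> alone.</p>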



<p><strong>2. AI generates the&nbsp;CxQL&nbsp;query</strong>&nbsp;</p>



<p>The system translates your description into proper query syntax, understanding:&nbsp;</p>



<ul class="wp-block-list">
<li>Data flow analysis (how user input moves through your code)</li>



<li>Sanitization patterns (what makes input safe or unsafe)</li>



<li>Framework-specific APIs (your custom classes and methods)</li>



<li>Security patterns (what makes something a vulnerability)&nbsp;</li>
</ul>



<p>You can then adapt or customize as needed.&nbsp;</p>



<p><strong>3. Test against your codebase</strong>&nbsp;</p>



<p>The generated query runs immediately against your actual code, so you see real results, not theoretical examples. This helps you confirm that the query catches what you want without flagging false positives.&nbsp;</p>



<p><strong>4. Refine and iterate</strong>&nbsp;</p>



<p>If the query&nbsp;isn&#8217;t&nbsp;quite right, you can:&nbsp;</p>



<ul class="wp-block-list">
<li>Adjust your natural language description and regenerate</li>



<li>Manually edit the&nbsp;CxQL&nbsp;for&nbsp;finetuned&nbsp;control</li>



<li>Test different variations to find the best balance of coverage and precision&nbsp;</li>
</ul>



<p><strong>5. Deploy to your scanning presets</strong>&nbsp;</p>



<p>Once&nbsp;validated, the custom query becomes part of your SAST scanning&nbsp;configuration,&nbsp;and every&nbsp;subsequent&nbsp;scan&nbsp;can use this&nbsp;customized detection logic.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-5">
<strong>What Makes This Different?</strong>&nbsp;</h2>



<p>Most &#8220;AI for&nbsp;code&#8221; tools are general-purpose language models trying to generate any kind of code.&nbsp;But&nbsp;AI Query Builder is specialized:&nbsp;</p>



<p><strong>Domain-specific intelligence</strong>: Trained specifically on security patterns and&nbsp;CxQL&nbsp;syntax, not general-purpose coding&nbsp;</p>



<p><strong>Context-aware</strong>: Understands the relationship between sources (user input), sanitizers (validation), and sinks (dangerous operations)&nbsp;</p>



<p><strong>Framework-flexible</strong>: Can adapt to your custom frameworks and libraries, not just public ones&nbsp;</p>



<p><strong>Integration-native</strong>: Works directly in&nbsp;Checkmarx&nbsp;One. No export/import workflows, no separate tools to learn</p>



<p>For example, when you say &#8220;SQL injection in our custom database wrapper,&#8221; the system knows that it needs to:&nbsp;</p>



<ol class="wp-block-list">
<li>Identify&nbsp;where user input enters your application&nbsp;</li>



<li>Trace data flow through your specific&nbsp;DatabaseWrapper&nbsp;class&nbsp;</li>



<li>Check if input passes through sanitization before reaching the database call&nbsp;</li>



<li>Generate a query that catches this specific pattern without flagging safe uses&nbsp;</li>
</ol>
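<p>The four steps above amount to a miniature taint analysis. As a toy sketch (vastly simplified compared to a real engine; the source, sanitizer, and sink names here are invented for illustration):</p>

```python
# Toy taint tracking mirroring the four steps: identify where input enters,
# trace its path, check for sanitization, and flag only the unsafe pattern.
SOURCES = {"request.args"}                # step 1: user input entry points
SANITIZERS = {"escape_sql"}               # step 3: what neutralizes input
SINK = "DatabaseWrapper.executeQuery"     # step 2: the dangerous call

def is_vulnerable(flow: list[str]) -> bool:
    """A 'flow' is the ordered list of calls a value passes through."""
    if not flow or flow[0] not in SOURCES or flow[-1] != SINK:
        return False  # not a source-to-sink path at all
    # Step 4: report only when no sanitizer sits between source and sink.
    return not any(step in SANITIZERS for step in flow[1:-1])

print(is_vulnerable(["request.args", "DatabaseWrapper.executeQuery"]))  # True
```

<p>Customizing a query is essentially editing those source, sanitizer, and sink sets to match your own frameworks.</p>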



<p>This level of specialization is what makes AI Query Builder reliable for production security work.&nbsp;</p>



<h3 class="wp-block-heading">
<strong>Example: Tuning for False Positive Reduction</strong>&nbsp;</h3>



<p>Picture this&nbsp;scenario:&nbsp;Your organization uses an in-house sanitization library,&nbsp;called&nbsp;SecureValidator,&nbsp;that properly prevents XSS attacks. However,&nbsp;your&nbsp;SAST&nbsp;doesn’t&nbsp;know&nbsp;about&nbsp;this&nbsp;library,&nbsp;so it flags every use as a potential vulnerability.&nbsp;</p>



<p><strong>Without AI Query Builder</strong>,&nbsp;you’d&nbsp;need to:&nbsp;</p>



<ol class="wp-block-list">
<li>Find a query expert who understands&nbsp;CxQL&nbsp;</li>



<li>Locate the existing XSS detection query&nbsp;</li>



<li>Manually add your&nbsp;SecureValidator&nbsp;methods to the sanitizer list&nbsp;</li>



<li>Test the modified query&nbsp;</li>



<li>Deploy it&nbsp;</li>
</ol>



<p>This would take hours – or even days – assuming you actually have someone with the expertise available.&nbsp;</p>



<p><strong>With AI Query Builder</strong>, you:&nbsp;</p>



<ol class="wp-block-list">
<li>Describe: &#8220;Update the XSS query to recognize&nbsp;SecureValidator.sanitizeHtml() as a valid sanitization method”</li>



<li>Generate the modified query&nbsp;</li>



<li>Test it&nbsp;immediately&nbsp;</li>



<li>Deploy&nbsp;</li>
</ol>



<p>This takes&nbsp;<em>minutes</em>. Your developers see the impact immediately, and security findings become more trustworthy.&nbsp;</p>
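<p>For intuition, here is what that tuning means at the code level. <code>SecureValidator.sanitizeHtml()</code> is the hypothetical in-house API from the scenario; the snippet only illustrates why the same code scans clean once the rule recognizes the sanitizer.</p>

```python
import html

class SecureValidator:
    """Stand-in for the in-house sanitization library in the scenario."""
    @staticmethod
    def sanitizeHtml(value: str) -> str:
        # Escapes the characters an XSS payload relies on.
        return html.escape(value, quote=True)

def show_profile(bio: str) -> str:
    # Before tuning: a default XSS rule flags this call because it does not
    # know sanitizeHtml() is a sanitizer. After the query update, the same
    # code scans clean -- the code never changes, only the detection rule.
    return f"<div>{SecureValidator.sanitizeHtml(bio)}</div>"

print(show_profile("<script>alert(1)</script>"))
# → <div>&lt;script&gt;alert(1)&lt;/script&gt;</div>
```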



<h3 class="wp-block-heading">
<strong>Example: Expanding Coverage for Custom Code</strong>&nbsp;</h3>



<p>And&nbsp;here’s&nbsp;another example:&nbsp;Your team built a custom authentication system with a method called&nbsp;AuthManager.validateSession(). You want to detect when session tokens are used without proper validation.&nbsp;</p>



<p><strong>Without AI Query Builder</strong>,&nbsp;you’d&nbsp;either:&nbsp;</p>



<ol class="wp-block-list">
<li>Accept that SAST&nbsp;won&#8217;t&nbsp;catch this vulnerability pattern, or&nbsp;</li>



<li>Hire a consultant to write a custom query, or&nbsp;</li>



<li>Wait for your internal query expert to have bandwidth (probably weeks)&nbsp;</li>
</ol>



<p><strong>With AI Query Builder</strong>, you:&nbsp;</p>



<ol class="wp-block-list">
<li>Describe: &#8220;Create a query to detect when session tokens are used without calling AuthManager.validateSession() first”&nbsp;</li>



<li>Generate and test the query&nbsp;</li>



<li>Deploy it to production scanning&nbsp;</li>
</ol>



<p>Your coverage expands to include organization-specific security patterns that no&nbsp;standard&nbsp;tool would catch.&nbsp;</p>
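<p>As a hedged illustration of the pattern such a query would catch, here is a toy version of the scenario. <code>AuthManager.validateSession()</code> is the hypothetical method named above; the bodies are placeholders, not real authentication logic.</p>

```python
# Hypothetical pattern the custom query should catch: a session token
# trusted before AuthManager.validateSession() is called.
class AuthManager:
    @staticmethod
    def validateSession(token: str) -> bool:
        return token == "valid-token"  # stand-in for real validation

def load_dashboard(token: str) -> str:
    # Vulnerable: the token is trusted without validation -- the pattern
    # the custom query is meant to flag.
    return f"dashboard for {token}"

def load_dashboard_safe(token: str) -> str:
    # Safe: validation happens first, so a tuned rule stays quiet here.
    if not AuthManager.validateSession(token):
        raise PermissionError("invalid session")
    return f"dashboard for {token}"
```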



<h2 class="wp-block-heading article-anchor" id="article-anchor-6">
<strong>The Bigger Picture: Security Adapting to AI-Accelerated Development</strong>&nbsp;</h2>



<p>When developers use AI to code faster, security&nbsp;also&nbsp;needs AI to adapt faster. AI Query Builder ensures your SAST detection evolves&nbsp;just&nbsp;as quickly as your applications do.&nbsp;</p>



<p>The same way AI coding assistants have made developers more&nbsp;productive,&nbsp;AI security tools make security teams more responsive, comprehensive, and effective.&nbsp;</p>



<p>AI-powered remediation helps fix vulnerabilities faster, and AI-powered prioritization helps teams focus on what matters. AI-powered query building is the final piece, ensuring you detect the right things in the first place. Together, these AI capabilities enable security programs to scale alongside modern development practices instead of being left behind.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-7">
<strong>Getting Started</strong>&nbsp;</h2>



<p>AI Query Builder is available now to all&nbsp;Checkmarx&nbsp;One&nbsp;customers&nbsp;with SAST.&nbsp;</p>



<p><em>To start using it:&nbsp;</em></p>



<ol class="wp-block-list">
<li>Navigate to your SAST workspace in&nbsp;Checkmarx&nbsp;One</li>



<li>Access the Query Builder interface&nbsp;</li>



<li>Describe a security concern you want to detect&nbsp;</li>



<li>Generate, test, and refine your query&nbsp;</li>



<li>Deploy it to your scanning presets</li>
</ol>



<p><strong><a href="https://docs.checkmarx.com/en/34965-373395-ai-query-builder.html" target="_blank" rel="noreferrer noopener">Full documentation&nbsp;available here</a>&nbsp;</strong></p>



<p>For teams just getting started with SAST customization, we recommend beginning with false positive reduction.&nbsp;Identify&nbsp;the most common false positives your developers&nbsp;encounter&nbsp;and use AI Query Builder to tune queries to recognize your&nbsp;security controls. This immediate impact on developer experience builds trust and&nbsp;demonstrates&nbsp;value quickly.&nbsp;</p>



<p>For teams already doing query customization, AI Query Builder accelerates your existing workflow. You can prototype queries faster, test more variations, and expand coverage to more frameworks and patterns than was previously&nbsp;feasible.&nbsp;</p>



<p>The future of application security is AI-assisted at every level. With AI Query Builder, your SAST detection becomes as adaptive and intelligent as the development teams&nbsp;you’re&nbsp;protecting.&nbsp;</p>]]></content:encoded>
					
		
		
		
		<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/01/image-1-150x150.png" />
		<media:content url="https://checkmarx.com/wp-content/uploads/2026/01/image-1.png" medium="image">
			<media:title type="html">image</media:title>
			<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/01/image-1-150x150.png" />
		</media:content>
	</item>
		<item>
		<title>The AI Inventory Gap: Why Your Organization Has No Idea What AI Assets Are Part of Your Software Supply Chain</title>
		<link>https://checkmarx.com/ai-llm-tools-in-application-security/the-ai-inventory-gap-why-your-organization-has-no-idea-what-ai-assets-are-part-of-your-software-supply-chain/</link>
		
		<dc:creator><![CDATA[David Dewaele]]></dc:creator>
		<pubDate>Sun, 11 Jan 2026 10:42:46 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Supply Chain Security]]></category>
		<category><![CDATA[AI generated code]]></category>
		<category><![CDATA[Software Bill of Materials]]></category>
		<category><![CDATA[Software Supply Chain]]></category>
		<category><![CDATA[SSCS]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=106327</guid>

					<description><![CDATA[Your developers are&#160;already&#160;embedding or calling AI assets as part of&#160;your&#160;applications&#160;&#8211;&#160;whether you know it or not.&#160;Models, weights, MCPs, agent frameworks, and AI libraries are quietly making their way into codebases.&#160;&#160;&#160; Once these AI assets land in your repositories or container images, they become part of your software supply chain. The next&#160;Log4J&#160;doesn’t&#160;have to&#160;be a package; it&#160;can&#160;just as [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Your developers are&nbsp;already&nbsp;embedding or calling AI assets as part of&nbsp;your&nbsp;applications&nbsp;&#8211;&nbsp;whether you know it or not.&nbsp;Models, weights, MCPs, agent frameworks, and AI libraries are quietly making their way into codebases.&nbsp;&nbsp;&nbsp;</p>



<p>Once these AI assets land in your repositories or container images, they become part of your software supply chain. The next&nbsp;<a href="https://checkmarx.com/learn/aspm/inside-the-mind-of-an-attacker-how-malicious-code-is-crafted-and-deployed/" target="_blank" rel="noreferrer noopener">Log4J</a>&nbsp;doesn’t&nbsp;have to&nbsp;be a package; it&nbsp;can&nbsp;just as easily&nbsp;be a model, an MCP, or an AI&nbsp;asset&nbsp;you&nbsp;didn’t&nbsp;even know you shipped.&nbsp;</p>



<p>AI supply chain risks include any risks introduced by AI assets that become part of your software supply chain, <a href="https://checkmarx.com/zero-post/11-emerging-ai-security-risks-with-mcp-model-context-protocol/" target="_blank" rel="noreferrer noopener">ranging from</a> poisoned data and malicious or over-privileged MCPs/agents to unknown provenance.&nbsp;&nbsp;&nbsp;</p>



<p>And yet, despite&nbsp;this rapid adoption,&nbsp;organizations&nbsp;can’t&nbsp;answer a simple question:&nbsp;&nbsp;</p>



<p><strong>What AI components are we using&nbsp;in our software development, and where?&nbsp;</strong>&nbsp;</p>



<p>Without visibility,&nbsp;AI-related risk compounds:&nbsp;AI&nbsp;assets&nbsp;spread across codebases without security review, inventory, or policy controls, creating&nbsp;new&nbsp;blind spots&nbsp;or widening&nbsp;existing ones&nbsp;across&nbsp;your software supply chain.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">Why Most Organizations Lack Visibility into Their Devs’ AI Usage&nbsp;</h2>



<p>AI&nbsp;adoption is&nbsp;rapidly&nbsp;growing&nbsp;across&nbsp;organizations.&nbsp;In&nbsp;fact,&nbsp;our&nbsp;recent&nbsp;<a href="https://checkmarx.com/ai-llm-tools-in-application-security/just-released-the-future-of-appsec-in-the-era-of-ai-2026-industry-outlook/" target="_blank" rel="noreferrer noopener">Future of AppSec report</a>&nbsp;found that&nbsp;one in three respondents said over 60% of their organization’s code is written by AI.&nbsp;&nbsp;Yet only 18% have any sort of AI governance in place.&nbsp;</p>



<p>This rapid growth in AI usage (especially among developers), combined with a lack of oversight, has created a visibility gap fueled by fragmentation and tooling that was never built to address AI‑specific risks.&nbsp;</p>



<p>Here are the main reasons:&nbsp;&nbsp;</p>



<ul class="wp-block-list">
<li>Emergence of new AI-focused protocols and technologies. Example: MCP  (Model Context Protocol) was introduced only in November 2024. </li>
</ul>



<ul class="wp-block-list">
<li>AI fragmentation (Copilot, Claude, Microsoft, OpenAI, etc.). Multiple providers, teams picking different tools, no standardization. Lots of tools with different security approaches. </li>
</ul>



<ul class="wp-block-list">
<li>Code security and Supply Chain Security require different approaches. AppSec tools are evolving to detect AI-related code vulnerabilities—identifying prompt injections, tracking sensitive data flows to LLMs, and flagging improper output handling. This addresses how developers write and use AI in their code. But a separate challenge remains: gaining visibility into what AI assets exist across your software supply chain. Traditional scanners analyze data flows and patterns but AI supply chain security requires comprehensive asset inventory, provenance tracking, and governance. </li>
</ul>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">What is the AI Inventory Gap?&nbsp;</h2>



<p>The AI Inventory Gap refers to all the AI-related components embedded in your applications that your organization&nbsp;hasn&#8217;t&nbsp;tracked, reviewed, or governed,&nbsp;yet still ships as part of your software supply chain.&nbsp;</p>



<p>It typically includes:&nbsp;</p>



<ul class="wp-block-list">
<li>Models &amp; weights: Pre-trained or fine-tuned LLMs, CV models, embeddings </li>
</ul>



<ul class="wp-block-list">
<li>Agent frameworks: A software toolkit and structure for building, managing, and orchestrating autonomous AI agents </li>
</ul>



<ul class="wp-block-list">
<li>MCP servers: Programs that enable AI models, particularly large language models (LLMs), to access external data, tools, and workflows, acting as a bridge for AI agents to interact with the real world. </li>
</ul>



<ul class="wp-block-list">
<li>Datasets: Training and evaluation data, sometimes with sensitive or licensed content </li>
</ul>



<ul class="wp-block-list">
<li>Prompts: Operational logic dispersed across code and configuration </li>
</ul>



<ul class="wp-block-list">
<li>AI libraries &amp; integrations: SDKs, connectors, and wrappers that pull AI into runtime </li>
</ul>
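<p>Building that inventory can start small. The sketch below is a toy illustration, not a substitute for SCA or AI-BOM tooling, and the package-name hints are examples rather than an exhaustive list: it flags dependencies whose names suggest AI components as a first pass.</p>

```python
# Toy first pass at an AI inventory: flag dependencies whose names suggest
# AI components. Real coverage needs dedicated SCA / AI-BOM tooling.
AI_HINTS = ("openai", "anthropic", "transformers", "langchain", "torch", "mcp")

def flag_ai_dependencies(requirements: str) -> list[str]:
    """Scan a requirements-style listing for likely AI-related packages."""
    flagged = []
    for line in requirements.splitlines():
        name = line.split("==")[0].strip().lower()
        if name and any(hint in name for hint in AI_HINTS):
            flagged.append(name)
    return flagged

reqs = "flask==3.0.0\nlangchain==0.2.1\nopenai==1.30.0"
print(flag_ai_dependencies(reqs))  # → ['langchain', 'openai']
```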



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">The Risks of&nbsp;the AI Inventory Gap&nbsp;</h2>



<p>When AI components operate without control, the consequences are serious: hidden attack surfaces, operational surprises, compliance failures, and reputational damage, often discovered only after an incident or audit begins.&nbsp;</p>



<p>Key risks&nbsp;include:&nbsp;&nbsp;</p>



<ul class="wp-block-list">
<li>Model poisoning: Silently introduces backdoors, blind spots, or biased behavior that attackers exploit without detection </li>
<li>Unverified or malicious weights: Weights sourced from unknown origins without integrity checks are like untrusted binaries: they can expose you to remote code execution, contain hidden payloads or logic, or create backdoors for data exfiltration and resource abuse </li>
<li>Dataset exposure: Sensitive or licensed data leaked via training, prompts, or logs </li>
<li>Unsafe agents &amp; tools: Autonomous agents that can access files, networks, or services without guardrails </li>
<li>Unpinned versions: Silent updates to models or libraries change behavior and risk posture overnight. Unpinned versions allow unexpected or malicious updates to be introduced automatically, leading to supply chain attacks, breaking changes, or non-reproducible and insecure builds </li>
<li>Compliance gaps: Missing documentation, provenance, and audit trails lead to penalties and delays </li>
</ul>
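<p>The integrity checks called out above are straightforward to operationalize. Here is a minimal sketch, assuming you publish a known-good SHA-256 digest alongside each approved weights file; the function and file names are hypothetical, not part of any specific tool:</p>

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB weight files never load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed when a weights file doesn't match its pinned digest."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {expected_sha256}, got {actual}"
        )
```

<p>Pinning a digest doesn't make a model trustworthy on its own, but it does guarantee you are loading the artifact you reviewed rather than one silently swapped upstream.</p>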



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">AI Governance: Can You Trace Every Model, Dataset, and Dependency?&nbsp;</h2>



<p>AI adoption is following&nbsp;a&nbsp;pattern&nbsp;similar to&nbsp;what we&nbsp;see with open-source software: developers move fast&nbsp;while&nbsp;governance&nbsp;lags behind.&nbsp;&nbsp;</p>



<p>From a compliance point of view, the expectation is clear: reporting on and keeping an inventory of AI components is no longer optional. </p>



<p>But the reality is messy:&nbsp;</p>



<ul class="wp-block-list">
<li>
<a href="https://checkmarx.com/the-appsec-regulatory-review-and-assessment-guide/" target="_blank" rel="noreferrer noopener">Regulatory Expectations Are Rising</a>: Frameworks and regulations (e.g., EU AI-related requirements, AI governance standards) demand accountability and evidence, yet teams lack the tooling to inventory, assess, and report on AI usage across the enterprise. Compliance becomes reactive and costly. </li>
<li>Developer-Led AI Adoption Outpaces Governance: Developers integrate models, datasets, and frameworks to solve real problems fast. If governance processes are slow or unclear, people ship and promise to “clean it up later.” Those quick wins become permanent dependencies, often without reviews, version pinning, or provenance checks. </li>
<li>Fragmented Work Processes Make Inventory Hard: AI usage spans multiple teams and repos: data science, platform engineering, mobile, web, back-end, cloud functions. Without a central AI inventory, leadership cannot answer basic questions about which AI assets are in use, where they are deployed, and what risks are attached. The result is reactive security and vulnerabilities discovered only after it’s too late. </li>
<li>No Clear Ownership of AI Governance: Developers are focused on shipping features. They experiment with models, libraries, MCPs, and agent frameworks to solve problems quickly, not to define governance boundaries or maintain inventories. </li>
</ul>



<p>Application security teams, meanwhile, are often left to document and report on AI usage after the fact.&nbsp;They’re&nbsp;suddenly asked to answer questions about models, datasets, and agents embedded across the software supply chain,&nbsp;without the visibility or tooling needed to do so.&nbsp;</p>



<p>At the leadership level, responsibility is&nbsp;also&nbsp;often fragmented. CTOs drive adoption and velocity, CISOs are accountable for risk and compliance, and no single function clearly owns end-to-end governance of AI assets. The result is predictable: AI moves fast, ownership stays unclear, and Shadow AI fills the gap.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-5">What Can Organizations Do About It?&nbsp;</h2>



<p>AI&nbsp;inventory&nbsp;isn’t&nbsp;a problem you solve with a single control.&nbsp;It requires ownership, visibility, and governance embedded into how software is built.&nbsp;</p>



<p>This is a&nbsp;relatively new&nbsp;challenge, and while tooling is evolving, organizations can take concrete steps today to regain control:&nbsp;</p>



<ul class="wp-block-list">
<li>Decide on a clear owner of AI inventory, with defined responsibilities and authority, to whom other teams report. </li>
</ul>



<ul class="wp-block-list">
<li>Baseline your AI usage: Run deterministic discovery across prioritized repos and services to build an initial inventory. </li>
</ul>
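<p>Deterministic discovery can start small. The sketch below is illustrative only (the package watchlist is an assumption, not a product feature): it walks a repository and records which Python files import well-known AI SDKs.</p>

```python
import re
from pathlib import Path

# Illustrative watchlist: packages whose presence signals AI usage.
AI_PACKAGES = {"openai", "anthropic", "transformers", "langchain", "torch"}
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)

def scan_repo(root: Path) -> dict[str, set[str]]:
    """Map each Python file (relative path) to the watchlisted packages it imports."""
    findings: dict[str, set[str]] = {}
    for path in sorted(root.rglob("*.py")):
        hits = {pkg for pkg in IMPORT_RE.findall(path.read_text(errors="ignore"))
                if pkg in AI_PACKAGES}
        if hits:
            findings[str(path.relative_to(root))] = hits
    return findings
```

<p>The same pass can be extended to manifests (requirements.txt, package.json) and model file extensions to seed the initial inventory.</p>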



<ul class="wp-block-list">
<li>Classify &amp; assess risks: Tag assets by type (model, agent, dataset, prompt, library) and apply AI-specific risk checks. </li>
</ul>



<ul class="wp-block-list">
<li>Generate AI‑BOMs: Produce standards-aligned BOMs with provenance, licensing, dependencies, and risk metadata. </li>
</ul>
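<p>To make “standards-aligned” concrete, here is a minimal sketch of a CycloneDX-style ML-BOM component record. It is simplified and not schema-complete; the component names and values are illustrative assumptions:</p>

```python
import json

def aibom_component(name: str, version: str, sha256: str, supplier: str) -> dict:
    """One CycloneDX-style component entry for a model artifact (simplified)."""
    return {
        "type": "machine-learning-model",
        "name": name,
        "version": version,
        "supplier": {"name": supplier},
        "hashes": [{"alg": "SHA-256", "content": sha256}],
    }

bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        aibom_component("sentiment-classifier", "2.1.0", "ab" * 32, "internal-ml-team"),
    ],
}
print(json.dumps(bom, indent=2))
```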



<ul class="wp-block-list">
<li>Define your policies: Blacklists/whitelists of assets, acceptable risk thresholds, blocking builds for specific types of risk, etc. </li>
</ul>



<ul class="wp-block-list">
<li>Embed governance where work happens: Add PR checks, CI/CD gates, and dashboards to enforce policies and track trends. </li>
</ul>



<ul class="wp-block-list">
<li>Measure &amp; iterate: Monitor coverage, findings, MTTR, and compliance posture. Expand to more teams, apps, and environments. </li>
</ul>



<h2 class="wp-block-heading article-anchor" id="article-anchor-6">Final Thoughts&nbsp;&nbsp;</h2>



<p>AI has moved beyond the experiment phase. It is now part of the day-to-day reality of modern development teams, already deeply embedded into modern software stacks. But without visibility, every untracked model, dataset, or agent becomes a potential vulnerability.  </p>



<p>The bottom line? If you&nbsp;can’t&nbsp;see the AI in your software, you&nbsp;can’t&nbsp;control the risk.&nbsp;&nbsp;</p>






<figure class="wp-block-image size-full"><a href="https://checkmarx.com/request-a-demo/"><img decoding="async" width="901" height="321" src="https://checkmarx.com/wp-content/uploads/2026/01/image.png" alt="" class="wp-image-106328" srcset="https://checkmarx.com/wp-content/uploads/2026/01/image.png 901w, https://checkmarx.com/wp-content/uploads/2026/01/image-300x107.png 300w, https://checkmarx.com/wp-content/uploads/2026/01/image-768x274.png 768w, https://checkmarx.com/wp-content/uploads/2026/01/image-400x143.png 400w" sizes="(max-width: 901px) 100vw, 901px" /></a></figure>]]></content:encoded>
					
		
		
		
		<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/01/image-150x150.png" />
		<media:content url="https://checkmarx.com/wp-content/uploads/2026/01/image.png" medium="image">
			<media:title type="html">image</media:title>
			<media:thumbnail url="https://checkmarx.com/wp-content/uploads/2026/01/image-150x150.png" />
		</media:content>
	</item>
		<item>
		<title>The ROI of Agentic AI AppSec</title>
		<link>https://checkmarx.com/blog/the-roi-of-agentic-ai-appsec/</link>
		
		<dc:creator><![CDATA[Rebecca Spiegel]]></dc:creator>
		<pubDate>Mon, 29 Dec 2025 18:20:11 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=106266</guid>

					<description><![CDATA[ROI Looks Different in the AI Era  LLMs now accelerate how code is written, refactored, and merged. Traditional “scan-and-fix later” workflows&#160;can’t&#160;keep up with that pace; they push findings downstream, inflate rework, and slow releases. The&#160;financial impact&#160;shows up as&#160;extra PR rewrites, pipeline reruns, context switching, and escalations.&#160;&#160; The fix is to&#160;move AppSec to the point of creation,&#160;inside the IDE,&#160;so [&#8230;]]]></description>
										<content:encoded><![CDATA[<h2 class="wp-block-heading article-anchor" id="article-anchor-1">ROI Looks Different in the AI Era </h2>



<p>LLMs now accelerate how code is written, refactored, and merged. Traditional “scan-and-fix later” workflows&nbsp;can’t&nbsp;keep up with that pace; they push findings downstream, inflate rework, and slow releases. The&nbsp;financial impact&nbsp;shows up as&nbsp;extra PR rewrites, pipeline reruns, context switching, and escalations.&nbsp;&nbsp;</p>



<p>The fix is to&nbsp;move AppSec to the point of creation,&nbsp;inside the IDE,&nbsp;so issues are prevented or remediated while the developer’s mental stack is fresh.&nbsp;</p>



<p>Agentic AppSec is autonomous, context-aware assistance that validates and remediates during coding, not after the commit. Gartner frames this category as AI Code Security Assistance (ACSA); <strong><a href="https://checkmarx.com/product/checkmarx-one-assist/">Checkmarx One </a><a href="https://checkmarx.com/product/checkmarx-one-assist/" target="_blank" rel="noreferrer noopener">Assist</a> </strong>operationalizes it through Developer Assist. </p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">A Practical ROI Model You Can Take to Your CFO </h2>



<p>For all the talk about developer productivity and AI acceleration, most AppSec leaders still struggle to express value in the language of finance. Your CFO doesn’t want “shift left” jargon and vulnerability counts; they want a structured model that translates engineering efficiency into measurable return. </p>



<p>When we analyze ROI for <strong>Checkmarx One <a href="https://checkmarx.com/product/developer-assist/" target="_blank" rel="noreferrer noopener">Developer Assist</a></strong>, we focus on five value buckets that both finance and engineering already recognize. Each ties directly to operational metrics your leadership team tracks, making it easy to build a defensible business case for Agentic AppSec: </p>



<h3 class="wp-block-heading">1. Mean Time to Remediate (MTTR) </h3>



<p>Inline findings and explainable fixes inside the IDE compress triage and remediation from hours to minutes. Since developers resolve vulnerabilities in context, fewer issues escape to late-stage testing or production, where every fix costs exponentially more. The result is measurable improvement in <a href="https://checkmarx.com/webinar-dora-using-security-to-speed-up-development/" target="_blank" rel="noreferrer noopener">DORA MTTR</a> and a more predictable release cadence. </p>



<h3 class="wp-block-heading">2. Throughput (Features per Period) and Lead Time for Changes </h3>






<p>Every context switch (jumping from IDE to portal, waiting on a review, or rerunning a build) creates friction that slows throughput. When developers fix in place, PR churn decreases and pipelines stabilize. That efficiency shows up directly as more completed work per sprint and a measurable reduction in Lead Time for Changes, one of the most visible metrics for executives tracking delivery velocity. </p>



<h3 class="wp-block-heading">3. False-Positive Drag</h3>



<p>Noise has a cost. Each false positive wastes time, erodes trust in tools, and slows adoption. By combining high-fidelity detection with explainable remediation, Developer Assist reduces alert fatigue across the SDLC. A <a href="https://checkmarx.com/resources/best-buy/" target="_blank" rel="noreferrer noopener">Checkmarx case study found that Best Buy</a> reduced false positives by 80%, illustrating the real economic drag of noisy security and the ROI of precision. </p>



<h3 class="wp-block-heading">4. Rework and Failure Cost  </h3>



<p>Rework is one of the most underestimated drains on engineering productivity. Every post-merge defect triggers retesting, re-review, and sometimes a full CI/CD rerun. By catching vulnerabilities inside the IDE, Developer Assist prevents this expensive cycle before it begins. The result is fewer failed builds, lower operational overhead, and more stable release plans, which are benefits that directly translate into reduced operational expenses (OpEx) and improved predictability. </p>



<h3 class="wp-block-heading">5. Developer Experience (Retention and Flow) </h3>



<p>Security tools succeed or fail on adoption. If they slow engineers down, they’re disabled or ignored. Developer Assist meets developers where they work, offering AI-powered help that feels like collaboration, not interruption. Tools that improve flow and reduce cognitive friction boost both sentiment and retention, gains that compound over time into sustainable throughput and morale. </p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">A CFO’s Takeaway </h2>



<p>When you put it all together, these five metrics &#8211; MTTR, throughput, false-positive drag, rework cost, and developer experience &#8211; form a complete Agentic AppSec ROI model. It ties productivity, quality, and cost together in one narrative that resonates from the engineering floor to the boardroom. Agentic AppSec is a measurable accelerator of business outcomes. The data is already in your DevOps pipeline, and the only question is whether you’re ready to quantify it. </p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">Mechanics, Not Magic, Make Value </h2>



<p>Every second counts when prevention happens in the IDE. By embedding detection, validation, and remediation directly where developers work, the result is measurable productivity and stronger security posture at the same time. </p>



<h3 class="wp-block-heading">Detect earlier, fix faster (MTTR and failure avoidance) </h3>



<p>Developer Assist analyzes source, manifests, IaC, and container descriptors as you type, surfacing explainable findings and one-click “Fix with Assist” flows right in the editor. Early detection reduces “late discovery” work and lowers the chance of broken builds. </p>



<h3 class="wp-block-heading">Explainable AI remediation (trust drives adoption) </h3>



<p>Structured prompts plus verified remediation data mean developers see why a change is needed, not just a diff. That “explain then apply” pattern speeds reviews and keeps security aligned to developer intent: critical for sustained adoption. </p>



<h3 class="wp-block-heading">Integrated coverage (fewer tools, fewer gaps) </h3>



<p>Because Developer Assist is powered by the&nbsp;Checkmarx&nbsp;platform, teams&nbsp;benefit&nbsp;from proven detection across SAST, SCA,&nbsp;IaC, secrets and container risks delivered in a consistent, IDE-first workflow. Reducing tool switches and&nbsp;consolidating&nbsp;signals also simplifies reporting upstream.&nbsp;</p>



<p>When AppSec becomes an active participant in development, not a passive gate at the end of it, security scales with the speed of code creation. Developer Assist bridges that gap, merging developer efficiency with enterprise-grade validation. The impact is cumulative: fewer missed vulnerabilities, faster clean builds, and quantifiable time savings that turn secure coding into a measurable business&nbsp;advantage.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-5">Estimate Your ROI in Two Steps </h2>



<p><strong>Step 1: Time saved per issue </strong></p>



<ul class="wp-block-list">
<li>Without&nbsp;IDE-level remediation: assume ~1–3 hours per issue (triage, rework, rebuilds).</li>



<li>With&nbsp;Developer Assist: much of that&nbsp;time&nbsp;collapses into minutes because context is&nbsp;fresh&nbsp;and changes are applied&nbsp;inline.&nbsp;</li>
</ul>



<p><strong>Step 2: Multiply by avoided rework </strong></p>



<ul class="wp-block-list">
<li>Count how many security-related build failures/reruns you had last quarter.</li>



<li>Apply your blended engineering hourly rate to the time you didn’t spend reworking those PRs.</li>
</ul>
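<p>Putting the two steps together as arithmetic (all inputs below are placeholder assumptions; substitute your own pipeline data):</p>

```python
def roi_estimate(issues_per_quarter: int,
                 hours_saved_per_issue: float,
                 reruns_avoided: int,
                 hours_per_rerun: float,
                 blended_hourly_rate: float) -> float:
    """Quarterly savings: Step 1 (time saved per issue) plus Step 2 (avoided rework)."""
    fix_savings = issues_per_quarter * hours_saved_per_issue
    rework_savings = reruns_avoided * hours_per_rerun
    return (fix_savings + rework_savings) * blended_hourly_rate

# Example: 120 issues saving 1.5h each, 30 avoided reruns at 2h, $95/h blended rate
print(roi_estimate(120, 1.5, 30, 2.0, 95.0))  # 22800.0
```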



<p><strong>Want a walkthrough? Our team can map DORA metrics to pre- vs post-Assist performance using your pipeline data. </strong><a href="https://checkmarx.com/request-a-demo/" target="_blank" rel="noreferrer noopener">Let’s talk.</a> </p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-6">What Makes a Tool Actually Agentic? And Does It Matter for ROI? </h2>



<p><a href="https://checkmarx.com/press-releases/checkmarx-named-a-leader-in-the-2025-gartner-magic-quadrant/" target="_blank" rel="noreferrer noopener">Gartner’s&nbsp;AI Code Security Assistance (ACSA)</a>&nbsp;lens&nbsp;emphasizes&nbsp;pre-commit, intent-aware control&nbsp;vs reactive scanning. In practice, this means&nbsp;fewer defects make it to late stages&nbsp;(where each fix is 3–10x&nbsp;more expensive than in development) and the ones that do arrive are already annotated with context.&nbsp;That’s&nbsp;why&nbsp;agentic&nbsp;beats “scan later” in cost curves.&nbsp;</p>



<p>Developer Assist pays for itself by&nbsp;eliminating&nbsp;rework at the source.&nbsp;When security happens in the IDE, you fix faster, ship faster, and report outcomes that resonate from dev teams to the board.&nbsp;</p>



<p><strong>Read More: <a href="https://checkmarx.com/blog/the-executive-guide-to-quantifying-agentic-appsec-roi/" target="_blank" rel="noreferrer noopener">The Executive Guide to Quantifying Agentic AppSec ROI, From IDE Metrics to Board-Ready Numbers</a> </strong></p>



<p><strong>Download: <a href="https://checkmarx.com/the-agentic-ai-buyers-guide/" target="_blank" rel="noreferrer noopener">The Agentic AI Buyer’s Guide</a> </strong></p>]]></content:encoded>
					
		
		
		
	</item>
		<item>
		<title>The Executive Guide to Quantifying Agentic AppSec ROI, From IDE Metrics to Board-Ready Numbers</title>
		<link>https://checkmarx.com/blog/the-executive-guide-to-quantifying-agentic-appsec-roi/</link>
		
		<dc:creator><![CDATA[Rebecca Spiegel]]></dc:creator>
		<pubDate>Mon, 29 Dec 2025 17:54:26 +0000</pubDate>
				<category><![CDATA[AI & LLM Tools in Application Security]]></category>
		<category><![CDATA[Application Security Trends & Insights]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Checkmarx One]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[AI generated code]]></category>
		<category><![CDATA[AppSec]]></category>
		<category><![CDATA[AppSec Maturity]]></category>
		<guid isPermaLink="false">https://staging.checkmarx.com/?p=106264</guid>

					<description><![CDATA[For executives, proving the ROI of security investments has always been complex. Traditional AppSec tools&#160;report on&#160;vulnerabilities,&#160;found and fixed, but those metrics rarely translate into tangible business value.&#160;Agentic&#160;AI AppSec, led by&#160;Checkmarx&#160;One&#160;Developer Assist,&#160;changes that equation.&#160; By embedding explainable, real-time remediation directly into the IDE, Developer Assist helps enterprises measure impact in terms that matter to both engineering [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>For executives, proving the ROI of security investments has always been complex. Traditional AppSec tools&nbsp;report on&nbsp;vulnerabilities,&nbsp;found and fixed, but those metrics rarely translate into tangible business value.&nbsp;Agentic&nbsp;AI AppSec, led by&nbsp;<strong>Checkmarx&nbsp;One&nbsp;<a href="https://checkmarx.com/product/developer-assist/" target="_blank" rel="noreferrer noopener">Developer Assist</a></strong><a href="https://checkmarx.com/product/developer-assist/" target="_blank" rel="noreferrer noopener">,</a>&nbsp;changes that equation.&nbsp;</p>



<p>By embedding explainable, real-time remediation directly into the IDE, Developer Assist helps enterprises measure impact in terms that matter to both engineering and finance: time saved, quality improved, and cost avoided.&nbsp;&nbsp;</p>



<p>Here’s&nbsp;how to make that case with metrics your business already trusts.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-1">Start With Metrics Your Business Already Trusts&nbsp;</h2>



<p>The first rule of security ROI: your CFO&nbsp;doesn’t&nbsp;buy “scan accuracy”.&nbsp;They&nbsp;buy measurable outcomes that improve throughput, reduce cost, or accelerate delivery.&nbsp;</p>



<p>That’s why the most credible ROI models for&nbsp;Agentic AppSec&nbsp;align with the&nbsp;DORA metrics&nbsp;engineering leaders already&nbsp;track,&nbsp;and&nbsp;extend them with quality and cost indicators.&nbsp;</p>



<h3 class="wp-block-heading">Lead Time for Changes (Cycle Time)&nbsp;</h3>



<p>Inline, IDE-level guidance shortens the time between code commit and deployment.&nbsp;</p>



<p>By surfacing vulnerabilities and fixes&nbsp;as developers code, teams spend less time revisiting PRs or waiting on security reviews. Fewer bottlenecks mean faster feature delivery and shorter feedback loops.&nbsp;</p>



<h3 class="wp-block-heading">Change Failure Rate&nbsp;</h3>



<p>Agentic AppSec catches misconfigurations, insecure dependencies, and code smells&nbsp;before&nbsp;a commit,&nbsp;not after a build breaks.&nbsp;Fewer failed&nbsp;builds&nbsp;and&nbsp;hot-fixes&nbsp;translate directly to higher release stability and lower unplanned work, which&nbsp;impacts&nbsp;both velocity and engineering morale.&nbsp;</p>



<h3 class="wp-block-heading">Mean Time to Remediate (MTTR)&nbsp;</h3>



<p>Traditional tools force developers to context-switch between security reports and code. Developer Assist embeds explainable remediation right inside the IDE.&nbsp;</p>



<p>Developers understand&nbsp;<em>why</em>&nbsp;a fix matters and can resolve it&nbsp;immediately,&nbsp;reducing MTTR across sprints and improving compliance reporting accuracy.&nbsp;</p>



<h3 class="wp-block-heading">False-Positive Rate&nbsp;</h3>



<p>Precision&nbsp;isn’t&nbsp;just a technical&nbsp;metric,&nbsp;it’s&nbsp;an economic one.&nbsp;Every false positive consumes developer time.&nbsp;Best Buy, a Checkmarx customer, reduced false positives by&nbsp;80%&nbsp;with&nbsp;Checkmarx One, reclaiming hundreds of developer hours per quarter. That reclaimed time is a quantifiable efficiency gain.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-2">Translate Engineering Signals&nbsp;into&nbsp;Dollars&nbsp;</h2>



<p>Once&nbsp;you’ve&nbsp;anchored your metrics,&nbsp;it’s&nbsp;time to connect them to&nbsp;financial impact. The key is reframing engineering efficiency as&nbsp;<em>cost avoidance and productivity gain</em>.&nbsp;Here’s&nbsp;how to quantify each dimension:</p>



<h3 class="wp-block-heading">Rework Avoided&nbsp;</h3>



<p>Rework is the silent tax on software delivery. Every time a vulnerability is caught post-merge, the fix requires retesting, redeploying, and re-reviewing.&nbsp;</p>



<p>To calculate the value of avoiding that rework:&nbsp;</p>



<ol class="wp-block-list">
<li>Gather last quarter’s data on&nbsp;security-related build failures or reruns.&nbsp;</li>



<li>Estimate the average time spent on each (triage + fix + retest).&nbsp;</li>



<li>Multiply that time&nbsp;by&nbsp;your&nbsp;blended engineering hourly rate.&nbsp;</li>



<li>Attribute the reduction in failures after Developer Assist adoption as the savings delta.</li>
</ol>






<p>What you’ll find is that even a modest 10% reduction in rework yields measurable ROI, because rework compounds across builds, QA cycles, and deployment delays. </p>
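<p>The four steps above reduce to a one-line savings delta. A minimal sketch with placeholder numbers (replace them with your own quarterly data):</p>

```python
def rework_savings_delta(failures_before: int,
                         failures_after: int,
                         avg_hours_per_failure: float,
                         blended_hourly_rate: float) -> float:
    """Dollar value of the drop in security-related build failures after adoption."""
    avoided = max(failures_before - failures_after, 0)
    return avoided * avg_hours_per_failure * blended_hourly_rate

# Example: 50 -> 45 failures per quarter (a 10% reduction), 3h each, $95/h
print(rework_savings_delta(50, 45, 3.0, 95.0))  # 1425.0
```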



<h3 class="wp-block-heading">Time-To-Value Acceleration&nbsp;</h3>



<p>Time is revenue. Faster, cleaner releases mean features reach customers sooner, accelerating the revenue recognition timeline.&nbsp;Developer Assist’s inline guidance prevents bottlenecks that block PRs or delay&nbsp;merges. Tie your improvement in&nbsp;Lead&nbsp;Time for&nbsp;Changes&nbsp;directly to your product roadmap milestones.&nbsp;Finance already understands the concept of time-to-market; now&nbsp;they’ll&nbsp;see how in-IDE AppSec directly&nbsp;impacts&nbsp;it.&nbsp;</p>



<h3 class="wp-block-heading">Alert Fatigue Reduction&nbsp;</h3>



<p>Noise&nbsp;doesn’t&nbsp;just frustrate&nbsp;developers,&nbsp;it drains resources. Every false positive&nbsp;triggers&nbsp;a triage cycle that adds no business value.&nbsp;By reducing false positives through explainable AI and high-fidelity scanning, Developer Assist saves real hours.&nbsp;Use the&nbsp;Best Buy 80% reduction benchmark&nbsp;as a directional proxy in your&nbsp;initial&nbsp;model, and&nbsp;replace it with your own metrics after a 30-day pilot.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-3">What “Agentic” Changes in the Cost Model&nbsp;</h2>



<p>Executives are hearing the term&nbsp;<em>Agentic AI</em>&nbsp;more often, but what it really means for ROI is straightforward: it shifts AppSec from a reactive process to an autonomous, context-aware assistant.&nbsp;As Gartner’s framing of&nbsp;<a href="https://checkmarx.com/blog/what-is-acsa-defining-ai-code-security-assistance-for-the-enterprise/" target="_blank" rel="noreferrer noopener">AI Code Security Assistance (ACSA)</a>&nbsp;describes, these systems&nbsp;assist&nbsp;developers with policy-aware validation in real time,&nbsp;closing the gap between development and security.&nbsp;</p>



<p>That shift has two major financial effects:&nbsp;</p>



<ol class="wp-block-list">
<li>Defect prevention instead of post-factum correction.&nbsp;<br>Fewer defects reach production, and those that do carry richer metadata for faster triage.&nbsp;</li>
<li>Cost compression.&nbsp;<br>The cost of fixing a defect late in the lifecycle is&nbsp;3–10x&nbsp;higher&nbsp;than fixing it during development. By detecting and resolving issues at the creation point, Developer Assist drives&nbsp;a direct&nbsp;cost avoidance multiple.&nbsp;</li>
</ol>



<p>In essence,&nbsp;agentic&nbsp;AppSec redefines security from a cost center into a throughput engine,&nbsp;one that pays dividends in efficiency, developer satisfaction, and customer trust.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-4">From Metrics to Board-Ready Outcomes&nbsp;</h2>



<p>Agentic AI AppSec&nbsp;doesn’t&nbsp;just change how developers work; it changes how executives justify security investment.&nbsp;By reframing technical metrics into measurable outcomes&nbsp;like&nbsp;reduced rework, accelerated delivery, fewer false positives, and higher developer efficiency,&nbsp;Developer Assist gives both CISOs and CFOs a clear ROI narrative supported by real data.&nbsp;Security&nbsp;isn’t&nbsp;slowing you down anymore.&nbsp;It’s&nbsp;making every release faster, safer, and smarter.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-5">How Checkmarx One Developer Assist Implements Agentic AppSec for ROI&nbsp;</h2>



<h3 class="wp-block-heading">Inline Prevention and Explainable Fixes</h3>



<p>The combination of IDE-native detection and explainable remediation shortens MTTR and reduces Change Failure Rate,&nbsp;two&nbsp;<a href="https://checkmarx.com/blog/the-rhythm-of-revolution-ais-role-in-the-next-tech-tipping-point/" target="_blank" rel="noreferrer noopener">Google&nbsp;DORA metrics</a>&nbsp;with direct&nbsp;Operating Expenses&nbsp;impact.&nbsp;</p>



<h3 class="wp-block-heading">Fewer Tools to Juggle, Clearer Reporting Up the Stack&nbsp;</h3>



<p>Because Developer Assist is powered by the&nbsp;Checkmarx&nbsp;platform, you get consistent detection across SAST/SCA/IaC/Secrets/Containers with in-IDE guidance,&nbsp;and unified reporting for execs. That reduces swivel-chair time and makes trend reporting credible.&nbsp;</p>



<h3 class="wp-block-heading">Adoption That Sticks&nbsp;</h3>



<p>If developers&nbsp;don’t&nbsp;trust a tool, it&nbsp;won’t&nbsp;move metrics.&nbsp;Checkmarx&nbsp;content emphasizes just-in-time, in-flow&nbsp;assistance&nbsp;that teaches while fixing, which&nbsp;is&nbsp;critical&nbsp;for sustained adoption and compounding ROI.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-6">Your 30-Day&nbsp;Proof&nbsp;Plan (Feel Free to Copy/Paste)&nbsp;</h2>



<p><em>Week 1 – Baseline&nbsp;</em></p>



<ul class="wp-block-list">
<li>Extract last-quarter DORA metrics (Lead Time, Change Failure Rate, MTTR).&nbsp;</li>



<li>Pull counts for security-related build failures and average time per failure.&nbsp;</li>
</ul>



<p>W<em>eek 2 – Pilot</em></p>



<ul class="wp-block-list">
<li>Enable Developer Assist for 1–2 active teams in VS Code/Cursor/Windsurf.&nbsp;</li>



<li>Track: inline fixes applied, PRs with fewer revisions, build failures avoided.&nbsp;</li>
</ul>



<p><em>Week 3 – Compare</em></p>



<ul class="wp-block-list">
<li>Contrast pilot teams vs. control on DORA metrics + failure counts.&nbsp;</li>



<li>Capture anecdotal feedback on explainability and dev flow.&nbsp;</li>
</ul>



<p><em>Week 4 – Roll-up&nbsp;</em></p>



<ul class="wp-block-list">
<li>Convert time deltas into dollar savings.&nbsp;</li>



<li>Exec slide: “From IDE events → DORA improvement → cost avoided.”&nbsp;</li>
</ul>



<h2 class="wp-block-heading article-anchor" id="article-anchor-7">FAQs&nbsp;Execs&nbsp;Will&nbsp;Ask (and&nbsp;Concise&nbsp;Answers)&nbsp;</h2>



<p><strong>Is this just SCA in the editor?</strong>&nbsp;</p>



<p>&nbsp;No. Developer&nbsp;Assist brings in-IDE guidance backed by the&nbsp;Checkmarx&nbsp;platform across code, dependencies,&nbsp;IaC, secrets, and container descriptors&nbsp;with explainable remediation, not just alerts.&nbsp;</p>



<p><strong>How is this different from reactive scanning?</strong>&nbsp;</p>



<p>&nbsp;It prevents issues before they hit the repo/CI and annotates fixes with context&nbsp;developers&nbsp;understand,&nbsp;improving both MTTR and adoption.&nbsp;&nbsp;</p>



<p><strong>Is there&nbsp;analyst&nbsp;alignment for this approach?&nbsp;</strong>&nbsp;</p>



<p>Yes! <a href="https://www.gartner.com/doc/reprints?id=1-2M5Q4EI5&amp;ct=251024&amp;st=sb" target="_blank" rel="noreferrer noopener">Gartner</a>’s AI Code Security Assistance (<a href="https://checkmarx.com/ai-llm-tools-in-application-security/the-productivity-security-paradox-of-ai-coding-assistants/" target="_blank" rel="noreferrer noopener">ACSA</a>) concept describes exactly this: policy-aware assistants validating code at creation.&nbsp;</p>



<h2 class="wp-block-heading article-anchor" id="article-anchor-8">Close the&nbsp;Loop&nbsp;Between the IDE and the&nbsp;Boardroom&nbsp;</h2>



<p>Agentic AppSec&nbsp;isn’t&nbsp;a cost center;&nbsp;it’s&nbsp;a throughput engine. With Developer Assist, leaders see cleaner sprints, fewer reruns, faster releases, and measurable MTTR gains,&nbsp;all traceable to in-IDE prevention and explainable remediation.&nbsp;</p>



<p><strong>Download</strong>:&nbsp;<strong><a href="https://checkmarx.com/the-agentic-ai-buyers-guide/" target="_blank" rel="noreferrer noopener">The&nbsp;Agentic&nbsp;AI Buyer’s Guide</a>&nbsp;</strong></p>



<p><strong>Read: <a href="https://checkmarx.com/blog/the-roi-of-agentic-ai-appsec/">The ROI of Agentic AI AppSec </a></strong></p>



<p></p>]]></content:encoded>
					
		
		
		
	</item>
	</channel>
</rss>
