<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>AWS DevOps &amp; Developer Productivity Blog</title>
	<atom:link href="https://aws.amazon.com/blogs/devops/feed/" rel="self" type="application/rss+xml"/>
	<link>https://aws.amazon.com/blogs/devops/</link>
	<description/>
	<lastBuildDate>Mon, 06 Apr 2026 18:10:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>Streamlining Cloud Compliance at GoDaddy Using CDK Aspects</title>
		<link>https://aws.amazon.com/blogs/devops/streamlining-cloud-compliance-at-godaddy-using-cdk-aspects/</link>
		
		<dc:creator><![CDATA[Juan Pablo Melgarejo Zamora]]></dc:creator>
		<pubDate>Fri, 03 Apr 2026 18:06:03 +0000</pubDate>
				<category><![CDATA[AWS Cloud Development Kit]]></category>
		<category><![CDATA[AWS CDK]]></category>
		<guid isPermaLink="false">8f2146c1ae51c759e22eef53df737554ee92fb3b</guid>

					<description>This is a guest post written by Jasdeep Singh Bhalla from GoDaddy. AWS Cloud Development Kit (CDK) Aspects are a powerful mechanism that allows you to apply organization-wide policies, like security rules, tagging standards, and compliance requirements across your entire infrastructure as code. By implementing the Visitor pattern, Aspects can inspect and modify every construct […]</description>
										<content:encoded>&lt;p&gt;&lt;em&gt;This is a guest post written by Jasdeep Singh Bhalla from GoDaddy.&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/aspects.html"&gt;AWS Cloud Development Kit (CDK) Aspects&lt;/a&gt; are a powerful mechanism that allows you to apply organization-wide policies, like security rules, tagging standards, and compliance requirements across your entire infrastructure as code. By implementing the Visitor pattern, Aspects can inspect and modify every construct in your CDK application before it’s synthesized into &lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt; templates, enabling you to enforce organizational standards automatically at build time.&lt;/p&gt; 
&lt;p&gt;At GoDaddy, we’ve used CDK Aspects to transform how we enforce compliance across our massive AWS footprint. Our Cloud Governance team is responsible for ensuring every AWS resource deployed across thousands of accounts adheres to strict security, compliance, and operational standards.&lt;/p&gt; 
&lt;p&gt;We have a simple goal:&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;Make it easy for developers to do the right thing without slowing them down.&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;Traditionally, we relied on documentation, Slack threads, and peer reviews to flag misconfigurations. But as our cloud footprint grew, this approach quickly hit its limits. It simply didn’t scale.&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;We needed something proactive&lt;/em&gt;.&lt;/p&gt; 
&lt;h2&gt;From reactive to proactive: CloudFormation Hooks&lt;/h2&gt; 
&lt;p&gt;Our first major leap forward came through &lt;a href="https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/what-is-cloudformation-hooks.html"&gt;CloudFormation Hooks&lt;/a&gt;. These allow us to validate every resource in a CloudFormation template against our compliance rules at deployment time. If a resource passes, it gets deployed. If not, the deployment is blocked, and we provide developers with clear, actionable error messages to help them fix the template.&lt;/p&gt; 
&lt;p&gt;This worked well, but it wasn’t perfect. Developers often discovered issues only after writing their entire templates and attempting deployment, then had to edit values by hand to make the templates compliant. This was time-consuming and made for a poor developer experience.&lt;/p&gt; 
&lt;p&gt;We needed an automated way to make CloudFormation templates meet our compliance rules without manual effort.&lt;/p&gt; 
&lt;p&gt;This is where &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/aspects.html"&gt;CDK Aspects&lt;/a&gt; come in.&lt;/p&gt; 
&lt;h2&gt;CDK Aspects: compliance while you code&lt;/h2&gt; 
&lt;p&gt;CDK Aspects are our answer to proactive, early-stage compliance enforcement — catching and resolving issues at the code level, before deployment.&lt;/p&gt; 
&lt;p&gt;In AWS CDK, an Aspect is a lightweight &lt;a href="https://refactoring.guru/design-patterns/visitor"&gt;&lt;em&gt;Visitor&lt;/em&gt;&lt;/a&gt; that can inspect and act on every construct in your infrastructure code before it’s synthesized into a CloudFormation template. This means you can apply organization-wide rules as developers write CDK code, not after.&lt;/p&gt; 
&lt;p&gt;Think of it as linting for your infrastructure.&lt;/p&gt; 
&lt;p&gt;Want to ensure all &lt;a href="https://aws.amazon.com/s3/"&gt;Amazon Simple Storage Service (Amazon S3)&lt;/a&gt; buckets have encryption enabled? Require specific tags on resources? Block public access on security groups? With &lt;strong&gt;CDK Aspects&lt;/strong&gt;, all of that becomes not only possible, but automatic.&lt;/p&gt; 
&lt;h2&gt;Under the hood: how CDK Aspects work&lt;/h2&gt; 
&lt;p&gt;CDK Aspects are a powerful mechanism for inspecting and modifying your infrastructure as code. At their core, they use the &lt;a href="https://refactoring.guru/design-patterns/visitor"&gt;Visitor Pattern&lt;/a&gt;, which allows you to traverse a tree of objects (constructs) and perform operations on each node without modifying the constructs themselves directly.&lt;/p&gt; 
&lt;h3&gt;Interface IAspect&lt;/h3&gt; 
&lt;p&gt;An Aspect is a class that implements the IAspect interface:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;interface IAspect {
    visit(node: IConstruct): void;
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The single &lt;code&gt;visit()&lt;/code&gt; method is called for every construct in the scope where the Aspect is applied. Inside &lt;code&gt;visit()&lt;/code&gt;, you can inspect, modify, or enforce rules on the construct.&lt;/p&gt; 
&lt;h3&gt;Adding Aspects&lt;/h3&gt; 
&lt;p&gt;To attach an Aspect to a construct (or a tree of constructs), use the following method:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;Aspects.of(myConstruct).add(new SomeAspect());&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This adds the Aspect to the construct’s internal list.&lt;/p&gt; 
&lt;p&gt;When &lt;code&gt;cdk deploy&lt;/code&gt; is run, a CDK app goes through several phases:&lt;/p&gt; 
&lt;div id="attachment_25149" style="width: 1429px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-25149" loading="lazy" class="wp-image-25149 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/30/cdk_app_lifecycle.png" alt="A horizontal flow diagram illustrating the CDK App Lifecycle consisting of six sequential stages connected by arrows from left to right. The stages are: 1) CDK app source code, 2) Construct, 3) Prepare, 4) Validate, 5) Synthesize, and 6) Deploy. Each stage is represented by a light blue rectangular box with centered text. The entire lifecycle flow is contained within a gray bordered frame with the title 'CDK App Lifecycle' at the top. This diagram shows the progression of AWS Cloud Development Kit (CDK) applications from initial source code through deployment." width="1419" height="226"&gt;
 &lt;p id="caption-attachment-25149" class="wp-caption-text"&gt;Figure 1: CDK app lifecycle from source code to deployment&lt;/p&gt;
&lt;/div&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Construction&lt;/strong&gt; – Constructs are instantiated.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Preparation&lt;/strong&gt; – Final modifications and Aspects are applied.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Validation&lt;/strong&gt; – Checks for invalid configurations.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Synthesis&lt;/strong&gt; – CloudFormation templates are generated.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Deployment&lt;/strong&gt; – Resources are provisioned in AWS.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Aspects are executed during the &lt;strong&gt;Preparation&lt;/strong&gt; phase, which happens automatically. This ensures all rules, validations, or mutations are applied before synthesis, so your generated CloudFormation templates are compliant and valid before deployment.&lt;/p&gt; 
&lt;p&gt;During the &lt;strong&gt;Preparation&lt;/strong&gt; phase of the CDK lifecycle, CDK traverses the construct tree and calls &lt;code&gt;visit()&lt;/code&gt; on each node in top-down order (parent → children). For example, the following &lt;code&gt;visit()&lt;/code&gt; method enforces KMS encryption on every S3 bucket:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;visit(node: IConstruct) {
    if (node instanceof s3.Bucket) {
        node.encryption = s3.BucketEncryption.KMS; // Mutates the resource
    }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: a CDK Aspect that automatically adds encryption to all S3 buckets in a stack by mutating the resource:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;class EnforceBucketEncryption implements IAspect {
    visit(node: IConstruct) {
        if (node instanceof s3.Bucket) {
            node.encryption = s3.BucketEncryption.KMS; // Mutates the resource
        }
    }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This aspect can be registered on the stack by calling the following method:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;Aspects.of(this).add(new EnforceBucketEncryption());&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The template generated by CDK will look something like this:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-yaml"&gt;Resources:
    MyBucket:
        Type: AWS::S3::Bucket
        Properties:
            BucketEncryption:
                ServerSideEncryptionConfiguration:
                    - ServerSideEncryptionByDefault:
                          SSEAlgorithm: aws:kms
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;From the example above, you can see that the &lt;strong&gt;BucketEncryption&lt;/strong&gt; property is added to the &lt;strong&gt;MyBucket&lt;/strong&gt; resource by the Aspect.&lt;/p&gt; 
&lt;p&gt;During the &lt;strong&gt;prepare&lt;/strong&gt; phase, CDK traverses the construct tree from top to bottom – starting at the &lt;strong&gt;App&lt;/strong&gt;, then each &lt;strong&gt;Stack&lt;/strong&gt;, and down to resources like &lt;strong&gt;S3&lt;/strong&gt; buckets. At each node, Aspects are applied by invoking &lt;code&gt;aspect.visit(node)&lt;/code&gt;, allowing inspection and modification of resources. By the time CDK reaches the &lt;strong&gt;synth&lt;/strong&gt; step, the CloudFormation template already includes these Aspect-driven changes, ensuring compliance and best practices are consistently enforced before deployment.&lt;/p&gt; 
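&lt;p&gt;The traversal described above can be sketched in a few lines of plain TypeScript. This is an illustration only: the &lt;code&gt;TreeNode&lt;/code&gt; and &lt;code&gt;Aspect&lt;/code&gt; types below are simplified stand-ins, not the aws-cdk-lib constructs.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;// Toy model of the construct tree (illustration only; not aws-cdk-lib)
interface TreeNode {
    id: string;
    children: TreeNode[];
}

interface Aspect {
    visit(node: TreeNode): void;
}

// Visit the parent first, then recurse into its children,
// mirroring CDK's top-down order during the prepare phase
function applyAspects(node: TreeNode, aspects: Aspect[]): void {
    for (const aspect of aspects) {
        aspect.visit(node);
    }
    for (const child of node.children) {
        applyAspects(child, aspects);
    }
}

// A read-only aspect that records the order in which nodes are visited
const visited: string[] = [];
const recorder: Aspect = {
    visit: (node) =&gt; {
        visited.push(node.id);
    },
};

const app: TreeNode = {
    id: "App",
    children: [{ id: "Stack", children: [{ id: "MyBucket", children: [] }] }],
};

applyAspects(app, [recorder]);
// visited is now ["App", "Stack", "MyBucket"]
&lt;/code&gt;&lt;/pre&gt; 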
&lt;h2&gt;Types of CDK Aspects&lt;/h2&gt; 
&lt;p&gt;AWS CDK distinguishes between two types of Aspects based on how they interact with your infrastructure: those that modify resources (mutating) and those that only inspect them (read-only).&lt;/p&gt; 
&lt;h3&gt;Mutating Aspects&lt;/h3&gt; 
&lt;p&gt;Mutating Aspects modify resources automatically (like adding encryption or logging). These change the properties of resources as they traverse your constructs. They are ideal for enforcing compliance and best practices like:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Enforcing &lt;a href="https://aws.amazon.com/kms/"&gt;AWS Key Management Service (AWS KMS)&lt;/a&gt; encryption on S3 buckets&lt;/li&gt; 
 &lt;li&gt;Setting default timeouts on &lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt; functions&lt;/li&gt; 
 &lt;li&gt;Changing RemovalPolicy in test stacks&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Mutating Aspects are powerful, but overusing them can introduce unintended changes, because they modify resources without explicit developer action. Always make sure you know what changes an Aspect makes, apply it to production resources with caution, and use logging or annotations to help you understand and debug those changes.&lt;/p&gt; 
&lt;p&gt;For example, the following aspect sets the default timeout on all Lambda functions to 300 seconds:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;class SetDefaultTimeouts implements IAspect {
    visit(node: IConstruct) {
        if (node instanceof lambda.Function) {
            node.timeout = 300; // mutates the resource
        }
    }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Read-only Aspects&lt;/h3&gt; 
&lt;p&gt;Read-only Aspects only inspect resources and report findings without modifying them. They’re ideal for compliance checks, tagging audits, and policy validation.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;class RequireTags implements IAspect {
    visit(node: IConstruct) {
        const tags = Tags.of(node);
        if (!tags.hasTag("project_budget_number")) {
            Annotations.of(node).addWarning(
                "Missing required tag: project_budget_number",
            );
        }
    }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;At GoDaddy, we use mutating Aspects to enforce compliance automatically, reducing manual work and ensuring stacks are compliant with our standards by default. We add read-only Aspects for stricter audits where mutation isn’t appropriate.&lt;/p&gt; 
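&lt;p&gt;The difference between the two styles can be illustrated with a small, self-contained TypeScript model. This is a sketch only: the &lt;code&gt;Resource&lt;/code&gt; type and the aspect classes below are simplified stand-ins, not aws-cdk-lib APIs.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;// Simplified stand-in for a resource (illustration only; not aws-cdk-lib)
interface Resource {
    type: string;
    encryption?: string;
    tags: { [key: string]: string };
}

interface ResourceAspect {
    visit(resource: Resource): void;
}

// Mutating aspect: rewrites the resource in place
class EnforceEncryption implements ResourceAspect {
    visit(resource: Resource): void {
        if (resource.type === "bucket" &amp;&amp; !resource.encryption) {
            resource.encryption = "aws:kms";
        }
    }
}

// Read-only aspect: records a finding instead of changing anything
class AuditBudgetTag implements ResourceAspect {
    readonly warnings: string[] = [];
    visit(resource: Resource): void {
        if (!("project_budget_number" in resource.tags)) {
            this.warnings.push(resource.type + ": missing project_budget_number");
        }
    }
}

const bucket: Resource = { type: "bucket", tags: {} };
const audit = new AuditBudgetTag();
for (const aspect of [new EnforceEncryption(), audit]) {
    aspect.visit(bucket);
}
// bucket.encryption is now "aws:kms" and audit.warnings holds one finding
&lt;/code&gt;&lt;/pre&gt; 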
&lt;h2&gt;Common use cases for CDK Aspects&lt;/h2&gt; 
&lt;p&gt;When building cloud infrastructure at scale, enforcing consistency across stacks can be an ongoing challenge. AWS CDK Aspects let you automatically enforce security, compliance, and operational standards across your entire infrastructure.&lt;/p&gt; 
&lt;p&gt;Below are some of the most impactful use cases where Aspects can save you time, reduce risk, and improve governance.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Security &amp;amp; compliance&lt;/strong&gt;: Security and compliance go hand in hand, and Aspects are a powerful way to enforce both before resources ever reach AWS. With Aspects, you can: 
  &lt;ul&gt; 
   &lt;li&gt;Enforce encryption on S3 buckets, &lt;a href="https://aws.amazon.com/rds/"&gt;Amazon Relational Database Service (Amazon RDS)&lt;/a&gt; databases, and &lt;a href="https://aws.amazon.com/ebs/"&gt;Amazon Elastic Block Store (Amazon EBS)&lt;/a&gt; volumes&lt;/li&gt; 
   &lt;li&gt;Require versioning on S3 buckets&lt;/li&gt; 
   &lt;li&gt;Flag wildcard * permissions in &lt;a href="https://aws.amazon.com/iam/"&gt;Identity and Access Management (IAM)&lt;/a&gt; policies&lt;/li&gt; 
   &lt;li&gt;Validate that required tags (like cost allocation tags) are present&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;IAM policies&lt;/strong&gt;: Aspects make it trivial to apply the same permissions boundary to every IAM role, consistently across all stacks.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Tagging enforcement&lt;/strong&gt;: Tags are the backbone of cost allocation, compliance, and automation. Yet, they’re easy to forget. With Aspects, you can enforce required tags across every resource.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Operational best practices&lt;/strong&gt;: Use Aspects to enforce sensible defaults and operational hygiene: 
  &lt;ul&gt; 
   &lt;li&gt;Set Lambda function timeouts and memory sizes&lt;/li&gt; 
   &lt;li&gt;Ensure logging is enabled for S3, &lt;a href="https://aws.amazon.com/api-gateway/"&gt;Amazon API Gateway (API Gateway)&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/cloudtrail/"&gt;AWS CloudTrail (CloudTrail)&lt;/a&gt;&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Testing made easier&lt;/strong&gt;: You can use Aspects to override the default RemovalPolicy for test stacks to DESTROY, ensuring everything is cleaned up automatically.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Working around 3rd-party constructs&lt;/strong&gt;: Sometimes external constructs create resources you can’t fully configure, like an S3 bucket without encryption or logging options. Instead of waiting on maintainers or forking the library, you can apply an Aspect to modify those resources directly.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Network policies&lt;/strong&gt;: Networking issues often lead to security incidents. With Aspects, you can: 
  &lt;ul&gt; 
   &lt;li&gt;Validate that resources are deployed in approved &lt;a href="https://aws.amazon.com/vpc/"&gt;Amazon Virtual Private Cloud (Amazon VPC)&lt;/a&gt; VPCs or subnets&lt;/li&gt; 
   &lt;li&gt;Prevent accidental public IP assignments&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Cost control&lt;/strong&gt;: Budgets matter. Aspects can help by: 
  &lt;ul&gt; 
   &lt;li&gt;Flagging high-cost instance types before deployment&lt;/li&gt; 
   &lt;li&gt;Limiting use of expensive storage classes&lt;/li&gt; 
   &lt;li&gt;Warning when provisioned throughput exceeds thresholds&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;You can combine these use cases into reusable, testable policies that run consistently across all CDK stacks.&lt;/p&gt; 
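&lt;p&gt;As a sketch of what such a reusable, testable policy might look like, the following self-contained TypeScript composes several simple checks into one function. The &lt;code&gt;Check&lt;/code&gt; type and the individual checks are illustrative assumptions, not aws-cdk-lib APIs.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;// Toy "policy pack": compose simple checks into one reusable audit function
type Check = (resource: { [key: string]: unknown }) =&gt; string | null;

function policyPack(checks: Check[]) {
    return (resource: { [key: string]: unknown }): string[] =&gt;
        checks
            .map((check) =&gt; check(resource))
            .filter((finding): finding is string =&gt; finding !== null);
}

// Two illustrative checks; each returns a finding or null
const requireEncryption: Check = (resource) =&gt;
    resource["encryption"] ? null : "missing encryption";
const requireBudgetTag: Check = (resource) =&gt;
    resource["project_budget_number"] ? null : "missing project_budget_number";

const audit = policyPack([requireEncryption, requireBudgetTag]);
// audit({}) returns both findings; a fully configured resource returns []
&lt;/code&gt;&lt;/pre&gt; 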
&lt;h2&gt;CDK Aspects in action at GoDaddy&lt;/h2&gt; 
&lt;p&gt;At GoDaddy, we define CDK Aspects and distribute them through a wrapper Stack that development teams use when building infrastructure with CDK. Every template a team creates is automatically made compliant with GoDaddy’s cloud compliance rules, without requiring manual updates or fixes.&lt;/p&gt; 
&lt;p&gt;As an example, some of the Aspects are:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;S3BucketAspect&lt;/strong&gt; – Enforces encryption, logging, and public access block on S3 buckets.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;IAMRoleAspect&lt;/strong&gt; – Flags wildcard permissions and enforces naming conventions.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;LambdaFunctionAspect&lt;/strong&gt; – Validates timeouts, memory limits, and Amazon VPC configurations.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;These Aspects enforce security, operational, and tagging standards at the code level. When you use the wrapper stack, the relevant Aspects are applied automatically, injecting the necessary properties into the CloudFormation template before deployment. This is intended to support compliance without slowing down development.&lt;/p&gt; 
&lt;p&gt;Each of these aspects implements the IAspect interface and is applied to stacks directly:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;Aspects.of(myStack).add(new S3BucketAspect());
Aspects.of(myStack).add(new IAMRoleAspect());
Aspects.of(myStack).add(new LambdaFunctionAspect());&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This approach means that when a developer defines a new resource, like an S3 bucket or IAM role, all relevant compliance rules are automatically applied:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;const bucket = new s3.Bucket(myStack, "MyBucket", {
    // developer doesn't need to manually configure encryption or logging
});
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;During the CDK &lt;strong&gt;preparation&lt;/strong&gt; phase, the aspect injects the required properties into the CloudFormation template, which is done before the template is synthesized and deployed. This ensures that the resources are deployed with the required properties at GoDaddy’s standards.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-yaml"&gt;# s3-bucket.yaml - generated by `cdk synth`
Resources:
    MyBucket:
        Type: AWS::S3::Bucket
        Properties:
            BucketEncryption:
                ServerSideEncryptionConfiguration:
                    - ServerSideEncryptionByDefault:
                          SSEAlgorithm: AES256
            PublicAccessBlockConfiguration:
                BlockPublicAcls: true
                BlockPublicPolicy: true
                IgnorePublicAcls: true
                RestrictPublicBuckets: true
            LoggingConfiguration:
                DestinationBucketName: logging-bucket
                LogFilePrefix: logs/
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Consider a team that deploys hundreds of S3 buckets per month, where each bucket requires encryption, logging, versioning, and public access block. With CDK Aspects, you don’t have to manually update the CloudFormation template to add the required properties.&lt;/p&gt; 
&lt;p&gt;This saves significant engineering effort and time, and ensures that every bucket is deployed with the properties that meet GoDaddy’s standards.&lt;/p&gt; 
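&lt;p&gt;The wrapper pattern itself can be sketched in a few lines. The class below is a toy model, not GoDaddy’s actual implementation; a real wrapper would extend &lt;code&gt;cdk.Stack&lt;/code&gt; and call &lt;code&gt;Aspects.of(this).add(...)&lt;/code&gt; in its constructor so every team inherits the governance aspects by default.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;// Toy model of a governance wrapper (illustration only; a real wrapper
// would extend cdk.Stack and register aspects with Aspects.of(this).add())
interface StackAspect {
    name: string;
}

class S3BucketGovernance implements StackAspect {
    name = "S3BucketGovernance";
}

class IAMRoleGovernance implements StackAspect {
    name = "IAMRoleGovernance";
}

class GovernedStack {
    readonly aspects: StackAspect[] = [];
    constructor() {
        // Registered once here, so teams never have to remember to add them
        this.aspects.push(new S3BucketGovernance(), new IAMRoleGovernance());
    }
}

const stack = new GovernedStack();
// stack.aspects now holds both governance aspects automatically
&lt;/code&gt;&lt;/pre&gt; 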
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;At GoDaddy, CDK Aspects have significantly transformed our approach to cloud infrastructure compliance.&lt;/p&gt; 
&lt;p&gt;Because Aspects run during the CDK preparation phase, before synthesis and deployment, they inject required properties like encryption, logging, and access controls into CloudFormation templates automatically. Developers write their CDK code as usual, and compliance happens behind the scenes. This eliminates the manual effort of configuring each resource to meet security and compliance standards.&lt;/p&gt; 
&lt;p&gt;This proactive approach has also improved developer productivity. Instead of discovering compliance failures after attempting deployment (as was the case with CloudFormation Hooks alone), developers now get compliant templates on the first synthesis. Fewer failed deployments mean faster iteration cycles and less time spent debugging configuration issues.&lt;/p&gt; 
&lt;p&gt;Policy enforcement is consistent because every team uses the same wrapper Stack with the same Aspects applied. Whether a team deploys an S3 bucket, an IAM role, or a Lambda function, the same tagging, encryption, network isolation, and cost control rules are applied uniformly.&lt;/p&gt; 
&lt;p&gt;Finally, this model scales. Adding a new compliance rule means updating a single Aspect in the shared library, and every stack that uses the wrapper inherits the change automatically. This lets our Cloud Governance team enforce standards across thousands of accounts with minimal operational overhead.&lt;/p&gt; 
&lt;p&gt;If you’re working with AWS CDK at scale, adopting CDK Aspects isn’t just a nice-to-have; it’s essential. Your future self and your security team will thank you.&lt;/p&gt; 
&lt;p&gt;To get started, pick one compliance rule, like S3 encryption, and implement it as a mutating Aspect. Once you see how it works, expand to cover tagging and IAM policies. From there, package your Aspects into a shared library that teams across your organization can adopt.&lt;/p&gt; 
&lt;h2&gt;References&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt; – Basics of AWS Cloud Development Kit&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/constructs.html"&gt;AWS CDK Constructs&lt;/a&gt; – Learn to build your own Constructs&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/aspects.html"&gt;AWS CDK Aspects&lt;/a&gt; – How to use AWS CDK Aspects&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://refactoring.guru/design-patterns/visitor"&gt;Visitor Design Pattern&lt;/a&gt; – More about Visitor Design Pattern&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this blog.&lt;/p&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/30/jasdeep.jpeg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Jasdeep Singh Bhalla&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;is a Senior Software Engineer at GoDaddy, focused on scaling Cloud Infrastructure governance and automation across thousands of AWS accounts. He builds frameworks, APIs, and tools that make it easy for developers to follow best practices in AWS, while ensuring every resource is secure, compliant, and production-ready by default.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/10/07/Cropped_Image.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Juan Pablo Melgarejo Zamora&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;is a Sr. Solutions Architect at Amazon Web Services, where he helps customers architect and optimize their cloud solutions. Throughout his career, he has built expertise across Data Engineering, High Performance Computing (HPC), and DevOps practices. He leverages this diverse technical background to provide comprehensive solutions for AWS customers. Outside of work, he enjoys cooking and traveling.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Securely connect AWS DevOps Agent to private services in your VPCs</title>
		<link>https://aws.amazon.com/blogs/devops/securely-connect-aws-devops-agent-to-private-services-in-your-vpcs/</link>
		
		<dc:creator><![CDATA[Alexandra Huides]]></dc:creator>
		<pubDate>Wed, 01 Apr 2026 15:37:03 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">c9a1eecefe2a3f347e4905b1e4a936e396a036ae</guid>

					<description>AWS DevOps Agent is your always-available operations teammate that resolves and proactively prevents incidents, optimizes application reliability and performance, and handles on-demand SRE tasks across AWS, multicloud, and on-premises environments. It integrates with your existing observability tools to correlate telemetry, code, and deployment data to reduce Mean Time To Repair (MTTR) and drive operational excellence. […]</description>
										<content:encoded>&lt;p&gt;&lt;a href="https://aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent&lt;/a&gt; is your always-available operations teammate that resolves and proactively prevents incidents, optimizes application reliability and performance, and handles on-demand SRE tasks across AWS, multicloud, and on-premises environments. It integrates with your existing observability tools to correlate telemetry, code, and deployment data to reduce Mean Time To Repair (MTTR) and drive operational excellence.&lt;/p&gt; 
&lt;p&gt;Many organizations extend AWS DevOps Agent with custom &lt;a href="https://modelcontextprotocol.io/"&gt;Model Context Protocol (MCP)&lt;/a&gt; tools and other integrations that give the agent access to internal systems such as private package registries, self-hosted observability platforms, internal documentation APIs, and source control instances like GitHub Enterprise and GitLab. These services often run inside an &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/"&gt;Amazon Virtual Private Cloud (Amazon VPC)&lt;/a&gt; with no public internet access, which means AWS DevOps Agent can’t reach them by default.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-to-privately-hosted-tools.html"&gt;Private connections for AWS DevOps Agent&lt;/a&gt; let you securely connect your Agent Space to services running in your VPC, or internal network, without exposing them to the public internet. Private connections work with any integration that needs to reach a private endpoint, including MCP servers, self-hosted Grafana or Splunk instances, and source control systems.&lt;/p&gt; 
&lt;p&gt;In this post, we dive into how private connections work, what makes them secure, and how to set one up using the &lt;a href="https://aws.amazon.com/console/"&gt;AWS Management Console&lt;/a&gt; and the &lt;a href="https://aws.amazon.com/cli/"&gt;AWS Command Line Interface (AWS CLI)&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;How private connections work&lt;/h2&gt; 
&lt;p&gt;A private connection creates a secure network path between AWS DevOps Agent and a target resource in your VPC. Under the hood, AWS DevOps Agent uses &lt;a href="https://docs.aws.amazon.com/vpc-lattice/latest/ug/"&gt;Amazon VPC Lattice&lt;/a&gt; to establish this secure and private connectivity path. VPC Lattice is an application networking service that lets you connect, secure, and monitor communication between applications across VPCs, accounts, and compute types, without managing the underlying network infrastructure.&lt;/p&gt; 
&lt;p&gt;When you create a private connection, the following occurs:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;You provide the VPC, subnets, and (optionally) security groups that have network connectivity to your target service.&lt;/li&gt; 
 &lt;li&gt;AWS DevOps Agent creates a service-managed &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/resource-gateway.html"&gt;resource gateway&lt;/a&gt; and provisions its elastic network interfaces (ENIs) in the subnets you specified.&lt;/li&gt; 
 &lt;li&gt;The agent uses the resource gateway to route traffic to your target service’s IP address or DNS name over the private network path.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;The resource gateway is fully managed by AWS DevOps Agent and appears as a read-only resource in your account. You don’t need to configure or maintain it. The only resources created in your VPC are ENIs in the subnets you specify. These ENIs serve as the entry point for private traffic and are managed entirely by the service. They don’t accept inbound connections from the internet, and you retain full control over their traffic through your own security groups.&lt;/p&gt; 
&lt;h2&gt;Security&lt;/h2&gt; 
&lt;p&gt;Private connections are designed with multiple layers of security:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;No public internet exposure&lt;/strong&gt;: All traffic between AWS DevOps Agent and your target service stays on the AWS network. Your service never needs a public IP address or internet gateway.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Service-controlled resource gateway&lt;/strong&gt;: The service-managed resource gateway is read-only in your account. It can only be used by AWS DevOps Agent, and no other service or principal can route traffic through it. You can verify this in &lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/"&gt;AWS CloudTrail&lt;/a&gt; logs, which record all VPC Lattice API calls.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Your security groups, your rules&lt;/strong&gt;: You control outbound traffic from the ENIs through security groups that you own and manage. Traffic from the DevOps Agent is subject to the outbound rules of the security group you associate with the private connection resource gateway ENIs, and the inbound rules of your target.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Service-linked roles with least privilege&lt;/strong&gt;: AWS DevOps Agent uses a &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html"&gt;service-linked role&lt;/a&gt; to manage the resource gateway. This role is scoped to resources tagged with AWSAIDevOpsManaged and cannot access any other resources in your account.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DNS support&lt;/strong&gt;: You can reference your services by their DNS names. Note that the DNS names need to be publicly resolvable.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;If you plan on managing your own resource configurations, make sure your organization allows VPC Lattice actions in &lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html"&gt;service control policies (SCPs)&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Architecture&lt;/h2&gt; 
&lt;p&gt;The following diagram (Figure 1) shows the network path for a private connection:&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-1-2.png"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25183" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-1-2.png" alt="" width="3949" height="1515"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure 1: AWS DevOps Agent private connections&lt;/p&gt; 
&lt;p&gt;In this architecture:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;AWS DevOps Agent initiates a request to your target service.&lt;/li&gt; 
 &lt;li&gt;The request is routed through the service-managed resource gateway.&lt;/li&gt; 
 &lt;li&gt;An ENI in your VPC receives the traffic and forwards it to your target service’s IP address or DNS name.&lt;/li&gt; 
 &lt;li&gt;Your security groups govern which traffic is allowed through the ENIs.&lt;/li&gt; 
 &lt;li&gt;From your target service’s perspective, the request originates from private IP addresses of ENIs within your VPC.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;Before creating a private connection, verify that you have the following:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;An active Agent Space: You need an existing Agent Space in your account. If you don’t have one, see &lt;a href="https://docs.aws.amazon.com/devops-agent/latest/userguide/getting-started.html"&gt;Getting started with AWS DevOps Agent&lt;/a&gt;.&lt;/li&gt; 
 &lt;li&gt;A privately reachable target service: Your MCP server, observability platform, or other service must be reachable at a known private IP address or DNS name from the VPC where the resource gateway is deployed. The service can run in the same VPC, a peered VPC, or on-premises, as long as it’s routable from the resource gateway subnets. The service must be listening on a TCP port that you can specify when creating the connection.&lt;/li&gt; 
 &lt;li&gt;Subnets in your VPC: Identify one subnet per Availability Zone where the resource gateway ENIs will be created. We recommend selecting subnets in multiple Availability Zones for high availability. These subnets must have network connectivity to your target service.&lt;/li&gt; 
 &lt;li&gt;(Optional) Security groups: If you want to control traffic with specific rules, prepare up to five security group IDs to attach to the ENIs. If you omit security groups, AWS DevOps Agent uses the VPC default security group.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Private connections are account-level resources. After you create a private connection, you can reuse it across multiple integrations and Agent Spaces that need to reach the same host.&lt;/p&gt; 
&lt;h2&gt;Create a private connection&lt;/h2&gt; 
&lt;p&gt;You can create a private connection using the AWS Management Console or the AWS CLI.&lt;/p&gt; 
&lt;h3&gt;&lt;strong&gt;Using the AWS Console&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Open the &lt;a href="https://console.aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent console&lt;/a&gt;. In the navigation pane, choose &lt;strong&gt;Capability providers&lt;/strong&gt;:&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-2.png"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25190" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-2.png" alt="" width="1585" height="888"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure 2: DevOps Agent Capability Providers&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Select &lt;strong&gt;Private connections&lt;/strong&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-3.png"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25189" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-3.png" alt="" width="2055" height="968"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure 3: DevOps Agent private connections&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Choose &lt;strong&gt;Create a new connection&lt;/strong&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-4.png"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25188" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-4.png" alt="" width="1939" height="971"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure-4: DevOps agent – Create a new connection&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Configure the private connection:&lt;/p&gt; 
&lt;p&gt;For Connection details:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Name&lt;/strong&gt;: Enter a descriptive name for the connection&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;For Resource location:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;VPC&lt;/strong&gt; &lt;strong&gt;where your resource is located&lt;/strong&gt;: Select the VPC where your resource is located, or a VPC that has access to your resource&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Subnets&lt;/strong&gt;: Select one subnet per Availability Zone in your VPC to host the managed resource gateway ENIs.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;IP address type&lt;/strong&gt;: IPv4, IPv6, or dual stack, representing the IP address family your private connection should use. I selected dual stack.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-5.png"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25186" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-5.png" alt="" width="1925" height="978"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure-5: DevOps agent – Configure private connection&lt;/p&gt; 
&lt;p&gt;For Access control:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Security groups&lt;/strong&gt;: Select the security group that DevOps Agent uses to reach your private resources.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;(Optional) TCP port ranges&lt;/strong&gt;: Restrict the port ranges DevOps Agent can use to connect to your resources. If you don’t specify this, all ports are allowed.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;For Service target details:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Host address&lt;/strong&gt;: The DNS name or IP address of your target resource or service. I used &lt;code&gt;mymcpserver.test.skipv5.net&lt;/code&gt;. The service IPs must be reachable from the selected VPC. If you choose a DNS name, it must be publicly resolvable.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;(Optional) Certificate public key&lt;/strong&gt;: The certificate public key DevOps Agent uses to securely connect to your target.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-6.png"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25187" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-6.png" alt="" width="1926" height="1060"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure-6: DevOps agent – Configure private connection (continued)&lt;/p&gt; 
&lt;p&gt;Select &lt;strong&gt;Create Connection&lt;/strong&gt;.&lt;/p&gt; 
&lt;p&gt;The connection status changes to &lt;strong&gt;Create in progress&lt;/strong&gt;. This process can take up to 10 minutes. When the status changes to &lt;strong&gt;Completed&lt;/strong&gt;, the network path is ready.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-7.png"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25185" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-7.png" alt="" width="1941" height="568"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure-7: Private connection successfully created&lt;/p&gt; 
&lt;p&gt;If the status changes to &lt;strong&gt;Create failed&lt;/strong&gt;, verify that the subnets you specified have available IP addresses, that your account has not reached VPC Lattice service quotas, and that no restrictive IAM policies are preventing the service-linked role from creating resources.&lt;/p&gt; 
&lt;h3&gt;&lt;strong&gt;Using the AWS CLI&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;Run the following command to create a private connection. Replace values with your own.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws devops-agent create-private-connection \
    --name my-test-private-connection \
    --mode '{
        "serviceManaged": {
            "hostAddress": "mymcpserver.test.skipv5.net",
            "resourceGatewayConfig": {
                "create": {
                    "vpcId": "vpc-00ef99bef2632b9ac",
                    "subnetIds": [
                        "subnet-034f636837473de13",
                        "subnet-00bdfb9edf7cc1ca7"
                    ],
                    "securityGroupIds": [
                        "sg-082788aaec0517905"
                    ]
                }
            }
        }
    }'&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The response includes the connection name and a status of CREATE_IN_PROGRESS:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;{
    "name": "my-test-private-connection",
    "status": "CREATE_IN_PROGRESS",
    "resourceGatewayId": "rgw-0f7415325b107a945",
    "hostAddress": "mymcpserver.test.skipv5.net",
    "vpcId": "vpc-00ef99bef2632b9ac"
}
&lt;/code&gt;&lt;/pre&gt; 
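If you script connection creation, the `--mode` payload shown above can be assembled programmatically instead of hand-editing JSON. A minimal sketch, reusing the placeholder IDs from the example (the helper name is mine, not part of the CLI):

```python
import json


def build_service_managed_mode(host_address, vpc_id, subnet_ids, security_group_ids=None):
    """Assemble the --mode JSON for `aws devops-agent create-private-connection`."""
    create_cfg = {"vpcId": vpc_id, "subnetIds": list(subnet_ids)}
    if security_group_ids:
        # Optional: when omitted, the service falls back to the VPC default security group.
        create_cfg["securityGroupIds"] = list(security_group_ids)
    return json.dumps(
        {
            "serviceManaged": {
                "hostAddress": host_address,
                "resourceGatewayConfig": {"create": create_cfg},
            }
        },
        indent=4,
    )


print(
    build_service_managed_mode(
        "mymcpserver.test.skipv5.net",
        "vpc-00ef99bef2632b9ac",
        ["subnet-034f636837473de13", "subnet-00bdfb9edf7cc1ca7"],
        ["sg-082788aaec0517905"],
    )
)
```

You can then pass the resulting string as the `--mode` argument from your deployment tooling.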
&lt;p&gt;To check the connection status, use the &lt;code&gt;describe-private-connection&lt;/code&gt; command:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws devops-agent describe-private-connection \
    --name my-test-private-connection&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;When the status is &lt;strong&gt;Completed&lt;/strong&gt;, your private connection is ready to use.&lt;/p&gt; 
&lt;h2&gt;Verify the connection&lt;/h2&gt; 
&lt;p&gt;After the private connection reaches the Completed state, verify that AWS DevOps Agent can reach your target service:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Open the &lt;a href="https://console.aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent console&lt;/a&gt; and navigate to your Agent Space.&lt;/li&gt; 
 &lt;li&gt;Start a new chat session.&lt;/li&gt; 
 &lt;li&gt;Invoke a command that uses the integration backed by your private connection. For example, if your MCP tool provides access to an internal knowledge base, ask the agent a question that requires that knowledge base.&lt;/li&gt; 
 &lt;li&gt;Confirm that the agent returns results from the private service.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;If the connection fails, verify the following:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Security group rules&lt;/strong&gt;: Verify that the security groups attached to the ENIs allow outbound traffic on the port your service listens on. Also verify that your service’s security group allows inbound traffic from the ENI security group.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Subnet connectivity&lt;/strong&gt;: Verify that the subnets you selected have reachability to your service. If the service runs in a different subnet, confirm that the route tables allow traffic between them.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Service availability&lt;/strong&gt;: Confirm that your service is running and accepting connections on the expected port.&lt;/li&gt; 
&lt;/ul&gt; 
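Before digging into security group rules, it can help to confirm basic TCP reachability to the target from a host that shares the same network path (for example, a test instance in the resource gateway subnets). A minimal preflight sketch using only the Python standard library; the host and port below are placeholders:

```python
import socket


def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Placeholder target from this post; run from a host with the same routes as the ENIs.
# is_reachable("mymcpserver.test.skipv5.net", 443)
```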
&lt;h2&gt;Example: Connect to a self-hosted Grafana instance&lt;/h2&gt; 
&lt;p&gt;One of the most common use cases for private connections is connecting AWS DevOps Agent to a self-hosted Grafana instance. Many teams run Grafana inside a VPC with no public endpoint to visualize metrics, logs, and traces. With the built-in Grafana integration and a private connection, you can give the agent read-only access to your dashboards, alerts, and data sources, the same observability data your on-call engineers rely on during incidents.&lt;/p&gt; 
&lt;p&gt;AWS DevOps Agent provides a dedicated Grafana integration that hosts the &lt;a href="https://github.com/grafana/mcp-grafana"&gt;official open-source Grafana MCP server&lt;/a&gt; on your behalf, so you don’t need to deploy or manage any MCP server infrastructure yourself. The integration supports Grafana Cloud, Grafana Enterprise, and self-hosted Grafana OSS (v9.1 and later).&lt;/p&gt; 
&lt;p&gt;For self-hosted instances that aren’t publicly accessible, pair the Grafana integration with a private connection so the agent can reach your instance over the private network.&lt;/p&gt; 
&lt;h3&gt;&lt;strong&gt;Step 1: Create a Grafana service account&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;In your Grafana instance, create a &lt;a href="https://grafana.com/docs/grafana/latest/administration/service-accounts/"&gt;service account&lt;/a&gt; with &lt;strong&gt;Viewer&lt;/strong&gt; role permissions. This gives AWS DevOps Agent read-only access to your dashboards, alert rules, and data sources. Generate an access token for the service account and save it for the next step.&lt;/p&gt; 
&lt;h3&gt;&lt;strong&gt;Step 2: Create a private connection to your Grafana instance&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;Because your Grafana instance runs in a private subnet, create a private connection first so that AWS DevOps Agent can reach it.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Using the AWS Console:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Follow the steps in the Create a private connection section of this post, using your Grafana instance’s address as the host address.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Using the AWS CLI:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws devops-agent create-private-connection \
    --name grafana-connection \
    --mode '{
        "serviceManaged": {
            "hostAddress": "grafana.internal.example.com",
            "resourceGatewayConfig": {
                "create": {
                    "vpcId": "vpc-0123456789abcdef0",
                    "subnetIds": [
                        "subnet-0123456789abcdef0",
                        "subnet-0123456789abcdef1"
                    ],
                    "portRanges": ["443"]
                }
            }
        }
    }'
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Wait for the connection status to reach the &lt;strong&gt;Completed&lt;/strong&gt; state before proceeding.&lt;/p&gt; 
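The `portRanges` field in the command above restricts which ports the agent may use; as noted in the console walkthrough, omitting it allows all ports. A toy helper illustrating that semantics (the `"low-high"` range form is my assumption; this post only shows single ports such as `"443"`):

```python
def port_allowed(port: int, port_ranges=None) -> bool:
    """Check a port against portRanges entries like "443" or "1024-2048".

    An empty or missing port_ranges means all ports are allowed, matching the
    documented default when you don't restrict ports.
    """
    if not port_ranges:
        return True
    for entry in port_ranges:
        if "-" in entry:
            low, high = (int(p) for p in entry.split("-", 1))
            if low <= port <= high:
                return True
        elif int(entry) == port:
            return True
    return False
```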
&lt;h3&gt;&lt;strong&gt;Step 3: Add the Grafana integration to your Agent Space&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;Register and associate the Grafana service so the agent knows how to connect.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Using the AWS Console:&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Open the &lt;a href="https://console.aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent console&lt;/a&gt; and navigate to your Agent Space.&lt;/li&gt; 
 &lt;li&gt;In the &lt;strong&gt;Integrations&lt;/strong&gt; section, choose &lt;strong&gt;Add integration&lt;/strong&gt;.&lt;/li&gt; 
 &lt;li&gt;Select the &lt;strong&gt;Grafana&lt;/strong&gt; tile.&lt;/li&gt; 
 &lt;li&gt;Enter your Grafana instance URL (for example, https://grafana.internal.example.com).&lt;/li&gt; 
 &lt;li&gt;Paste the service account access token you generated in Step 1.&lt;/li&gt; 
 &lt;li&gt;Choose &lt;strong&gt;Save&lt;/strong&gt;.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;Using the AWS CLI:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;First, register the Grafana service with your instance URL and service account token:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws devops-agent register-service \
    --service mcpservergrafana \
    --private-connection-name grafana-connection \
    --service-details '{
        "mcpservergrafana": {
            "name": "grafana",
            "endpoint": "https://grafana.internal.example.com",
            "authorizationConfig": {
                "bearerToken": {
                    "tokenName": "grafana-sa-token",
                    "tokenValue": "&amp;lt;SERVICE_ACCOUNT_TOKEN&amp;gt;"
                }
            }
        }
    }' \
    --region us-east-1
&lt;/code&gt;&lt;/pre&gt; 
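Hard-coding the service account token in a shell command leaves it in your shell history; one option is to assemble the `--service-details` JSON in a script that reads the token from an environment variable. A sketch mirroring the structure in the command above (`GRAFANA_SA_TOKEN` is a variable name I chose, not one the service requires):

```python
import json
import os


def build_grafana_service_details(endpoint: str, token_env: str = "GRAFANA_SA_TOKEN") -> str:
    """Build the --service-details JSON, pulling the bearer token from the environment."""
    token = os.environ[token_env]  # raises KeyError if the variable is unset
    return json.dumps(
        {
            "mcpservergrafana": {
                "name": "grafana",
                "endpoint": endpoint,
                "authorizationConfig": {
                    "bearerToken": {
                        "tokenName": "grafana-sa-token",
                        "tokenValue": token,
                    }
                },
            }
        }
    )
```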
&lt;p&gt;Save the returned &lt;strong&gt;serviceId&lt;/strong&gt;, then associate the Grafana service with your Agent Space:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws devops-agent associate-service \
    --agent-space-id &amp;lt;AGENT_SPACE_ID&amp;gt; \
    --service-id &amp;lt;SERVICE_ID&amp;gt; \
    --configuration '{
        "mcpservergrafana": {
            "endpoint": "https://grafana.internal.example.com"
        }
    }' \
    --region us-east-1
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The response includes webhook information that you can use in Step 4 to configure alert notifications. The agent routes traffic to your Grafana instance through the private connection you created in Step 2, matching on the host address.&lt;/p&gt; 
&lt;p&gt;To verify the integration, start a chat in your Agent Space and ask the agent something like “Summarize my recent Grafana alerts.” If the agent returns alert data from your dashboards, the private connection and Grafana integration are both working.&lt;/p&gt; 
&lt;h3&gt;&lt;strong&gt;Step 4: Enable Grafana webhook notifications (optional)&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;To let AWS DevOps Agent automatically start investigations when Grafana fires an alert, configure a webhook contact point in your Grafana instance. Point the webhook to your Agent Space’s webhook URL and provide the authentication secret from the integration settings. When an alert triggers, Grafana sends a notification to the agent, which begins an investigation using your Grafana data alongside the other resources in your Agent Space.&lt;/p&gt; 
&lt;p&gt;For details on the AWS DevOps Agent Grafana integration, and details on how you can also use this integration with &lt;a href="https://aws.amazon.com/grafana/"&gt;AWS Managed Grafana (AMG)&lt;/a&gt;, check the &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-telemetry-sources-connecting-grafana.html"&gt;AWS DevOps Agent Grafana integration documentation&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;How DNS resolution and host routing work&lt;/h2&gt; 
&lt;p&gt;When you create a private connection, the host address you provide (for example, &lt;code&gt;my-alb.us-east-1.elb.amazonaws.com&lt;/code&gt;) is the DNS name that VPC Lattice resolves to route traffic to your target. This DNS name must be publicly resolvable, even if it resolves to private IP addresses. When you register a service integration and specify an endpoint URL (for example, &lt;code&gt;https://my-grafana.internal.corp&lt;/code&gt;), that URL is used for the Host header and Server Name Indication (SNI) on the TLS connection; it is not used for DNS resolution. This separation means you can point multiple service integrations at the same private connection (and therefore the same target, such as an Application Load Balancer), each with a different endpoint hostname. The target can then use the Host header to route requests to different backends.&lt;/p&gt; 
&lt;p&gt;For example, you could create a single private connection with your ALB’s DNS name as the host address, and register separate Grafana and MCP server integrations, each with their own endpoint URL. The ALB receives both through the same private connection and uses host-based routing rules to direct each request to the correct target group.&lt;/p&gt; 
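The ALB fan-out described above amounts to a routing table keyed on the Host header. A toy sketch of that design (the hostnames and target group names are illustrative only):

```python
# One private connection (to the ALB) carries several integrations; the ALB's
# host-based rules distinguish them by the Host header each integration sends.
HOST_RULES = {
    "grafana.internal.example.com": "grafana-target-group",
    "mcp.internal.example.com": "mcp-target-group",
}


def route(host_header: str, default: str = "default-target-group") -> str:
    """Return the target group a host-based routing rule would select."""
    return HOST_RULES.get(host_header.lower(), default)
```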
&lt;h2&gt;Advanced setup using existing VPC Lattice resources&lt;/h2&gt; 
&lt;p&gt;If your organization already uses Amazon VPC Lattice and manages your own resource configurations, you can create a private connection in self-managed mode. Instead of having AWS DevOps Agent create a resource gateway for you, you provide the Amazon Resource Name (ARN) of an existing resource configuration that points to your target service.&lt;/p&gt; 
&lt;p&gt;This approach is useful when you:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Want full control over the resource gateway and resource configuration lifecycle.&lt;/li&gt; 
 &lt;li&gt;Need to share resource configurations across multiple AWS accounts or services.&lt;/li&gt; 
 &lt;li&gt;Require VPC Lattice access logs for detailed traffic monitoring.&lt;/li&gt; 
 &lt;li&gt;Run a hub-and-spoke network architecture.&lt;/li&gt; 
 &lt;li&gt;Require zero-trust, fine-grained access controls for your resources or services.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;To create a self-managed private connection using the AWS Console, choose &lt;strong&gt;Create a new connection&lt;/strong&gt;, and select &lt;strong&gt;Use existing resource configuration&lt;/strong&gt;, as shown in the following screen capture (Figure 8). Then select your existing resource configuration from the drop-down list.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-8.png"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25184" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/figure-8.png" alt="" width="1978" height="794"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure-8: Advanced setting using existing VPC Lattice resources&lt;/p&gt; 
&lt;p&gt;To create a self-managed private connection with the &lt;strong&gt;AWS CLI&lt;/strong&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws devops-agent create-private-connection \
    --name my-advanced-connection \
    --mode '{
        "selfManaged": {
            "resourceConfigurationId": "rcfg-0123456789abcdef0"
        }
    }'&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;For more details on setting up VPC Lattice resource gateways and resource configurations, see the Amazon VPC Lattice User Guide.&lt;/p&gt; 
&lt;h2&gt;Clean up&lt;/h2&gt; 
&lt;p&gt;To avoid ongoing charges, delete private connections that you no longer need.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Using the AWS Console&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Open the &lt;a href="https://console.aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent console&lt;/a&gt;.&lt;/li&gt; 
 &lt;li&gt;In the navigation pane, choose Capability providers, then choose Private connections.&lt;/li&gt; 
 &lt;li&gt;Select the private connection you want to delete.&lt;/li&gt; 
 &lt;li&gt;Choose Delete.&lt;/li&gt; 
 &lt;li&gt;Confirm the deletion.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;Using the AWS CLI&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws devops-agent delete-private-connection \
    --name my-test-private-connection&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The response returns a status of DELETE_IN_PROGRESS. AWS DevOps Agent removes the managed resource gateway and ENIs from your VPC. After deletion completes, the connection no longer appears in your list of private connections.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;Private connections for AWS DevOps Agent provide a secure, managed way to connect your Agent Space to services running inside your VPC. By using Amazon VPC Lattice, private connections keep all traffic off the public internet while your existing security controls remain in place, giving you full control over network access through your own security groups.&lt;/p&gt; 
&lt;p&gt;To get started, open the &lt;a href="https://console.aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent console&lt;/a&gt; and create your first private connection. For more details, see &lt;a href="https://docs.aws.amazon.com/devops-agent/latest/userguide/private-connections.html"&gt;Private connections&lt;/a&gt; in the AWS DevOps Agent User Guide.&lt;/p&gt; 
&lt;h2&gt;About the authors&lt;/h2&gt; 
&lt;p&gt;
 &lt;!-- First Author --&gt;&lt;/p&gt; 
&lt;div class="blog-author-box" style="border: 1px solid #d5dbdb;padding: 15px"&gt; 
 &lt;p class="Alexandra Huides"&gt;&lt;img loading="lazy" class="alignleft wp-image-1288 size-thumbnail" src="https://d2908q01vomqb2.cloudfront.net/5b384ce32d8cdef02bc3a139d4cac0a22bb029e8/2024/08/08/alex-blog-bio-resized.png" alt="Alex Huides" width="125" height="125"&gt;&lt;/p&gt; 
 &lt;h3 class="lb-h4"&gt;Alexandra Huides&lt;/h3&gt; 
 &lt;p style="color: #879196;font-size: 1rem"&gt;Alexandra Huides is a Principal Networking Specialist Solutions Architect in the AWS Networking Services product team at Amazon Web Services. She focuses on helping customers build and develop networking architectures for highly scalable and resilient AWS environments. Alex is also a public speaker for AWS, and is helping customers adopt IPv6. Outside work, she loves sailing, especially catamarans, traveling, discovering new cultures, running and reading.&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;
 &lt;!-- Second Author --&gt;&lt;/p&gt; 
&lt;div class="blog-author-box" style="border: 1px solid #d5dbdb;padding: 15px"&gt; 
 &lt;p class="IMG_8290"&gt;&lt;img loading="lazy" class="alignleft wp-image-1288 size-thumbnail" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/tipu-small-img.jpg" alt="Tipu Qureshi" width="125" height="125"&gt;&lt;/p&gt; 
 &lt;h3 class="lb-h4"&gt;Tipu Qureshi&lt;/h3&gt; 
 &lt;p style="color: #879196;font-size: 1rem"&gt;Tipu Qureshi is a Senior Principal Technologist in AWS Agentic AI, focusing on operational excellence and incident response automation. He works with AWS customers to design resilient, observable cloud applications and autonomous operational systems.&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;
 &lt;!-- Third Author --&gt;&lt;/p&gt; 
&lt;div class="blog-author-box" style="border: 1px solid #d5dbdb;padding: 15px"&gt; 
 &lt;p class="IMG_8290"&gt;&lt;img loading="lazy" class="alignleft wp-image-1288 size-thumbnail" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/jordanmerrick.jpg" alt="Jordan Merrick" width="125" height="125"&gt;&lt;/p&gt; 
 &lt;h3 class="lb-h4"&gt;Jordan Merrick&lt;/h3&gt; 
 &lt;p style="color: #879196;font-size: 1rem"&gt;Jordan Merrick is a Senior Software Engineer in AWS Agentic AI, where he works on observability and identity. He builds secure, scalable agentic systems for AWS customers.&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;
 &lt;!-- Fourth Author --&gt;&lt;/p&gt; 
&lt;div class="blog-author-box" style="border: 1px solid #d5dbdb;padding: 15px"&gt; 
 &lt;p class="IMG_8290"&gt;&lt;img loading="lazy" class="alignleft wp-image-1288 size-thumbnail" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/mohakk.jpg" alt="Mohak Kohli" width="125" height="125"&gt;&lt;/p&gt; 
 &lt;h3 class="lb-h4"&gt;Mohak Kohli&lt;/h3&gt; 
 &lt;p style="color: #879196;font-size: 1rem"&gt;Mohak Kohli is a Software Development Engineer at Amazon Web Services. He works across various domains with focus on VPC Lattice and PrivateLink.&lt;/p&gt; 
&lt;/div&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Leverage Agentic AI for Autonomous Incident Response with AWS DevOps Agent</title>
		<link>https://aws.amazon.com/blogs/devops/leverage-agentic-ai-for-autonomous-incident-response-with-aws-devops-agent/</link>
		
		<dc:creator><![CDATA[Janardhan Molumuri]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 22:58:13 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Thought Leadership]]></category>
		<guid isPermaLink="false">64794be57cac8b4171fa32815dd671f13895fb10</guid>

					<description>Introduction Teams running distributed workloads face a persistent operational challenge: when something breaks, the information needed to resolve it is scattered across logs, deployment pipelines, configuration histories, and third-party monitoring tools. A Site Reliability Engineer (SRE) responding to a 2 AM page must manually correlate telemetry from multiple sources, trace dependencies across services, and form […]</description>
										<content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt; 
&lt;p&gt;Teams running distributed workloads face a persistent operational challenge: when something breaks, the information needed to resolve it is scattered across logs, deployment pipelines, configuration histories, and third-party monitoring tools. A Site Reliability Engineer (SRE) responding to a 2 AM page must manually correlate telemetry from multiple sources, trace dependencies across services, and form hypotheses — a process that routinely takes hours. As systems grow in complexity, the need for an AI-powered operational teammate — an SRE agent — has become increasingly clear.&lt;/p&gt; 
&lt;h3&gt;The Do It Yourself (DIY) path and its limits&lt;/h3&gt; 
&lt;p&gt;Teams exploring this space often start by using their favorite AI coding tools, essentially thin wrappers over a large language model (LLM), to help during an investigation. On-call engineers wake up, look at the incident details and tickets, give the coding tools access to logs and monitoring tools, and ask them to launch an investigation. These approaches can deliver value for straightforward scenarios, but real-world application architectures at scale require context across accounts, monitoring systems, and application topology; enforced governance and access controls; and retained learning from past incidents to ensure comprehensive incident management. As environments scale, the gap between a simple coding tool with limited context and a production-grade agentic operational teammate widens.&lt;/p&gt; 
&lt;h3&gt;A fully managed alternative&lt;/h3&gt; 
&lt;p&gt;&lt;a href="https://aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent&lt;/a&gt; is your always-available operations teammate that resolves and proactively &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/working-with-devops-agent-autonomous-incident-response.html"&gt;prevents&lt;/a&gt; incidents, optimizes application reliability and performance, and handles &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/working-with-devops-agent-on-demand-devops-tasks.html"&gt;on-demand SRE tasks&lt;/a&gt; across AWS, multicloud, and on-prem environments. DevOps Agent delivers a comprehensive agentic SRE paradigm, shifting teams from reactive firefighting to proactive, AI-driven operational excellence.&lt;/p&gt; 
&lt;p&gt;But what makes AWS DevOps Agent more powerful than what individual SREs can do with their coding agent? In this post, we walk through a serverless URL shortener application on AWS and demonstrate how DevOps Agent — built on &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-what-are-devops-agent-spaces.html"&gt;topology intelligence&lt;/a&gt;, a three-tier skills hierarchy, cross-account investigation, and continuous learning — delivers capabilities that a simple LLM wrapper cannot replicate, acting as a true operational teammate at scale that reduces Mean Time to Resolution (MTTR) from hours to minutes.&lt;/p&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;Before getting started with DevOps Agent, ensure you have:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;An active &lt;a href="https://aws.amazon.com/resources/create-account/"&gt;AWS account&lt;/a&gt; with an eligible AWS Support plan (contact your AWS account team for supported tiers)&lt;/li&gt; 
 &lt;li&gt;Appropriate &lt;a href="https://aws.amazon.com/iam/"&gt;IAM&lt;/a&gt; permissions to configure cross-account observability&lt;/li&gt; 
 &lt;li&gt;AWS services deployed in a &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-supported-regions.html"&gt;supported Region&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/cloudwatch/"&gt;Amazon CloudWatch&lt;/a&gt; and &lt;a href="https://aws.amazon.com/cloudtrail/"&gt;AWS CloudTrail&lt;/a&gt; (included by default)&lt;/li&gt; 
 &lt;li&gt;Additional &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-telemetry-sources-index.html"&gt;integrations&lt;/a&gt; connected for enhanced capabilities — &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-to-ticketing-and-chat-connecting-servicenow.html"&gt;ServiceNow&lt;/a&gt; for ticketing, &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-to-ticketing-and-chat-connecting-slack.html"&gt;Slack&lt;/a&gt; for team notifications, or &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-to-ticketing-and-chat-connecting-pagerduty.html"&gt;PagerDuty&lt;/a&gt; for on-call management&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Application Overview&lt;/h2&gt; 
&lt;p&gt;You are an SRE engineer at a SaaS company that offers a URL shortener service deployed on AWS. The application uses a fully &lt;a href="https://aws.amazon.com/lambda/serverless-architectures-learn-more/"&gt;serverless architecture&lt;/a&gt;: it creates short codes, redirects to original URLs, and tracks analytics.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/cloudfront/"&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt;&lt;/a&gt; serves static assets from an &lt;a href="https://aws.amazon.com/s3/"&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;&lt;/a&gt; bucket&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/api-gateway/"&gt;&lt;strong&gt;Amazon API Gateway&lt;/strong&gt;&lt;/a&gt; routes API requests to &lt;a href="https://aws.amazon.com/lambda/"&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;&lt;/a&gt; functions&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/dynamodb/"&gt;&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt;&lt;/a&gt; stores URL mappings accessed by Lambda functions&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25159 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/url-shortener-1.png" alt="Serverless three-tier architecture for a URL shortener application" width="1690" height="920"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Fig 1 – URL Shortener Application&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;This architecture is straightforward to build but operationally complex to troubleshoot. A latency spike in the Redirect function could stem from DynamoDB throttling, a Lambda cold start regression, an API Gateway configuration change, or a CloudFront cache invalidation — and the signals live in different log groups, metrics namespaces, and trace spans. This is exactly where DevOps Agent demonstrates its value as an &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/working-with-devops-agent-autonomous-incident-response.html"&gt;autonomous&lt;/a&gt; operational teammate.&lt;/p&gt; 
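To make the investigation scenarios that follow concrete, here is a minimal, hypothetical sketch of the shortener's core logic in Python. The handler shape, the in-memory table (standing in for a DynamoDB GetItem call), and all names are illustrative assumptions, not code from the actual application.

```python
import hashlib
import string

ALPHABET = string.digits + string.ascii_letters  # base62 alphabet

def make_short_code(url, length=7):
    """Derive a deterministic base62 short code from the original URL."""
    digest = int.from_bytes(hashlib.sha256(url.encode()).digest(), "big")
    code = []
    for _ in range(length):
        digest, rem = divmod(digest, 62)
        code.append(ALPHABET[rem])
    return "".join(code)

def redirect_handler(event, table):
    """Lambda-style handler: resolve a short code to a 301 redirect, else 404."""
    code = event["pathParameters"]["code"]
    item = table.get(code)  # stand-in for a DynamoDB GetItem call
    if item is None:
        return {"statusCode": 404, "body": "Not found"}
    return {"statusCode": 301, "headers": {"Location": item["url"]}, "body": ""}
```

A latency spike anywhere along this redirect path produces signals in different places, which is exactly the fragmentation the investigation below has to untangle.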
&lt;h2&gt;An Investigation in Action&lt;/h2&gt; 
&lt;p&gt;This workflow demonstrates DevOps Agent autonomously detecting and diagnosing a production incident in just 4 minutes, without human intervention. The investigation begins when a CloudWatch alarm fires on elevated 5xx errors; the agent systematically tests hypotheses until it identifies DynamoDB write throttling caused by a recent code deployment. It then autonomously posts a complete root cause analysis to Slack with specific mitigation recommendations, pinpointing the problematic commit and suggesting either on-demand capacity or a rollback, all in under 5 minutes from initial alarm to actionable solution.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25130 size-large" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/30/investigation-in-action-logicial-flow-140x1024.png" alt="Diagram showing the step-by-step flow of a logical investigation process" width="140" height="1024"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Fig 2. Logical Investigation workflow&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25131 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/30/devops-agent-investigation-flow.png" alt="Workflow of the AWS DevOps Agent showing how it moves from detecting an incident to analyzing the root cause and suggesting mitigation steps" width="1515" height="3916"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 3 – AWS DevOps Agent investigation workflow demonstrating the automated flow from initial incident detection through root cause analysis to actionable mitigation recommendations&lt;/em&gt;&lt;/p&gt; 
&lt;h2&gt;Why DevOps Agent is Different&lt;/h2&gt; 
&lt;p&gt;DevOps Agent is not a chat interface layered over a large language model. It is built on &lt;a href="https://aws.amazon.com/bedrock/agentcore/"&gt;Amazon Bedrock AgentCore&lt;/a&gt; with dedicated infrastructure for memory, policies, evaluations, and observability. Below, we break down six key capabilities — the 6 Cs — that collectively make DevOps Agent a fully functional, next-generation operational teammate.&lt;/p&gt; 
&lt;h3&gt;1. Context&lt;/h3&gt; 
&lt;p&gt;An LLM without operational context is limited to generic suggestions. DevOps Agent solves this through &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-what-are-devops-agent-spaces.html"&gt;Agent Spaces&lt;/a&gt; — isolated logical containers that provide cross-account access to cloud resources, telemetry sources, code repositories, CI/CD pipelines, and ticketing systems. Within each Agent Space, DevOps Agent builds an application resource topology by auto-discovering resources — containers, network components, log groups, alarms, and deployments — and mapping their interconnections across AWS, &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-azure-index.html"&gt;Azure&lt;/a&gt;, and on-prem environments. A &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-learned-skills.html"&gt;learning agent&lt;/a&gt; runs in the background, analyzing infrastructure, telemetry, and code to generate an inferred topology at the application and service layer. DevOps Agent maintains deep, AWS-native &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent.html"&gt;integrations&lt;/a&gt; with services like &lt;a href="https://aws.amazon.com/pm/eks/"&gt;Amazon Elastic Kubernetes Service (EKS)&lt;/a&gt;, providing introspection into Kubernetes clusters, pod logs, and cluster events for both public and &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-to-privately-hosted-tools.html"&gt;private&lt;/a&gt; environments — capabilities requiring privileged access that external tools don’t have. DevOps Agent doesn’t just know your resource topology; it also knows your telemetry, deployment timeline, and infrastructure and application code. It discovers the relationships between resources, alarms, metrics, and log groups. When it detects a latency spike, it automatically checks &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-to-cicd-pipelines-connecting-github.html"&gt;GitHub&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-to-cicd-pipelines-connecting-gitlab.html"&gt;GitLab&lt;/a&gt;, and Azure DevOps for recent merges, correlates deployment timestamps with metric anomalies, and determines whether a code change is the probable cause. In the URL shortener example, the agent identifies that a commit adding batch DynamoDB writes was deployed 47 minutes before throttling began — a correlation a human SRE might take 30 minutes to discover manually.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;In our URL shortener&lt;/strong&gt;, DevOps Agent maps the dependency chain from CloudFront through API Gateway to each Lambda function and down to the DynamoDB table. When a latency spike hits the URL Redirect function, the agent traces the relationship graph to determine whether the root cause is DynamoDB read throttling, a Lambda concurrency limit, or an API Gateway timeout configuration — correlating CloudWatch metrics, Lambda traces, and DynamoDB consumed capacity in a single investigation.&lt;/p&gt; 
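The deployment-correlation step can be illustrated with a toy sketch. This is not DevOps Agent's actual algorithm; the deploy records, timestamps, and the one-hour lookback window are invented for illustration.

```python
from datetime import datetime, timedelta

def correlate_deploys(anomaly_start, deploys, window=timedelta(hours=1)):
    """Return deploys that landed within `window` before the anomaly began,
    most recent first: the most likely code-change suspects."""
    suspects = [d for d in deploys
                if window >= anomaly_start - d["time"] >= timedelta(0)]
    return sorted(suspects, key=lambda d: d["time"], reverse=True)

deploys = [
    {"commit": "a1b2c3", "time": datetime(2026, 3, 30, 14, 13)},  # 47 minutes earlier
    {"commit": "d4e5f6", "time": datetime(2026, 3, 29, 9, 0)},    # the day before
]
throttling_began = datetime(2026, 3, 30, 15, 0)
suspects = correlate_deploys(throttling_began, deploys)  # only "a1b2c3" qualifies
```

The agent's real advantage is that it already knows the deployment timeline from its CI/CD integrations, so this correlation happens without anyone assembling the deploy list by hand.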
&lt;h3&gt;2. Control&lt;/h3&gt; 
&lt;p&gt;Context without governance creates risk. Agent Spaces provide centralized control over what the agent can access and how it operates. Administrators define which AWS and Azure accounts, telemetry and code integrations, and &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-mcp-servers.html"&gt;MCP servers&lt;/a&gt; are available within each Agent Space using granular &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/aws-devops-agent-security-devops-agent-iam-permissions.html"&gt;IAM permissions&lt;/a&gt;. This eliminates the inconsistency of individual developers configuring their own toolchains — some thoroughly, some partially, some not at all — and removes the need for ad-hoc onboarding processes for new team members. Every reasoning step and tool invocation is logged in immutable audit journals that the agent cannot modify after recording, providing complete &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/aws-devops-agent-security.html"&gt;transparency&lt;/a&gt; into decision-making. Combined with AWS CloudTrail integration, IAM Identity Center authentication with granular permissions, and Agent Space-level data governance that isolates investigation data, this makes DevOps Agent &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/aws-devops-agent-security.html"&gt;secure&lt;/a&gt; from day one while respecting organizational security configurations.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;For the URL shortener&lt;/strong&gt;, the administrator configures a single Agent Space with read access to the production account’s CloudWatch logs, the DynamoDB table metrics, the GitHub repository, and the Slack channel for incident coordination. Every SRE on the team inherits this consistent, controlled configuration — no individual setup required.&lt;/p&gt; 
&lt;h3&gt;3. Convenience&lt;/h3&gt; 
&lt;p&gt;Once an Agent Space is configured, every developer and SRE on the team gets immediate, zero-setup access to the agent’s full operational context — topology, telemetry, code repositories, and ticketing integrations — without configuring anything themselves. This is a meaningful departure from the alternative, where each engineer individually connects their coding agent to Model Context Protocol (&lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-mcp-servers.html"&gt;MCP&lt;/a&gt;) servers for CloudWatch, their observability tool, their source repository, and their ticketing system. In practice, some engineers will complete that setup, some will partially configure it, and some never will — resulting in inconsistent tooling across the team and an onboarding burden for every new hire. With DevOps Agent, the admin configures the Agent Space once, and engineers simply log in to the &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-what-is-a-devops-agent-web-app.html"&gt;Operator Web App&lt;/a&gt;, or interact through Slack — whichever tool they already use. The agent provides context-aware responses, maintains conversation history, and supports natural language queries against the application topology without any per-user setup.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;For the URL shortener team&lt;/strong&gt;, a new SRE joining the on-call rotation doesn’t need to spend a day wiring up access to the three Lambda function log groups, the DynamoDB metrics dashboard, and the GitHub repository. They log in to the Agent Space and immediately ask, “Show me all Lambda functions connected to this DynamoDB table” — the topology, telemetry access, and code context are already there.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25163 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/da-fig3.png" alt="Screenshot showing how the AWS DevOps Agent connects to MCP servers and communication tools" width="1855" height="793"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Fig 4 – AWS DevOps Agent MCP server and Communications integrations&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25164 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/da-fig4.png" alt="Screenshot showing the AWS DevOps Agent's telemetry integration configuration" width="1837" height="735"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Fig 5 – AWS DevOps Agent Telemetry integrations&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25165 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/da-fig5.png" alt="Screenshot showing the AWS DevOps Agent's multi-cloud and pipeline integration settings" width="1828" height="718"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Fig 6 – AWS DevOps Agent Multi-Cloud and pipeline integrations&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;4. Collaboration&lt;/h3&gt; 
&lt;p&gt;DevOps Agent is not a passive Q&amp;amp;A tool; it is an autonomous teammate. When an incident is raised by a CloudWatch alarm, PagerDuty alert, Dynatrace Problem, ServiceNow ticket, or any other event source you configure through a &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-to-ticketing-and-chat-connecting-slack.html"&gt;webhook&lt;/a&gt;, the agent begins investigating immediately without human prompting. It generates hypotheses, queries telemetry and code data sources to test them, and coordinates across collaboration channels — posting investigation timelines in Slack, updating ServiceNow tickets, and routing findings to stakeholders. Extensibility through MCP and &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-telemetry-sources-index.html"&gt;built-in integrations&lt;/a&gt; with CloudWatch, &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-telemetry-sources-connecting-datadog.html"&gt;Datadog&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-telemetry-sources-connecting-dynatrace.html"&gt;Dynatrace&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-telemetry-sources-connecting-new-relic.html"&gt;New Relic&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-telemetry-sources-connecting-splunk.html"&gt;Splunk&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/connecting-telemetry-sources-connecting-grafana.html"&gt;Grafana&lt;/a&gt;, GitHub, GitLab, and Azure DevOps ensures the agent can pull signals from wherever the team’s operational data lives. 
The agent also generates proactive weekly prevention recommendations, analyzing recent incidents to suggest specific improvements across code optimization, observability coverage, infrastructure resilience, and governance practices. Additionally, DevOps Agent operates within the broader frontier agent ecosystem, where investigation findings can include agent-ready instructions for Kiro to implement fixes.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;When the URL shortener experiences a DynamoDB throttling event at 3 AM&lt;/strong&gt;, DevOps Agent detects the alarm, investigates autonomously, identifies that a traffic spike exceeded the table’s provisioned capacity, and posts a mitigation plan in Slack — all before the on-call engineer finishes reading the page. The weekly prevention evaluation then recommends switching to on-demand capacity mode and adding a CloudWatch alarm on ConsumedWriteCapacityUnits to catch future spikes earlier.&lt;/p&gt; 
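One of the hypothesis checks in this 3 AM scenario, comparing consumed write capacity against the table's provisioned limit, can be sketched in drastically simplified form. The three-sample threshold and one-minute cadence are assumptions for illustration, not the agent's real logic.

```python
def diagnose_write_throttling(provisioned_wcu, consumed_samples, sustained=3):
    """Flag a capacity problem when consumed WCU meets or exceeds the
    provisioned limit for `sustained` consecutive one-minute samples."""
    streak = 0
    for consumed in consumed_samples:
        streak = streak + 1 if consumed >= provisioned_wcu else 0
        if streak >= sustained:
            return "capacity-exceeded"
    return "capacity-ok"

# A traffic spike pushes consumption past a 100-WCU provisioned limit.
verdict = diagnose_write_throttling(100, [40, 105, 110, 120, 130])  # "capacity-exceeded"
```

Requiring a sustained streak rather than a single sample avoids paging on momentary bursts, which is also why the agent's recommended ConsumedWriteCapacityUnits alarm would use several evaluation periods.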
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25166 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/da-fig6.png" alt="Screenshot showing Slack notifications sent by the AWS DevOps Agent during an investigation" width="860" height="615"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Fig 7 – AWS DevOps Agent Slack investigation notifications&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25167 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/da-fig7.png" alt="Screenshot showing prevention recommendations generated by the AWS DevOps Agent in the Ops Backlog" width="1498" height="922"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Fig 8 – AWS DevOps Agent prevention recommendations in the Ops Backlog&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;5. Continuous Learning&lt;/h3&gt; 
&lt;p&gt;This is where AWS DevOps Agent most clearly separates itself from thin LLM wrappers. The agent implements a sophisticated three-tier &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-devops-agent-skills.html"&gt;skill hierarchy&lt;/a&gt;:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;AWS-provided skills&lt;/strong&gt; – Built-in capabilities developed by AWS engineers and scientists that reflect proven operational approaches and are continuously maintained under the hood.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;User-defined skills&lt;/strong&gt; – Custom skills that you define to help the agent work more effectively within your specific organizational context and workflows.&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-learned-skills.html"&gt;&lt;strong&gt;Learned skills&lt;/strong&gt;&lt;/a&gt; – Operating continuously in the background, AWS DevOps Agent includes a learning sub-agent that performs two critical functions. First, it scans your cloud infrastructure, telemetry data, and code repositories to continuously learn and update your application topology—understanding resources and their relationships to help zero in on key logs related to specific alarms. Second, it analyzes past investigations to identify patterns and optimize future troubleshooting workflows, becoming more effective over time.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;For the URL shortener&lt;/strong&gt;, after DevOps Agent resolves three DynamoDB throttling incidents over a month, the Learning Agent identifies the recurring pattern and generates a learned skill that accelerates future investigations of the same class. The next time throttling occurs, the agent skips exploratory hypotheses and immediately checks provisioned capacity against consumed capacity, reducing investigation time further. The SRE team also uploads a runbook describing their canary deployment process, which the agent references when evaluating whether a recent deployment correlates with an incident.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25168 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/31/da-fig8.png" alt="Screenshot showing user-defined and learned skills configured for the AWS DevOps Agent" width="1498" height="922"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Fig 9 – AWS DevOps Agent user-defined and learned skills&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;6. Cost Effective&lt;/h3&gt; 
&lt;p&gt;You could build your own agent, but you would still need to pay for the model tokens it consumes. More importantly, you would need to staff a team to develop, maintain, and operate the agent and its integrations. You would also need to periodically evaluate model quality, latency, and costs as underlying models change. With AWS DevOps Agent, you get a team of AWS engineers and scientists who do all of that for you.&lt;/p&gt; 
&lt;p&gt;DevOps Agent uses usage-based pricing — you pay only for the time the agent actively works on a task. There is no per-seat licensing or idle infrastructure cost. The agent works at machine speed, completing investigations in minutes that would take a human engineer hours, and only charges for the actual seconds of active computation.&lt;/p&gt; 
&lt;p&gt;Behind the scenes, DevOps Agent employs significant data retrieval optimizations that reduce cost while improving accuracy. Its query optimization techniques across tools achieve up to 15x faster querying across massive datasets by leveraging AWS-specific access patterns and data characteristics. These optimizations mean the agent consumes less compute per investigation while delivering more precise results — a direct benefit of deep AWS integration that generic LLM wrappers cannot replicate.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;For the URL shortener&lt;/strong&gt;, instead of an SRE spending two hours manually querying CloudWatch Logs Insights across three Lambda function log groups and correlating with DynamoDB metrics, DevOps Agent completes the same investigation in minutes using optimized queries — at a fraction of the cost of engineering time.&lt;/p&gt; 
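For comparison, one of the hand-written CloudWatch Logs Insights queries that manual investigation requires, run separately against each of the three Lambda log groups, might look like the following; the message filter and binning are illustrative, not queries from the post.

```
fields @timestamp, @message
| filter @message like /Throttling/
| stats count() as throttle_errors by bin(5m)
```

DevOps Agent issues equivalent queries across every relevant log group in parallel, which is where its up-to-15x query optimizations pay off.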
&lt;h2&gt;Proven real-world results&lt;/h2&gt; 
&lt;p&gt;Customers and partners using AWS DevOps Agent in preview report up to 75% lower MTTR, 80% faster investigations, and 94% root cause accuracy, enabling 3–5x faster incident resolution.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Western Governors University (WGU)&lt;/strong&gt;, a leading online university serving over 191,000 students, was among the first organizations to deploy AWS DevOps Agent into production, doing so even ahead of the preview launch at re:Invent. As a large-scale Dynatrace user, WGU leverages DevOps Agent’s native Dynatrace integration, enabling Dynatrace Intelligence to automatically route problem records to the agent for investigation and return enriched findings directly back into Dynatrace.&lt;/p&gt; 
&lt;p&gt;During a recent production investigation, WGU’s SRE team used the AWS DevOps Agent to analyze a service disruption scenario, reducing total resolution time from an estimated two hours to just 28 minutes—a 77% improvement in MTTR. AWS DevOps Agent quickly pinpointed the root cause within an AWS Lambda function’s configuration, surfacing critical operational knowledge that had previously existed only in undiscovered internal documentation.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Zenchef&lt;/strong&gt; is a restaurant technology platform that helps restaurants manage reservations, table operations, digital menus, payments, and guest marketing from a single commission-free system. With a focused DevOps team managing several production environments across multiple business units, they faced a real test when an API integration issue affecting a downstream partner surfaced during a company hackathon, with engineers engaged in the event and nothing significant showing up in monitoring to point them in the right direction.&lt;/p&gt; 
&lt;p&gt;Rather than pulling engineers off the hackathon, the team brought the issue to AWS DevOps Agent. It worked through the problem systematically, ruling out authentication as a contributing factor, shifting its investigation focus to Amazon Elastic Container Service (Amazon ECS) deployments, and ultimately tracing the root cause to a code regression in which a new version failed to handle an unrecognized enum value in the database. The full investigation wrapped up in 20-30 minutes, roughly a 75% reduction compared to the 1-2 hours it would have taken manually, and the findings were shared directly with the responsible engineer.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;&lt;a href="https://aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent&lt;/a&gt; is architecturally distinct from LLM wrappers. Its &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-what-is-a-devops-agent-topology.html"&gt;topology intelligence&lt;/a&gt; service maps AWS service relationships to understand application dependencies. Its three-tier skill hierarchy with a validation-based Learning Agent creates compounding operational knowledge specific to each customer environment. Its cross-account investigation capability, governed autonomy model, and immutable audit trails address enterprise requirements that no external wrapper can satisfy.&lt;/p&gt; 
&lt;p&gt;The 6 Cs — Context, Control, Convenience, Collaboration, Continuous Learning, and Cost Effective — are not marketing categories. They represent concrete engineering investments: Agent Spaces for isolation, topology intelligence for context, optimized log queries for performance, federated credential management for cross-account access, and a skills architecture that learns and improves with every investigation. For any team operating complex, distributed applications on AWS, DevOps Agent reduces the operational burden of incident response while building institutional knowledge that makes every future investigation faster and more accurate.&lt;/p&gt; 
&lt;p&gt;Ready to get started? Visit the &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/"&gt;AWS DevOps Agent documentation&lt;/a&gt; to explore the setup process, join the &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/767d3081-b4fa-4e08-81da-17e5c94a1a08/en-US"&gt;AWS DevOps Agent workshop&lt;/a&gt; for hands-on experience, and contact your AWS account team to configure your first Agent Space.&lt;/p&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/30/tqquresh.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Tipu Qureshi&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Tipu Qureshi is a Senior Principal Technologist in AWS Agentic AI, focusing on operational excellence and incident response automation. He works with AWS customers to design resilient, observable cloud applications and autonomous operational systems.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/30/billfine.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Bill Fine&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Bill Fine is a Product Management Leader for Agentic AI at AWS, where he leads product strategy and customer engagement for AWS DevOps Agent.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/30/jalioto.png" alt="" width="106" height="106"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Joe Alioto&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Joe is a World Wide Senior Specialist Solutions Architect for Cloud Operations focusing on Observability and Centralized Operations Management on AWS. He has over two decades of hands-on operations engineering and architecture experience. When he isn’t working, he enjoys spending time with his family, learning new technologies and pc gaming.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/01/23/janmuri-high-res-current-photo.jpeg" alt="" width="109" height="164"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Janardhan Molumuri&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Janardhan Molumuri is a Principal Technical Leader at AWS, comes with over two decades of Engineering leadership experience, advising customers on Cloud and AI Adoption strategies and emerging technologies including generative AI. He has passion for thought leadership, speaking, writing, and enjoys exploring technology trends to solve problems at scale.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Standardizing construct properties with AWS CDK Property Injection</title>
		<link>https://aws.amazon.com/blogs/devops/standardizing-construct-properties-with-aws-cdk-property-injection/</link>
		
		<dc:creator><![CDATA[Marco Frattallone]]></dc:creator>
		<pubDate>Thu, 05 Mar 2026 17:39:48 +0000</pubDate>
				<category><![CDATA[AWS Cloud Development Kit]]></category>
		<category><![CDATA[AWS CloudFormation]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Technical How-to]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[AWS CDK]]></category>
		<guid isPermaLink="false">8967086f6fa18ca83c08a4c03fd9c6259b56144f</guid>

					<description>Standardizing CDK construct properties across a large organization requires repetitive manual effort that scales poorly as teams and repositories grow. Development teams working with AWS Cloud Development Kit (AWS CDK) must apply the same configuration properties across similar resources to meet security, compliance, and operational standards but manual configuration leads to drift, maintenance burden, and […]</description>
										<content:encoded>&lt;p&gt;Standardizing CDK construct properties across a large organization requires repetitive manual effort that scales poorly as teams and repositories grow. Development teams working with AWS Cloud Development Kit (&lt;a href="https://docs.aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt;) must apply the same configuration properties across similar resources to meet security, compliance, and operational standards but manual configuration leads to drift, maintenance burden, and compliance gaps. In this post, you learn how to use Property Injection, a feature introduced in AWS CDK v2.196.0, to automatically apply default properties to constructs without modifying existing code.&lt;/p&gt; 
&lt;h2&gt;The Challenge of Infrastructure Standardization&lt;/h2&gt; 
&lt;p&gt;Organizations implementing infrastructure as code face a fundamental tension between developer productivity and operational consistency. CDK provides abstractions for defining cloud resources, but ensuring compliance with organizational security policies, compliance requirements, and operational standards requires repetitive manual configuration.&lt;br&gt; Consider this scenario: an organization’s security policy requires that all SecurityGroups disable outbound traffic by default. Development teams must apply these settings to every &lt;code&gt;SecurityGroup&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-typescript"&gt;
new SecurityGroup(stack, 'api-sg', {
  vpc: myVpc,
  allowAllOutbound: false,        // Required by security policy
  allowAllIpv6Outbound: false     // Required by security policy
});

new SecurityGroup(stack, 'db-sg', {
  vpc: myVpc,
  allowAllOutbound: false,        // Same configuration repeated
  allowAllIpv6Outbound: false     // Same configuration repeated
});
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This manual approach creates four specific problems:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Configuration drift:&lt;/strong&gt; Teams omit required properties or apply them inconsistently&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Maintenance burden:&lt;/strong&gt; Policy updates require coordinated changes across multiple repositories and teams&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Developer friction&lt;/strong&gt;: Repetitive configuration tasks slow development velocity and increase cognitive load&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Compliance gaps:&lt;/strong&gt; Manual processes introduce human error, creating security or compliance violations&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Custom construct libraries address these challenges but require refactoring every construct instantiation in existing code and create learning curves for development teams already familiar with standard CDK patterns.&lt;/p&gt; 
&lt;h2&gt;Introducing Property Injection&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;AWS CDK Property Injection addresses these challenges by automatically applying default properties to constructs without requiring changes to existing code.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/blueprints.html"&gt;Property Injection&lt;/a&gt; is a feature introduced in AWS CDK v2.196.0 that intercepts construct creation and automatically applies organizational defaults. With this approach, you can enforce standards consistently while preserving existing development workflows and code patterns.&lt;/p&gt; 
&lt;p&gt;After implementing Property Injection, the same &lt;code&gt;SecurityGroup&lt;/code&gt; creation requires only the &lt;code&gt;vpc&lt;/code&gt; parameter; security defaults are applied automatically:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-typescript"&gt;// Your existing code remains unchanged
new SecurityGroup(stack, 'my-sg', {
  vpc: myVpc
  // Security defaults applied automatically by Property Injection
});&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The key benefits of this approach include:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Zero-impact adoption:&lt;/strong&gt; Existing CDK code continues to work without modification&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Centralized policy management:&lt;/strong&gt; Standards are defined once and applied automatically&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Consistent enforcement:&lt;/strong&gt; Policies are applied uniformly across all applications and teams&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Reduced maintenance overhead:&lt;/strong&gt; Policy updates require changes in only one location&lt;/li&gt; 
&lt;/ul&gt; 
&lt;div id="attachment_25079" style="width: 886px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-25079" loading="lazy" class="wp-image-25079 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/03/01-cdk-property-injection-mechanism.png" alt="This diagram shows the five-step Property Injection process in a clear two-column format. The left column outlines each process step, while the right column shows the corresponding implementation details with properly formatted TypeScript code. The flow demonstrates how CDK intercepts SecurityGroup creation, applies organizational security defaults through property injectors, merges them with developer-specified properties, and creates a fully configured SecurityGroup that meets both developer requirements and organizational standards." width="876" height="1035"&gt;
 &lt;p id="caption-attachment-25079" class="wp-caption-text"&gt;Figure 1: CDK Property Injection Mechanism&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;Property Injection operates transparently within CDK, intercepting construct creation to apply predefined defaults before merging them with any properties explicitly provided by developers. This ensures that organizational standards are consistently applied while maintaining the flexibility for developers to override defaults when specific use cases require it.&lt;/p&gt; 
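&lt;p&gt;The merge order can be pictured as a simple object spread, with organizational defaults listed first so that developer-specified values win. The following is an illustrative sketch of that precedence, not CDK&#8217;s actual internals:&lt;/p&gt; 

```typescript
// Illustrative only: models the precedence Property Injection gives to
// developer-specified properties over injected organizational defaults.
type Props = Record<string, unknown>;

function mergeInjectedProps(defaults: Props, developerProps: Props): Props {
  // Defaults are spread first, so any key the developer sets overwrites them.
  return { ...defaults, ...developerProps };
}

const merged = mergeInjectedProps(
  { allowAllOutbound: false, allowAllIpv6Outbound: false }, // organizational defaults
  { allowAllOutbound: true },                               // explicit developer override
);
console.log(merged); // → { allowAllOutbound: true, allowAllIpv6Outbound: false }
```

&lt;p&gt;Here the explicit override wins for &lt;code&gt;allowAllOutbound&lt;/code&gt;, while the untouched &lt;code&gt;allowAllIpv6Outbound&lt;/code&gt; default still applies.&lt;/p&gt; 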
&lt;h2&gt;Understanding the Implementation Approach&lt;/h2&gt; 
&lt;p&gt;Property Injection works by implementing the &lt;code&gt;IPropertyInjector&lt;/code&gt; interface, which allows you to define default properties for specific construct types. These injectors are registered with CDK stacks and automatically apply their defaults during construct instantiation.&lt;br&gt; The implementation follows three steps: define the defaults you want to apply, register the injector with your stack, and let CDK handle the automatic application of these defaults to matching constructs.&lt;/p&gt; 
&lt;h3&gt;Implementation Guide&lt;/h3&gt; 
&lt;p&gt;This section shows you how to implement Property Injection for &lt;code&gt;SecurityGroup&lt;/code&gt; constructs.&lt;/p&gt; 
&lt;h4&gt;Step 1: Create a Property Injector&lt;/h4&gt; 
&lt;p&gt;Create a class that implements the &lt;code&gt;IPropertyInjector&lt;/code&gt; interface:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-typescript"&gt;import { IPropertyInjector, InjectionContext } from 'aws-cdk-lib';
import { SecurityGroup, SecurityGroupProps } from 'aws-cdk-lib/aws-ec2';

export class SecurityGroupDefaults implements IPropertyInjector {
  readonly constructUniqueId: string;

  constructor() {
    this.constructUniqueId = SecurityGroup.PROPERTY_INJECTION_ID;
  }

  inject(originalProps: SecurityGroupProps, context: InjectionContext): SecurityGroupProps {
    return {
      // Apply organizational defaults
      allowAllIpv6Outbound: false,
      allowAllOutbound: false,
      // Original properties override defaults when specified
      ...originalProps,
    };
  }
}&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;Step 2: Add the Injector to Your Stack&lt;/h4&gt; 
&lt;p&gt;Apply the injector to your CDK stack:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-typescript"&gt;import { Stack } from 'aws-cdk-lib';
import { SecurityGroupDefaults } from './security-defaults';

const stack = new Stack(app, 'MyStack', {
  propertyInjectors: [
    new SecurityGroupDefaults()
  ]
});&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;Step 3: Use Constructs Normally&lt;/h4&gt; 
&lt;p&gt;Create constructs as usual. The injector applies defaults automatically:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-typescript"&gt;// This SecurityGroup receives the injected defaults:
// - allowAllOutbound: false
// - allowAllIpv6Outbound: false
new SecurityGroup(stack, 'my-sg', {
  vpc: myVpc
});

// You can override defaults when necessary
new SecurityGroup(stack, 'special-sg', {
  vpc: myVpc,
  allowAllOutbound: true  // Overrides the injected default
});&lt;/code&gt;&lt;/pre&gt; 
&lt;div id="attachment_25081" style="width: 1119px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-25081" loading="lazy" class="wp-image-25081 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/03/02-cdk-code-before-after.png" alt="This side-by-side comparison shows the difference between manual configuration and Property Injection. The left side (Before) shows three SecurityGroup definitions, each requiring manual specification of allowAllOutbound: false and allowAllIpv6Outbound: false, leading to repetitive code, inconsistency risk, and maintenance burden. The right side (After) shows the same SecurityGroups created with the VPC parameter alone after a one-time Property Injection setup, demonstrating the DRY principle, consistent defaults, and reduced maintenance." width="1109" height="675"&gt;
 &lt;p id="caption-attachment-25081" class="wp-caption-text"&gt;Figure 2: CDK Code Before vs After Property Injection&lt;/p&gt;
&lt;/div&gt; 
&lt;h2&gt;Property Injection vs L2 Constructs&lt;/h2&gt; 
&lt;p&gt;You can achieve the same enforcement of default properties by creating custom &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/constructs.html#constructs_lib"&gt;L2 constructs&lt;/a&gt; with built-in defaults. However, Property Injection is better suited for standardizing existing codebases without refactoring, while L2 Constructs are better suited for new projects where you want custom APIs and multi-resource abstractions.&lt;/p&gt; 
&lt;div id="attachment_25083" style="width: 1073px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-25083" loading="lazy" class="size-full wp-image-25083" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/03/03-decision-tree.png" alt="This decision tree guides the selection between Property Injection and L2 Constructs for CDK standardization. Starting with existing CDK applications, it evaluates willingness to accept potential breaking changes from new defaults. If breaking changes are acceptable or no existing code exists, it assesses whether custom APIs, naming improvements, or multi-resource patterns are needed beyond simple defaults. The tree leads to three outcomes: Property Injection (blue) for transparent defaults with existing code compatibility, L2 Constructs (orange) for custom APIs and purpose-built abstractions, or a Hybrid approach (green) combining both techniques for maximum flexibility." width="1063" height="683"&gt;
 &lt;p id="caption-attachment-25083" class="wp-caption-text"&gt;Figure 3: Decision Tree – Property Injection vs L2 Constructs&lt;/p&gt;
&lt;/div&gt; 
&lt;h2&gt;Implementation Comparison&lt;/h2&gt; 
&lt;p&gt;Consider an application with multiple &lt;code&gt;SecurityGroup&lt;/code&gt; instantiations that need standardized security defaults.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;L2 Construct approach&lt;/strong&gt; requires creating a custom construct and updating each instantiation:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;// Step 1: Create custom L2 construct
export class SecureSecurityGroup extends SecurityGroup {
  constructor(scope: Construct, id: string, props: SecurityGroupProps) {
    super(scope, id, {
      allowAllOutbound: false,
      allowAllIpv6Outbound: false,
      ...props
    });
  }
}

// Step 2: Update each instantiation throughout your codebase
// Change from:
new SecurityGroup(stack, 'sg1', { vpc: myVpc })
new SecurityGroup(stack, 'sg2', { vpc: myVpc })
new SecurityGroup(stack, 'sg3', { vpc: myVpc })

// To:
new SecureSecurityGroup(stack, 'sg1', { vpc: myVpc })
new SecureSecurityGroup(stack, 'sg2', { vpc: myVpc })
new SecureSecurityGroup(stack, 'sg3', { vpc: myVpc })&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Property Injection approach&lt;/strong&gt; requires one-time stack configuration:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;// Step 1: Add injector to stack configuration
stack.propertyInjectors = [new SecurityGroupDefaults()];

// Step 2: Existing SecurityGroup calls receive defaults automatically
new SecurityGroup(stack, 'sg1', { vpc: myVpc })  // Gets defaults
new SecurityGroup(stack, 'sg2', { vpc: myVpc })  // Gets defaults  
new SecurityGroup(stack, 'sg3', { vpc: myVpc })  // Gets defaults&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Key Differences&lt;/h2&gt; 
&lt;p&gt;Property Injection works with existing construct calls, requiring no changes to how developers instantiate SecurityGroups or other constructs. It can apply defaults to constructs from external libraries and can be adopted without modifying existing code. Developers continue using familiar CDK APIs without learning new interfaces.&lt;/p&gt; 
&lt;p&gt;L2 Constructs require updating all constructor calls throughout your codebase. This approach cannot modify third-party construct creation since you must change each instantiation to use your custom construct. Implementation requires refactoring existing code and developers must learn your custom construct APIs instead of standard CDK interfaces. L2 constructs serve multiple purposes beyond complex business logic – simple L2 constructs provide domain-specific naming conventions and cleaner APIs, while complex L2 constructs orchestrate three or more resources and implement business rules.&lt;/p&gt; 
&lt;h2&gt;When to Choose Each Approach&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Choose Property Injection when you need to standardize existing infrastructure.&lt;/strong&gt; Property Injection excels in scenarios where you already have CDK applications deployed and need to apply consistent defaults retroactively. Property Injection works transparently with existing code, requiring no changes to how developers instantiate constructs. This makes it useful when you have existing CDK applications that you want to standardize without disrupting current development workflows.&lt;/p&gt; 
&lt;p&gt;Property Injection also solves the challenge of applying defaults to constructs from third-party libraries. Since you cannot modify external library code, Property Injection enforces organizational standards on any construct type, regardless of its source. Additionally, when you want to implement standards without changing existing code, Property Injection operates at the framework level, automatically applying defaults during construct instantiation without requiring developers to modify their existing implementations.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Choose L2 Constructs when you need custom APIs or multi-resource patterns.&lt;/strong&gt; L2 Constructs provide the right abstraction when you want to create purpose-built interfaces that differ from standard CDK APIs. This includes simple wrappers with domain-specific naming, complex business logic, validation rules, or multi-resource orchestration patterns. L2 Constructs excel when you want to create opinionated APIs that simplify common patterns by hiding complexity behind intuitive interfaces.&lt;/p&gt; 
&lt;p&gt;L2 Constructs suit new application development where you can design the API from the start. This approach creates purpose-built abstractions that match your organization’s specific use cases and terminology. Unlike Property Injection, which applies defaults to existing construct APIs, with L2 Constructs you can design entirely new APIs that directly represent your business domain and operational patterns.&lt;/p&gt; 
&lt;h2&gt;Implementation Patterns&lt;/h2&gt; 
&lt;h3&gt;Stack Integration Methods&lt;/h3&gt; 
&lt;p&gt;The CDK provides two methods for adding Property Injectors to stacks:&lt;/p&gt; 
&lt;div id="attachment_25088" style="width: 1107px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-25088" loading="lazy" class="wp-image-25088 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/03/04-cdk-stack-integration.png" alt="This diagram demonstrates two methods for adding Property Injectors to CDK stacks. Method 1 (blue) shows adding injectors directly in the Stack constructor’s propertyInjectors array. Method 2 (orange) shows using PropertyInjectors.of(stack).add() after stack creation. Both methods produce identical results with green checkmarks indicating success. The diagram includes usage examples showing normal SecurityGroup instantiation (blue) that inherits defaults automatically, and override scenarios (orange) where developers explicitly override injected defaults. The bottom section shows the resulting CloudFormation output: default SecurityGroups have empty egress rules (green), while overridden ones include outbound traffic rules (orange)." width="1097" height="784"&gt;
 &lt;p id="caption-attachment-25088" class="wp-caption-text"&gt;Figure 4: CDK Stack Integration Methods&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Method 1: Stack Constructor&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;const stack = new Stack(app, 'MyStack', {
  propertyInjectors: [new SecurityGroupDefaults()]
});&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Method 2: PropertyInjectors.of()&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;const stack = new Stack(app, 'MyStack');
PropertyInjectors.of(stack).add(new SecurityGroupDefaults());&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Both methods produce the same result. Choose the method that best fits your existing code structure. For more details, see the &lt;code&gt;PropertyInjectors&lt;/code&gt; API documentation.&lt;/p&gt; 
&lt;h3&gt;Organization-Wide Implementation&lt;/h3&gt; 
&lt;p&gt;For organization-wide standardization, create a shared library of injectors:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;// @myorg/cdk-injectors package
export const ORGANIZATION_INJECTORS: IPropertyInjector[] = [
  new SecurityGroupDefaults(),
  new LambdaFunctionDefaults(),
  new S3BucketDefaults(),
];

// Teams import and use the shared injectors
import { ORGANIZATION_INJECTORS } from '@myorg/cdk-injectors';

const stack = new Stack(app, 'TeamStack', {
  propertyInjectors: ORGANIZATION_INJECTORS
});&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Scope Hierarchy&lt;/h3&gt; 
&lt;p&gt;Property Injectors can be applied at different levels in the &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/constructs.html#constructs_tree"&gt;CDK construct tree&lt;/a&gt;:&lt;/p&gt; 
&lt;div class="mceTemp"&gt; 
 &lt;p&gt;&lt;img loading="lazy" class="wp-image-25089 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/03/05-cdk-scope-hierarchy.png" alt="This diagram illustrates the three-level hierarchy of Property Injector scopes in CDK with integrated resolution examples. The App level (blue) shows a BucketInjector ‘b1’ that applies globally, with an example showing how Stack2 buckets use this injector. The Stage level (green) demonstrates a FunctionInjector ‘f1’ that applies to all stacks within the stage, including an example of Stack1 functions using this injector. The Stack level (orange) shows two stacks: Stack1 with its own BucketInjector ‘b2’ that overrides the app-level injector, and Stack2 with no injectors that inherits from parent scopes. The Resolution Rules box (green) explains that CDK searches from most specific (stack) to most general (app), with the first match winning per construct type. Arrows show the hierarchical relationship between scopes." width="1124" height="754"&gt;&lt;/p&gt; 
 &lt;ul id="attachment_25089" class="wp-caption alignnone" style="width: 1124px"&gt;
  Figure 5: CDK Scope Hierarchy &amp;amp; Injector Resolution
 &lt;/ul&gt; 
&lt;/div&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;App level:&lt;/strong&gt; Applies to all stacks in the application&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Stage level:&lt;/strong&gt; Applies to all stacks within a specific stage&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Stack level:&lt;/strong&gt; Applies only to constructs within a specific stack&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;CDK searches for applicable injectors starting from the construct’s immediate parent scope and moving upward. The first matching injector for each construct type is used.&lt;/p&gt; 
&lt;h2&gt;Best Practices&lt;/h2&gt; 
&lt;p&gt;When implementing Property Injection, begin with high-impact constructs like SecurityGroups, VPCs, and Lambda functions that require repetitive configuration. These constructs have the highest frequency of misconfiguration and the most direct compliance impact, making them the most valuable targets for early adoption.&lt;/p&gt; 
&lt;p&gt;Document your defaults by explaining what properties your injectors provide and why. Include examples and link to relevant policies that drive the requirements. With this documentation, developers can understand standards and make informed override decisions.&lt;/p&gt; 
&lt;p&gt;Write automated tests using CDK testing utilities to verify that injectors apply expected defaults. Test both standard scenarios and cases where developers override properties to prevent regressions when updating injector logic.&lt;/p&gt; 
&lt;p&gt;Version injectors carefully using semantic versioning principles because changes affect all applications. Coordinate updates across teams and provide migration guides for breaking changes or changes to default values.&lt;/p&gt; 
&lt;p&gt;Design override mechanisms so that developers can handle edge cases while benefiting from organizational standards. Property Injection operates as defaults, not restrictions, so design injectors to merge gracefully with developer-specified properties.&lt;/p&gt; 
&lt;h2&gt;Limitations and Considerations&lt;/h2&gt; 
&lt;p&gt;Property Injection operates as a default mechanism rather than a compliance enforcement system. Developers retain the ability to override injected properties, which means organizations cannot rely solely on Property Injection for strict compliance requirements. For teams that need mandatory compliance, combine Property Injection with &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/aspects.html"&gt;CDK Aspects&lt;/a&gt; or &lt;a href="https://docs.aws.amazon.com/config/"&gt;AWS Config rules&lt;/a&gt; to validate and enforce standards.&lt;/p&gt; 
&lt;p&gt;The feature works exclusively with L2 constructs, as documented in the official AWS CDK guidance. The &lt;code&gt;IPropertyInjector&lt;/code&gt; interface targets specific L2 construct types, and &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/constructs.html#constructs_l1_using"&gt;L1 (CloudFormation) constructs&lt;/a&gt; use different instantiation patterns that bypass the property injection mechanism entirely. Organizations with L1 construct usage need alternative standardization approaches.&lt;/p&gt; 
&lt;p&gt;Property Injection introduces debugging complexity because injected properties do not appear directly in application code. Developers troubleshooting construct behavior must understand which injectors apply to specific construct types and how those injectors modify properties. This hidden behavior requires documentation that lists each injector, the properties it sets, and the policy it enforces, along with clear naming conventions to maintain code clarity.&lt;/p&gt; 
&lt;p&gt;The feature requires CDK v2.196.0 or later, which affects adoption timelines for organizations using older CDK versions. Teams must plan upgrade paths and test compatibility before implementing Property Injection across their applications.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;Property Injection provides a mechanism for applying consistent default properties to CDK constructs without requiring changes to existing code. This approach reduces repetitive configuration, improves consistency, and simplifies maintenance of CDK applications.&lt;br&gt; Property Injection is the right choice for organizations that need to standardize construct configurations across existing codebases while preserving developer workflows. When combined with proper testing and documentation, Property Injection becomes a reliable foundation for infrastructure governance across your organization.&lt;/p&gt; 
&lt;h3&gt;About the authors:&lt;/h3&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="//d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/04/20231126_103410-150x150.jpg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Put Cheung&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Put Cheung is a Senior Software Development Engineer at AWS Security. He is a part of a team that is making it easier for builders to configure AWS Resources securely. AWS CDK Property Injection is an important step toward this goal.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/03/03/rico.png" alt="" width="120" height="120"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Rico Huijbers&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Rico Huijbers is a Software Engineer at Amazon Web Services. He is extremely lazy and is therefore on a quest to eradicate the need for repetitive manual work from software engineering. Rico loves working on AWS CDK—it’s the tool he wishes he had 5 years earlier.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/01/10/CW_8565.jpg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Marco Frattallone&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Marco Frattallone is a Senior Technical Account Manager at AWS focused on supporting Partners. He works closely with Partners to help them build, deploy, and optimize their solutions on AWS, providing guidance and leveraging best practices. Marco focuses on helping Partners adopt emerging AWS services and translate technical capabilities into business outcomes. Outside work, he enjoys outdoor cycling, sailing, and exploring new cultures.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Automate AWS Lambda Runtime Upgrades with AWS Transform custom</title>
		<link>https://aws.amazon.com/blogs/devops/automate-aws-lambda-runtime-upgrades-with-aws-transform-custom/</link>
		
		<dc:creator><![CDATA[Venugopalan Vasudevan]]></dc:creator>
		<pubDate>Mon, 02 Mar 2026 11:54:36 +0000</pubDate>
				<category><![CDATA[AWS Lambda]]></category>
		<category><![CDATA[AWS Transform]]></category>
		<category><![CDATA[Best Practices]]></category>
		<category><![CDATA[Technical How-to]]></category>
		<category><![CDATA[modernization]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">007a08276d2dc9748def0a95340230802ad3aac7</guid>

					<description>Introduction Organizations carry a growing burden of technical debt — aging codebases, outdated runtimes, and legacy frameworks that slow innovation, increase security risk, and inflate maintenance costs. Addressing this debt requires tackling a wide range of code transformation challenges: version upgrades, runtime migrations, framework transitions, and language translations, all of which must be repeated across […]</description>
										<content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt; 
&lt;p&gt;Organizations carry a growing burden of technical debt — aging codebases, outdated runtimes, and legacy frameworks that slow innovation, increase security risk, and inflate maintenance costs. Addressing this debt requires tackling a wide range of code transformation challenges: version upgrades, runtime migrations, framework transitions, and language translations, all of which must be repeated across multiple codebases. Today, most organizations perform these tasks manually, consuming &lt;a href="https://www.sciencedirect.com/science/article/abs/pii/S0164121219301335"&gt;20–30% of enterprise software development&lt;/a&gt; effort. Even where automation exists, it’s typically narrow in scope and requires significant upfront investment — leaving most organizations unable to scale transformations effectively and, as a result, unable to meaningfully reduce the technical debt that continues to compound over time.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://aws.amazon.com/transform/custom/"&gt;AWS Transform custom&lt;/a&gt; addresses this gap — an intelligent AI agent that learns organization-specific code transformations, executes them consistently at scale, and improves from developer feedback, without requiring specialized automation expertise. This blog explores how AWS Transform custom tackles one of the most pressing transformation challenges today.&lt;/p&gt; 
&lt;h3&gt;Managing Lambda Runtime Lifecycles&lt;/h3&gt; 
&lt;p&gt;&lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt; follows a runtime deprecation policy that aligns with the end of community long-term support for programming languages, with all&amp;nbsp;current deprecation schedules available in the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html#runtime-support-policy"&gt;AWS Lambda runtime documentation&lt;/a&gt;. As upstream language maintainers deprecate runtime versions – including &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html#runtimes-deprecated"&gt;Python 3.8, Node.js 14,&amp;nbsp;and Java 8 &lt;/a&gt;&amp;nbsp;, organizations face the critical challenge of upgrading hundreds or thousands of Lambda functions before these runtimes reach end-of-life.&lt;/p&gt; 
&lt;p&gt;When a Lambda runtime reaches end of life, functions lose access to security patches and technical support, leaving applications potentially exposed to known vulnerabilities and compliance risks. Performance degrades as optimizations in newer runtimes go unrealized, and technical debt compounds — the longer you wait, the harder and more expensive the migration becomes.&lt;/p&gt; 
&lt;p&gt;For organizations managing hundreds or thousands of Lambda functions across multiple runtimes and languages, the effort is amplified by the scale of coordination required, the manual burden of testing and validation, and the reality that upgrade expertise is often siloed within a handful of engineers. It’s a recurring, high-stakes cycle that pulls teams away from building features.&lt;/p&gt; 
&lt;p&gt;This is exactly the type of repeatable, organization-wide transformation that AWS Transform custom code transformations can automate.&amp;nbsp;The following sections explore how you can use AWS Transform custom to address Python runtime upgrades and demonstrate the automated approach with a practical example.&lt;/p&gt; 
&lt;h2&gt;Sample application&lt;/h2&gt; 
&lt;p&gt;This demonstration uses&amp;nbsp;&lt;a href="https://github.com/aws-samples/sam-python-crud-sample.git"&gt;SAM Python CRUD Sample&lt;/a&gt; — an open-source serverless application built with &lt;a href="https://aws.amazon.com/serverless/sam/"&gt;AWS SAM&lt;/a&gt; that implements a full CRUD API. The application consists of five Lambda functions that create, read, update, list, and delete activity records. The following walkthrough shows how AWS Transform custom automates Python runtime upgrades by migrating these Lambda functions from a deprecated runtime (Python 3.8) to a modern runtime version (Python 3.13).&lt;/p&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;Before beginning the transformation process, verify the following requirements:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html#custom-installation"&gt;AWS Transform CLI&lt;/a&gt; installed and configured in your development environment&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html#custom-authentication"&gt;Authentication&lt;/a&gt; with AWS credentials configured locally and proper IAM permissions to call AWS Transform&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://git-scm.com/install/"&gt;Git&lt;/a&gt; installed for cloning sample repositories&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://github.com/astral-sh/uv"&gt;uv package manager&lt;/a&gt;&amp;nbsp;for Python environments&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;For detailed setup&amp;nbsp;instructions, see the &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html"&gt;Getting Started with AWS Transform custom&lt;/a&gt;&amp;nbsp;guide.&lt;/p&gt; 
&lt;h2&gt;Hands-On Example: Lambda Runtime Upgrade&lt;/h2&gt; 
&lt;h3&gt;Step 1: Prepare the Sample Project&lt;/h3&gt; 
&lt;p&gt;Clone a sample Python Lambda repository to your local environment:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;git clone https://github.com/aws-samples/sam-python-crud-sample.git
cd sam-python-crud-sample&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Verify the initial setup and make sure all the tests pass:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;uv venv --python 3.8 # uv will automatically download Python 3.8 if not already installed
source .venv/bin/activate
uv pip install -r requirements.txt
uv pip install -r requirements_dev.txt
uv pip install "moto[dynamodb]&amp;lt;3"
python -m pytest tests/ -v -o "addopts="&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Step 2: Start the Transformation&lt;/h3&gt; 
&lt;p&gt;AWS Transform custom supports both interactive and non-interactive execution (non-interactive mode for CI/CD and batch execution is covered at the end of this post). Launch the CLI in interactive mode with the &lt;code&gt;-t&lt;/code&gt; flag to trust all tools, which lets you use natural language to invoke and define transformations:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;atx -t&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;-t&lt;/code&gt; flag trusts all tool executions without prompting for confirmation. This is convenient for walkthroughs, but it means the agent can run shell commands automatically. Review the &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-workflows.html"&gt;Trust settings&lt;/a&gt; for details on controlling tool permissions.&lt;/p&gt; 
&lt;p&gt;From here, you can use natural language to list and invoke transformations.&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;&amp;gt;list all the transformations available&lt;/code&gt;&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25061 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/image-1-6.png" alt="AWS Managed Transformations list" width="2300" height="1255"&gt;&lt;/p&gt; 
&lt;p&gt;This lists all available transformations — both AWS-managed and custom. AWS provides built-in transformations for common tasks like language upgrades (Java, Python, Node.js), SDK migrations, and Graviton migration. You can also create your own custom transformations using natural language, docs, and code samples.&lt;/p&gt; 
&lt;p&gt;Invoke the transformation by specifying the transformation name &lt;code&gt;AWS/python-version-upgrade&lt;/code&gt;&amp;nbsp;and project path. The agent will prompt you for additional inputs like target Python version and codebase path during the flow.&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;&amp;gt; Run&amp;nbsp;AWS/python-version-upgrade on my&amp;nbsp;project&lt;/code&gt;&lt;/p&gt; 
&lt;p&gt;&lt;img src="./media/media/image2.png" alt=""&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25060" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/image-2-7.png" alt="Python version upgrade transformation start" width="2332" height="595"&gt;&lt;/p&gt; 
&lt;h3&gt;Step 3: Transformation Planning&lt;/h3&gt; 
&lt;p&gt;Before the planning process starts, you can provide additional guidance if you have any. Say &lt;code&gt;proceed&lt;/code&gt; if you don’t have specific preferences.&lt;/p&gt; 
&lt;p&gt;&lt;img src="./media/media/image3.png" alt=""&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25059" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/image-6-1.png" alt="Python version upgrade transformation pre-planning" width="2287" height="1127"&gt;&lt;/p&gt; 
&lt;p&gt;This starts the planning process, where the agent analyzes the source files, context, and any additional guidance, then generates a comprehensive step-by-step plan detailing:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Runtime version updates (Python 3.8 → 3.13)&lt;/li&gt; 
 &lt;li&gt;Dependency compatibility checks&lt;/li&gt; 
 &lt;li&gt;Code pattern updates for Python 3.13 compatibility&lt;/li&gt; 
 &lt;li&gt;AWS Lambda configuration changes&lt;/li&gt; 
 &lt;li&gt;Infrastructure as Code changes&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The transformation plan is designed to help maintain functionality while leveraging Python 3.13’s improvements.&lt;/p&gt; 
&lt;p&gt;&lt;img src="./media/media/image4.png" alt=""&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25058" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/image-11-4.png" alt="Python version upgrade transformation plan summary" width="2315" height="1192"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25057" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/image-13.png" alt="Python version upgrade transformation plan summary" width="2327" height="875"&gt;&lt;/p&gt; 
&lt;p&gt;You can review the plan and provide feedback, or tell the agent to go ahead and execute it.&lt;/p&gt; 
&lt;h3&gt;Step 4: Transformation Execution and Validation&lt;/h3&gt; 
&lt;p&gt;After reviewing the plan, AWS Transform custom executes the transformation automatically, updating:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Lambda runtime configuration&lt;/li&gt; 
 &lt;li&gt;Python version-specific syntax&lt;/li&gt; 
 &lt;li&gt;Dependency versions for Python 3.13 compatibility&lt;/li&gt; 
 &lt;li&gt;Deprecated function calls&lt;/li&gt; 
 &lt;li&gt;Infrastructure as Code templates&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;At the end of each step, the agent commits the incremental changes to a local git branch. If build or test errors occur, the agent attempts to self-debug and resolve issues.&amp;nbsp;As a user, you can stop the transformation and provide feedback if necessary.&amp;nbsp;Once all the steps are complete, the agent will produce a summary of changes.&lt;/p&gt; 
&lt;p&gt;&lt;img src="./media/media/image6.png" alt=""&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25056" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/image-14.png" alt="Python version upgrade transformation execution completion" width="2300" height="305"&gt;&lt;/p&gt; 
&lt;p&gt;Next, the agent runs a full validation — comparing the executed changes against the plan for any deviations, verifying all exit criteria are met, and running build commands and unit tests to confirm everything passes.&lt;/p&gt; 
&lt;p&gt;Once the validation is complete, the agent asks for feedback. You can comment on the execution results and ask the agent to modify, add, or remove changes as needed.&lt;/p&gt; 
&lt;p&gt;&lt;img src="./media/media/image7.png" alt=""&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25055" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/image-15-1.png" alt="Python version upgrade transformation validation completion" width="2305" height="520"&gt;&lt;/p&gt; 
&lt;h3&gt;Step 5: Verify Changes&lt;/h3&gt; 
&lt;p&gt;Once the validation is complete, the agent summarizes the changes:&lt;/p&gt; 
&lt;p&gt;&lt;img src="./media/media/image8.png" alt=""&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-25054" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/image-16-1.png" alt="Python version upgrade transformation summary of changes" width="2295" height="862"&gt;&lt;/p&gt; 
&lt;p&gt;You can quit the &lt;code&gt;atx&lt;/code&gt; session by issuing &lt;code&gt;/quit&lt;/code&gt; in the terminal.&lt;/p&gt; 
&lt;p&gt;All the changes are committed to a local staging branch. You can inspect them by running the following commands:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;git status
git branch
git diff main &amp;lt;atx-result-staging-...&amp;gt;&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;With these steps, you can upgrade Lambda functions that are running on deprecated runtimes, or nearing end-of-life (EOL), with reduced manual effort.&lt;/p&gt; 
&lt;h2&gt;Non-Interactive mode&lt;/h2&gt; 
&lt;p&gt;You can also run this transformation in non-interactive mode with the following command, supplying all of the required information up front so the agent can run without prompting for user input. This mode is designed for headless execution, CI/CD pipeline integration, and bulk execution where no human intervention is available or desired.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;atx custom def exec -p . -n AWS/python-version-upgrade --configuration "validationCommands=pytest,additionalPlanContext=The target Python version to upgrade to is Python 3.13" -x -t&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Parameter breakdown:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;-n AWS/python-version-upgrade&lt;/code&gt;: Name of the AWS managed Python migration transformation&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;-p .&lt;/code&gt;: Path to the current directory containing your Lambda functions&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;-t&lt;/code&gt;: Trust all tools without prompting&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;-x&lt;/code&gt;: Non-interactive, headless mode&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;--configuration&lt;/code&gt;: Validation commands to run after the transformation, plus additional instructions for the agent. This example configures the agent to use &lt;code&gt;pytest&lt;/code&gt; as the validation command once the transformation is complete, and specifies Python 3.13 as the target version via &lt;code&gt;additionalPlanContext&lt;/code&gt;, which gives the agent extra context while planning the changes.&lt;/li&gt; 
&lt;/ul&gt; 
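&lt;p&gt;For CI/CD usage, you typically want the pipeline step to fail when the transformation fails. The following is a minimal wrapper sketch; the commented &lt;code&gt;atx&lt;/code&gt; command line mirrors the example above, and the wrapper itself only assumes that the command signals failure through a nonzero exit code:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Hypothetical CI wrapper: run a transformation command inside a repository
# and propagate its exit status so the pipeline step fails on error.
set -u

run_transform() {
  local repo_dir="$1"; shift           # first arg: repo path; rest: command to run
  ( cd "$repo_dir" && "$@" )           # run the command from inside the repo
  local status=$?
  if [ "$status" -eq 0 ]; then
    echo "transform succeeded: $repo_dir"
  else
    echo "transform failed ($status): $repo_dir" >&2
  fi
  return "$status"
}

# In a pipeline you would call something like:
# run_transform . atx custom def exec -p . -n AWS/python-version-upgrade \
#   --configuration "validationCommands=pytest,additionalPlanContext=The target Python version to upgrade to is Python 3.13" -x -t
```

&lt;p&gt;Because the helper returns the command’s own exit code, a CI system that treats nonzero exits as failures needs no extra configuration.&lt;/p&gt;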
&lt;p&gt;You can also specify these parameters in a &lt;code&gt;config.json&lt;/code&gt; file and execute the transformation as shown below.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;atx custom def exec --configuration 'file://config.json'&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The &lt;code&gt;config.json&lt;/code&gt; file contains the project repository path, the transformation name, and the build and validation commands to use. Save the snippet below as &lt;code&gt;config.json&lt;/code&gt; in the current directory.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-json"&gt;{
  "codeRepositoryPath": ".",
  "transformationName": "python-version-upgrade",
  "validationCommands": "pytest",
  "additionalPlanContext": "The target Python version to upgrade to is Python 3.13"
}&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Scaling to multiple Lambda function upgrades&lt;/h2&gt; 
&lt;p&gt;Now that you have successfully used AWS Transform custom to upgrade a single Lambda function, you can scale this to hundreds or thousands of functions across your organization. Central engineering teams can create campaigns through the &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html#custom-web-application"&gt;AWS Transform web application&lt;/a&gt; to define the transformation, specify target repositories, and track progress across the organization. For execution at scale, choose the model that fits your needs — both run in &lt;strong&gt;your environment&lt;/strong&gt;, with access to your existing development resources, build systems, and tool chains. You don’t need to move your code anywhere — AWS Transform custom meets you where you are.&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Batch script execution&lt;/strong&gt; — Ideal for teams that want to run transformations directly on developer machines, EC2 instances or existing CI/CD infrastructure. Wrap the AWS Transform custom CLI in a batch processing script that iterates across multiple repositories using a CSV or JSON input file. The script supports both serial and parallel execution modes with configurable job limits, retry mechanisms, and comprehensive logging. Refer to this &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/tree/main/scaled-execution-bash"&gt;GitHub repo&lt;/a&gt; for the sample batch launcher script and execution instructions.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Containerized execution on AWS&lt;/strong&gt; — Best suited for enterprise-scale rollouts where you need managed infrastructure, job orchestration, and centralized monitoring. Run transformations using containers deployed on &lt;a href="https://aws.amazon.com/batch/"&gt;AWS Batch&lt;/a&gt; with &lt;a href="https://aws.amazon.com/fargate/"&gt;AWS Fargate&lt;/a&gt;. This solution provides a REST API for job submission, automatic IAM credential management, and full &lt;a href="https://aws.amazon.com/cloudwatch/"&gt;Amazon CloudWatch&lt;/a&gt; monitoring — all deployable with a single &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt; command. To get started, refer to this &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/tree/main/scaled-execution-containers"&gt;GitHub repo&lt;/a&gt; and &lt;a href="https://aws.amazon.com/blogs/devops/building-a-scalable-code-modernization-solution-with-aws-transform-custom/"&gt;blog&lt;/a&gt;.&lt;/li&gt; 
&lt;/ol&gt; 
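&lt;p&gt;As an illustration of the batch-script model, the sketch below iterates over a plain list of local repository paths and runs the same command in each, recording per-repository success or failure. It is a simplified, hypothetical stand-in for the sample launcher in the GitHub repo above (no parallelism, retries, or logging), and the input format (one repository path per line) is an assumption:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Hypothetical batch launcher: run one command per repository listed in a file.
# Prints "OK <repo>" or "FAIL <repo>" per entry; exits nonzero if any entry failed.
set -u

process_repos() {
  local list_file="$1"; shift          # first arg: file of repo paths; rest: command
  local overall=0 repo
  while IFS= read -r repo; do
    [ -z "$repo" ] && continue         # skip blank lines
    # Detach the command from stdin so it cannot consume the repo list.
    if ( cd "$repo" && "$@" </dev/null ); then
      echo "OK $repo"
    else
      echo "FAIL $repo"
      overall=1
    fi
  done < "$list_file"
  return "$overall"
}

# Example (command line mirrors the non-interactive invocation above):
# process_repos repos.txt atx custom def exec --configuration 'file://config.json' -x -t
```

&lt;p&gt;For real fleets, prefer the sample batch launcher script linked above, which adds parallel execution, retries, and logging on top of this basic loop.&lt;/p&gt;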
&lt;h2&gt;Cleanup&lt;/h2&gt; 
&lt;p&gt;If you followed along with the hands-on example, remove the cloned repository and virtual environment to free up local resources:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;deactivate
cd ..
rm -rf ./sam-python-crud-sample&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;If you created any AWS resources during testing, delete them to avoid ongoing charges.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;Keeping Lambda runtimes current is a recurring operational burden that only grows with scale. What starts as a simple version bump quickly compounds into dependency updates, syntax changes, infrastructure modifications, and extensive testing — multiplied across every function in your fleet.&lt;/p&gt; 
&lt;p&gt;AWS Transform custom turns this into a repeatable, automated workflow. As we demonstrated, upgrading a multi-function Python 3.8 application to Python 3.13 required just a single CLI invocation — the agent can handle planning, code changes, dependency updates, infrastructure configuration, and validation end to end. And with non-interactive mode and the scaled execution options, you can extend this to hundreds of repositories without manual intervention.&lt;/p&gt; 
&lt;p&gt;To get started:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Install the &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html#custom-installation"&gt;AWS Transform CLI&lt;/a&gt; and try the Python upgrade transformation on one of your own projects.&lt;/li&gt; 
 &lt;li&gt;Explore the full list of &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-managed-transformations.html"&gt;AWS managed transformations&lt;/a&gt; available beyond Python runtime upgrades.&lt;/li&gt; 
 &lt;li&gt;Check out the &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples"&gt;scaled execution samples&lt;/a&gt; to plan your organization-wide rollout.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;About the authors&lt;/h3&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/Venu-NewPhoto.png" alt="Venu-author" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Venugopalan Vasudevan&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Venugopalan Vasudevan (Venu) is a Senior Specialist Solutions Architect at AWS, where he leads Agentic AI initiatives focused on AWS Transform. He helps customers adopt and scale AI-powered developer and modernization solutions to accelerate innovation and business outcomes.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/28/gokul.jpg" alt="gokul-author" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Gokul Sarangaraju&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Gokul Sarangaraju is a Senior Solutions Architect at AWS, specializing in code modernization using agentic AI and AWS services. He helps customers adopt AWS technologies, optimize costs and usage, and build scalable, cost-effective data analytics solutions.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Choosing between Amazon ECS Blue/Green Native or AWS CodeDeploy in AWS CDK</title>
		<link>https://aws.amazon.com/blogs/devops/choosing-between-amazon-ecs-blue-green-native-or-aws-codedeploy-in-aws-cdk/</link>
		
		<dc:creator><![CDATA[Franco Abregu]]></dc:creator>
		<pubDate>Wed, 11 Feb 2026 18:59:00 +0000</pubDate>
				<category><![CDATA[Amazon Elastic Container Service]]></category>
		<category><![CDATA[AWS Cloud Development Kit]]></category>
		<category><![CDATA[AWS CodeDeploy]]></category>
		<category><![CDATA[Best Practices]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Amazon ECS]]></category>
		<category><![CDATA[Infrastructure as Code]]></category>
		<guid isPermaLink="false">439dac6a9215eacf0da68fdab3819172ee2aa87f</guid>

					<description>March 2026: This post has been updated to reflect that Amazon ECS now supports canary and linear deployment strategies natively as of October 2025. The recommendation has been updated accordingly to reflect ECS-native as the default choice for new deployments. Blue/green deployments on Amazon Elastic Container Service (Amazon ECS) have long been a go-to pattern […]</description>
										<content:encoded>&lt;blockquote&gt;
 &lt;p&gt;&lt;strong&gt;March 2026:&lt;/strong&gt; This post has been updated to reflect that Amazon ECS now supports canary and linear deployment strategies natively as of October 2025. The recommendation has been updated accordingly to reflect ECS-native as the default choice for new deployments.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;Blue/green deployments on &lt;a href="https://aws.amazon.com/es/ecs/"&gt;Amazon Elastic Container Service (Amazon ECS)&lt;/a&gt; have long been a go-to pattern for shipping zero-downtime deployments. Historically, the recommended approach in the &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS Cloud Development Kit (AWS CDK)&lt;/a&gt; was to wire ECS to &lt;a href="https://aws.amazon.com/codedeploy/"&gt;AWS CodeDeploy&lt;/a&gt; for traffic shifting, lifecycle hooks, and tight integration with AWS CodePipeline.&lt;/p&gt; 
&lt;p&gt;In July 2025, Amazon ECS launched built-in blue/green deployments. This allows you to operate directly within the ECS service, without requiring AWS CodeDeploy. In October 2025, Amazon ECS further enhanced this capability by adding support for canary and linear deployment strategies, achieving feature parity with CodeDeploy.&lt;/p&gt; 
&lt;p&gt;This post explains what changed, how the new ECS-native blue/green model compares to CodeDeploy, and how to decide which path to take in your CDK projects.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25019 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/11/figure1-2.png" alt="Figure1: Amazon ECS blue/green deployment with AWS CodeDeploy" width="781" height="265"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;&lt;em&gt;Figure1: Amazon ECS blue/green deployment with AWS CodeDeploy&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Why blue/green on ECS, and what was launched in July 2025&lt;/h2&gt; 
&lt;p&gt;In a blue/green deployment, two production environments are maintained: blue, the current environment, and green, the new environment. This strategy allows you to validate the new version of your environment before it receives production traffic.&lt;/p&gt; 
&lt;p&gt;The ECS service team saw an opportunity to simplify the deployment process by building lifecycle hooks, bake time, and managed rollback directly into ECS. With this shift, the complexity of coordinating blue/green deployments through CodeDeploy is consolidated into a single service. This consolidation not only simplifies the deployment pipeline but also reduces the number of moving parts, making it easier to maintain and troubleshoot over time.&lt;/p&gt; 
&lt;p&gt;Conceptually, ECS-native blue/green provisions a replacement task set registered to a separate target group (the green target group in Figure 2) behind your Elastic Load Balancing listener. When you approve the cutover, ECS shifts traffic from the blue revision to the green revision using your chosen strategy (all-at-once, canary, or linear), then keeps both revisions during a configurable bake period before retiring blue, or rolls back if alarms or hooks fail.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter wp-image-25022 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/11/figure2-3.png" alt="Figure 2: Amazon ECS Native Blue Green Deployment" width="478" height="277"&gt;&lt;br&gt; &lt;strong&gt;&lt;em&gt;Figure 2: Amazon ECS Native Blue Green Deployment&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Two paths in CDK: ECS-native blue/green vs. CodeDeploy blue/green&lt;/h2&gt; 
&lt;p&gt;With CDK, you now have two ways to achieve blue/green on ECS. One is the ECS-native path that keeps deployment configuration on the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs-readme.html"&gt;CDK ECS module&lt;/a&gt; and its related load balancer resources. You configure lifecycle hooks that invoke AWS Lambda functions at specific deployment stages, you set a bake time, and you optionally use a test listener or Amazon ECS Service Connect header rules to validate traffic to the green revision before production cutover. The CodeDeploy path creates a CodeDeploy application and deployment group bound to your ECS service and Application Load Balancer (ALB), lets you choose canary, linear, or all-at-once policies, and typically plugs into AWS CodePipeline for orchestration.&lt;/p&gt; 
&lt;p&gt;Both paths now offer the same traffic shifting strategies: all-at-once, canary, and linear. This means the choice between them is no longer about feature gaps, but about operational preferences and existing infrastructure.&lt;/p&gt; 
&lt;p&gt;Currently, the AWS CDK includes L2 support for ECS-native blue/green, so you can model these settings directly without custom CloudFormation or escape hatches. If your stack already uses the &lt;code&gt;DeploymentControllerType.CODE_DEPLOY&lt;/code&gt; path, you can continue to do so; migration options exist (outlined later in this post).&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-25032 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/11/figure3-5.png" alt="Figure 3: Amazon CodeDeploy blue/green deployment traffic shift" width="716" height="463"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;&lt;em&gt;Figure 3: Amazon CodeDeploy blue/green deployment traffic shift&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Decision guide: choosing the right path&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Use ECS-native when:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;You want a simpler, service-centric model with fewer moving parts&lt;/li&gt; 
 &lt;li&gt;You prefer managing everything from ECS without additional services&lt;/li&gt; 
 &lt;li&gt;You need support for Service Connect, headless services, or multiple target groups&lt;/li&gt; 
 &lt;li&gt;You want more flexible lifecycle hooks (each Lambda invocation has a 15-minute timeout, but by returning IN_PROGRESS, a single lifecycle stage can run for up to 24 hours, compared to CodeDeploy’s 1-hour total limit)&lt;/li&gt; 
 &lt;li&gt;You’re starting a new project or greenfield deployment&lt;/li&gt; 
 &lt;li&gt;You want better alignment with existing Amazon ECS features (circuit breaker, deployment history)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Use CodeDeploy when:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;You already have complex pipelines in CodePipeline that depend on CodeDeploy&lt;/li&gt; 
 &lt;li&gt;You need to coordinate multi-service deployments across services, regions, and accounts in a single release&lt;/li&gt; 
 &lt;li&gt;You have established governance processes and audit trails around CodeDeploy&lt;/li&gt; 
 &lt;li&gt;You’re migrating from an existing CodeDeploy implementation and there’s no immediate business need to change&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;How to implement ECS-native blue/green in CDK&lt;/h2&gt; 
&lt;p&gt;To utilize ECS-native blue/green in CDK, start with an Amazon ECS service (Fargate or EC2), an Application Load Balancer, and two target groups managed by ECS during deployments. In your service definition, you’ll opt into the blue/green deployment type, set a bake time, and attach lifecycle hooks. Hooks can run Lambda functions at stages such as before scale-up or after production traffic shift, letting you run synthetic tests, warm caches, or gate on external checks. If you’re using Amazon ECS Service Connect, you can route “dark” test traffic to green by sending requests with a specific header during the pre-cutover phase.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;const service = new ecs.FargateService(this, "Service", {
  cluster,
  taskDefinition,
  desiredCount: 3,
  securityGroups: [serviceSG],
  vpcSubnets: {
    subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
  },
  deploymentStrategy: ecs.DeploymentStrategy.BLUE_GREEN,
  bakeTime: Duration.minutes(30),
  propagateTags: ecs.PropagatedTagSource.SERVICE,
  deploymentAlarms: {
    alarmNames: [
      this.stackName + "-Http-500-Blue",
      this.stackName + "-Http-500-Green",
      "Synthetics-Alarm-trivia-game-" + props.stage,
    ],
    behavior: ecs.AlarmBehavior.ROLLBACK_ON_ALARM,
  },
  minHealthyPercent: 100,
  maxHealthyPercent: 200,
});

service.addLifecycleHook(
  new ecs.DeploymentLifecycleLambdaTarget(preTrafficHook, "PreTrafficHook", {
    lifecycleStages: [
      ecs.DeploymentLifecycleStage.POST_TEST_TRAFFIC_SHIFT,
    ],
  })
);&lt;/code&gt;&lt;/pre&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;&lt;em&gt;Code Snipped: Amazon ECS service with AWS Fargate and ECS-native blue/green&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;ECS-native blue/green deployment is now the recommended option for most teams and new projects. With full support for canary and linear deployments alongside all-at-once strategies, it provides zero-downtime cutovers, lifecycle hooks, bake time, and rollback capabilities without requiring you to manage additional services.&lt;/p&gt; 
&lt;p&gt;CodeDeploy remains a valid option for existing customers or those with specific CodePipeline dependencies and established governance processes, but AWS’s direction is to migrate toward ECS-native for new deployments.&lt;/p&gt; 
&lt;p&gt;Bring your ECS deployments to the next level by enabling blue/green deployment with the strategy that best fits your use case. For step-by-step instructions for migrating from CodeDeploy to ECS-native, refer to the &lt;a href="https://aws.amazon.com/blogs/containers/migrating-from-aws-codedeploy-to-amazon-ecs-for-blue-green-deployments/"&gt;official migration guide&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;For more information on the new canary and linear deployment capabilities, see the &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-bluegreen.html"&gt;Amazon ECS documentation on blue/green deployments&lt;/a&gt;.&lt;/p&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/03/foto-AWS-slack.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Franco Abregu&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Franco Abregu is a Sr. Delivery Consultant – DevOps at AWS Professional Services based in Argentina. Franco focuses on transforming customers DevOps culture to improve developer productivity, operations, deployments and process standardization. His expertise includes CI/CD, Infrastructure as Code, software development and organizational adoption of DevOps culture.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/11/renzochr2.jpeg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Chris Renzo&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;is a Sr. Solution Architect within the AWS Defense and Aerospace organization. Outside of work, he enjoys a balance of warm weather and traveling.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Building a scalable code modernization solution with AWS Transform custom</title>
		<link>https://aws.amazon.com/blogs/devops/building-a-scalable-code-modernization-solution-with-aws-transform-custom/</link>
		
		<dc:creator><![CDATA[Dinesh Prabakaran]]></dc:creator>
		<pubDate>Fri, 06 Feb 2026 22:53:29 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AWS Transform]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Batch]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Java]]></category>
		<category><![CDATA[modernization]]></category>
		<category><![CDATA[Serverless]]></category>
		<guid isPermaLink="false">e04d5473b4efe7326adc3f9ff3ded78ebf7d8f12</guid>

					<description>Introduction Software maintenance and modernization is a critical challenge for enterprises managing hundreds or thousands of repositories. Whether upgrading Java versions, migrating to new AWS SDKs, or modernizing frameworks, the scale of transformation work can be overwhelming. AWS Transform custom uses agentic AI to perform large-scale modernization of software, code, libraries, and frameworks to reduce […]</description>
										<content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt; 
&lt;p&gt;Software maintenance and modernization is a critical challenge for enterprises managing hundreds or thousands of repositories. Whether upgrading Java versions, migrating to new AWS SDKs, or modernizing frameworks, the scale of transformation work can be overwhelming. &lt;a href="https://aws.amazon.com/transform/custom/"&gt;AWS Transform custom&lt;/a&gt; uses agentic AI to perform large-scale modernization of software, code, libraries, and frameworks to reduce technical debt. It handles diverse scenarios including language version upgrades, API and service migrations, framework upgrades and migrations, code refactoring, and organization-specific transformations. Through continual learning, the agent improves from every execution and developer feedback, delivering high-quality, repeatable transformations without requiring specialized automation expertise.&lt;/p&gt; 
&lt;p&gt;Organizations need to run transformations using AWS Transform custom concurrently across their entire code estate to meet aggressive modernization timelines and compliance deadlines. Running it at enterprise scale requires a solution to process repositories in parallel, in a controlled remote cloud environment, manage credentials securely, and provide visibility into transformation progress. Today, we’re introducing an open-source solution that brings production-grade scalability, reliability, and monitoring to AWS Transform custom. This infrastructure enables you to run transformations on thousands of repositories in parallel using &lt;a href="https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html"&gt;AWS Batch&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/batch/latest/userguide/fargate.html"&gt;AWS Fargate&lt;/a&gt;, with REST API access for programmatic control and comprehensive &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html"&gt;Amazon CloudWatch&lt;/a&gt; monitoring.&lt;/p&gt; 
&lt;h2&gt;Requirements for Enterprise-Scale Code Modernization&lt;/h2&gt; 
&lt;p&gt;AWS Transform custom provides powerful AI-driven code transformation capabilities through its CLI. To effectively scale transformations across enterprise codebases, organizations need:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Scale:&lt;/strong&gt; Ability to run transformations on 1000+ repositories concurrently rather than one-by-one&lt;br&gt; &lt;strong&gt;Infrastructure&lt;/strong&gt;: Dedicated compute resources for long-running transformations beyond developers’ laptops&lt;br&gt; &lt;strong&gt;API Access&lt;/strong&gt;: REST API for programmatic orchestration and seamless integration with CI/CD pipelines&lt;br&gt; &lt;strong&gt;Monitoring&lt;/strong&gt;: Centralized visibility into transformation progress and status across multiple repositories&lt;br&gt; &lt;strong&gt;Reliability&lt;/strong&gt;: Automatic retries, secure credential management, and built-in fault tolerance&lt;/p&gt; 
&lt;h2&gt;The Solution: Batch Infrastructure with REST API&lt;/h2&gt; 
&lt;p&gt;This solution provides complete, production-ready infrastructure that addresses these challenges:&lt;/p&gt; 
&lt;h3&gt;Core Capabilities&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Scalable Batch Processing&lt;/strong&gt; Run transformations on thousands of repositories in parallel using AWS Batch with Fargate. The default configuration (256 max vCPUs, 2 vCPUs per job) supports up to 128 concurrent jobs, with automatic queuing and resource management. The compute environment scales based on your needs and Fargate service quotas.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;REST API for Programmatic Access&lt;/strong&gt; Seven API endpoints provide complete job lifecycle management, enabling you to submit single jobs or bulk batches of thousands in one request. The API offers real-time status tracking and progress monitoring, with &lt;a href="https://aws.amazon.com/iam/"&gt;AWS Identity and Access Management&lt;/a&gt; (IAM) authentication ensuring secure access to transformation operations.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Multi-Language Container&lt;/strong&gt; The solution includes a container supporting Java (8, 11, 17, 21), Python (3.8-3.13), and Node.js (16-24) with all build tools pre-installed, including &lt;a href="https://maven.apache.org/"&gt;Maven&lt;/a&gt;, &lt;a href="https://gradle.org/"&gt;Gradle&lt;/a&gt;, &lt;a href="https://www.npmjs.com/"&gt;npm&lt;/a&gt;, and &lt;a href="https://yarnpkg.com/"&gt;yarn&lt;/a&gt;. The AWS Transform CLI and &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html"&gt;AWS CLI v2&lt;/a&gt; are bundled in. The container is fully extensible for custom requirements—you can add your own libraries, languages, or tools by customizing the Dockerfile to meet your specific needs.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Enterprise-Grade Reliability&lt;/strong&gt; Automatic IAM credential management eliminates long-lived keys, with credentials auto-refreshing every 45 minutes for jobs up to 12 hours. The system includes automatic retries for transient failures (default: 3 attempts), with configurable timeout and retry settings to match your transformation complexity.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Comprehensive Monitoring&lt;/strong&gt; A CloudWatch dashboard provides job tracking with success and failure rates, trends over time, and API and Lambda health metrics. Real-time log streaming enables you to monitor transformation progress and quickly diagnose issues.&lt;/li&gt; 
&lt;/ul&gt; 
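&lt;p&gt;To make the concurrency arithmetic in the first capability explicit: with 256 maximum vCPUs in the compute environment and 2 vCPUs per job, at most 128 jobs run at once, and further submissions wait in the queue. A minimal sketch of that relationship (the function name is ours, not part of the solution):&lt;/p&gt;

```python
def max_concurrent_jobs(max_vcpus: int, vcpus_per_job: int) -> int:
    # AWS Batch runs as many Fargate jobs as fit under the compute
    # environment's vCPU ceiling; remaining submissions stay queued.
    return max_vcpus // vcpus_per_job


# Default configuration from this solution: 256 max vCPUs, 2 vCPUs per job.
limit = max_concurrent_jobs(256, 2)
```

Raising `maxvCpus` on the compute environment (subject to your Fargate service quotas) raises this ceiling proportionally.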
&lt;h2&gt;Architecture&lt;/h2&gt; 
&lt;p&gt;The solution uses a serverless architecture built on AWS managed services:&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/03/aws-transform-architecture.png" alt="AWS Transform custom Batch solution architecture"&gt;&lt;br&gt; &lt;em&gt;AWS Transform custom Batch solution architecture&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Key Components:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;API Gateway:&lt;/strong&gt; REST API with IAM authentication&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Lambda Functions:&lt;/strong&gt; Job orchestration, status tracking, bulk submission&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;AWS Batch:&lt;/strong&gt; Job queue and compute environment management&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Fargate:&lt;/strong&gt; Serverless container execution (no EC2 to manage)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;S3:&lt;/strong&gt; Source code input and transformation results output&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;CloudWatch:&lt;/strong&gt; Logs, metrics, and operational dashboard&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Getting Started&lt;/h2&gt; 
&lt;h3&gt;Prerequisites&lt;/h3&gt; 
&lt;p&gt;Before deploying, ensure you have:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;AWS Account&lt;/strong&gt; with appropriate IAM permissions (ECR, S3, IAM, Batch, Lambda, API Gateway, CloudWatch)&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html"&gt;&lt;strong&gt;AWS CLI v2&lt;/strong&gt;&lt;/a&gt; configured with credentials or AWS SSO login&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;a href="https://www.docker.com/get-started/"&gt;Docker&lt;/a&gt;&lt;/strong&gt; installed and running&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;a href="https://git-scm.com/install/"&gt;Git&lt;/a&gt;&lt;/strong&gt; for cloning the repository&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;a href="https://nodejs.org/"&gt;Node.js 18+&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install"&gt;AWS CDK&lt;/a&gt;&lt;/strong&gt; (for CDK deployment)&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://www.python.org/downloads/"&gt;&lt;strong&gt;Python3&lt;/strong&gt;&lt;/a&gt;for testing the APIs&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Deployment Options&lt;/h3&gt; 
&lt;h4&gt;Option 1: CDK Deployment (Recommended)&lt;/h4&gt; 
&lt;p&gt;&lt;strong&gt;Step 1: Clone the Repository&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;git clone https://github.com/aws-samples/aws-transform-custom-samples.git

cd aws-transform-custom-samples/scaled-execution-containers&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Step 2: Set Environment Variables&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export CDK_DEFAULT_ACCOUNT=$AWS_ACCOUNT_ID
export CDK_DEFAULT_REGION=us-east-1&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Step 3: Verify prerequisites&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;This checks that &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; is installed and running, AWS CLI v2 is configured with credentials, Git is available, and your AWS account has the required VPC and public subnets.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;cd deployment
chmod +x *.sh
./check-prereqs.sh
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Step 4: Set up IAM Permissions (Optional, but recommended)&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Generate a least-privilege IAM policy instead of using broad permissions:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;./generate-custom-policy.sh&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This creates &lt;code&gt;iam-custom-policy.json&lt;/code&gt; with minimum permissions scoped to your specific resources.&lt;/p&gt; 
&lt;p&gt;Create and attach the policy:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws iam create-policy \
  --policy-name ATXCustomDeploymentPolicy \
  --policy-document file://iam-custom-policy.json
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws iam attach-user-policy \
  --user-name YOUR_USERNAME \
  --policy-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):policy/ATXCustomDeploymentPolicy
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you have administrator access, you can skip this step and proceed directly to deployment.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 5: Deploy with CDK (One Command Does Everything!)&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;cd ../cdk
chmod +x *.sh
./deploy.sh&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Time:&lt;/strong&gt; 20-25 minutes (all resources)&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;What CDK Does Automatically:&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Builds Docker image from Dockerfile&lt;/li&gt; 
 &lt;li&gt;Pushes image to ECR&lt;/li&gt; 
 &lt;li&gt;Creates all AWS resources&lt;/li&gt; 
 &lt;li&gt;Configures everything&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;What Gets Deployed:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;ECR repository with Docker image&lt;/li&gt; 
 &lt;li&gt;S3 buckets (output, source)&lt;/li&gt; 
 &lt;li&gt;IAM roles with least-privilege&lt;/li&gt; 
 &lt;li&gt;AWS Batch infrastructure (Fargate)&lt;/li&gt; 
 &lt;li&gt;7 Lambda functions&lt;/li&gt; 
 &lt;li&gt;API Gateway REST API&lt;/li&gt; 
 &lt;li&gt;CloudWatch logs and dashboard&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;See &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/blob/main/scaled-execution-containers/cdk/README.md"&gt;cdk/README.md&lt;/a&gt; for detailed instructions and configuration options.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 6: Get Your API Endpoint&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;After deployment completes, retrieve the API endpoint URL:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;export API_ENDPOINT=$(aws cloudformation describe-stacks \
  --stack-name AtxApiStack \
  --query 'Stacks[0].Outputs[?OutputKey==`ApiEndpoint`].OutputValue' \
  --output text)

echo "API Endpoint: $API_ENDPOINT"&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This endpoint is used in all subsequent API calls.&lt;/p&gt; 
&lt;h4&gt;Option 2: Bash Scripts (Alternative)&lt;/h4&gt; 
&lt;p&gt;If you prefer manual control over each deployment step or need to customize individual components, use the bash script deployment. See &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/blob/main/scaled-execution-containers/deployment/README.md"&gt;deployment/README.md&lt;/a&gt; for the complete 3-step process with detailed explanations of what each script deploys.&lt;/p&gt; 
&lt;h2&gt;Using the Solution&lt;/h2&gt; 
&lt;h3&gt;Single Job Submission&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Quick test:&lt;/strong&gt; Run &lt;code&gt;cd ../test &amp;amp;&amp;amp; ./test-apis.sh&lt;/code&gt; to validate all API endpoints (MCP, transformations, bulk jobs, campaigns).&lt;/p&gt; 
&lt;p&gt;Submit a Python version upgrade transformation:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;cd ..
python3 utilities/invoke-api.py \
  --endpoint "$API_ENDPOINT" \
  --path "/jobs" \
  --data '{
    "source": "https://github.com/venuvasu/todoapilambda",
    "command": "atx custom def exec -n AWS/python-version-upgrade -p /source/todoapilambda -c noop --configuration \"validationCommands=pytest,additionalPlanContext=The target Python version to upgrade to is Python 3.13. Python 3.13 is already installed at /usr/bin/python3.13\" -x -t"
  }'
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This API call triggers a Python version upgrade transformation on the &lt;code&gt;todoapilambda&lt;/code&gt; public Git repository. The transformation uses the &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-aws-customs.html"&gt;AWS Managed transformation&lt;/a&gt; to upgrade from the current Python version to Python 3.13. The &lt;code&gt;configuration&lt;/code&gt; parameter specifies an additional validation command to run, plus plan context giving the location of the Python 3.13 installation in the container and the target version. The &lt;code&gt;-x&lt;/code&gt; flag runs the transformation in non-interactive mode, and the &lt;code&gt;-t&lt;/code&gt; flag trusts all tools for this transformation.&lt;/p&gt; 
&lt;p&gt;The API returns a job ID for tracking. Job names are auto-generated from the source repository and transformation type.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;See &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/blob/main/scaled-execution-containers/api/README.md"&gt;api/README.md&lt;/a&gt; for complete API documentation with examples for Java, Node.js, and other transformations.&lt;/strong&gt;&lt;/p&gt; 
&lt;h3&gt;Bulk Job Submission&lt;/h3&gt; 
&lt;p&gt;Transform multiple repositories in a single API call:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;python3 utilities/invoke-api.py \
  --endpoint "$API_ENDPOINT" \
  --path "/jobs/batch" \
  --data '{
    "batchName": "codebase-analysis-2025",
    "jobs": [
      {"source": "https://github.com/spring-projects/spring-petclinic", "command": "atx custom def exec -n AWS/early-access-comprehensive-codebase-analysis -p /source/spring-petclinic -x -t"},
      {"source": "https://github.com/venuvasu/todoapilambda", "command": "atx custom def exec -n AWS/early-access-comprehensive-codebase-analysis -p /source/todoapilambda -x -t"},
      {"source": "https://github.com/venuvasu/toapilambdanode16", "command": "atx custom def exec -n AWS/early-access-comprehensive-codebase-analysis -p /source/toapilambdanode16 -x -t"}
    ]
  }'
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This API call triggers a deep static analysis of the codebase to generate hierarchical, cross-referenced documentation for three open source repositories in parallel. The transformation uses the &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-aws-customs.html"&gt;AWS Managed transformation&lt;/a&gt; to generate behavioral analysis, architectural documentation, and business intelligence extraction to create a comprehensive knowledge base organized for maximum usability and navigation.&lt;/p&gt; 
&lt;p&gt;The API submits these jobs asynchronously: it returns a batch ID as soon as the jobs are submitted to AWS Batch. You can then monitor progress as described below.&lt;/p&gt; 
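&lt;p&gt;For larger batches you would typically generate the &lt;code&gt;jobs&lt;/code&gt; array rather than write it by hand. A sketch of that generation step (an illustrative helper, not one of the repository’s utilities), deriving each repository’s mount path under &lt;code&gt;/source/&lt;/code&gt; from its URL as the examples above do:&lt;/p&gt;

```python
import json
from urllib.parse import urlparse


def build_bulk_payload(batch_name, repo_urls,
                       definition="AWS/early-access-comprehensive-codebase-analysis"):
    """Build the /jobs/batch request body for a list of public repo URLs.

    Illustrative helper: mirrors the hand-written example above by
    mounting each repository at /source/{repo-name}.
    """
    jobs = []
    for url in repo_urls:
        # Repository name is the last path segment of the Git URL.
        repo_name = urlparse(url).path.rstrip("/").split("/")[-1]
        jobs.append({
            "source": url,
            "command": (f"atx custom def exec -n {definition} "
                        f"-p /source/{repo_name} -x -t"),
        })
    return json.dumps({"batchName": batch_name, "jobs": jobs})
```

The resulting string can be passed directly as the `--data` argument of `utilities/invoke-api.py`.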
&lt;p&gt;&lt;strong&gt;See &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/blob/main/scaled-execution-containers/api/README.md"&gt;api/README.md&lt;/a&gt; for status checking, MCP configuration, and other API endpoints.&lt;/strong&gt;&lt;/p&gt; 
&lt;h3&gt;Monitoring Progress&lt;/h3&gt; 
&lt;p&gt;Check batch status:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;python3 utilities/invoke-api.py \
  --endpoint "$API_ENDPOINT" \
  --method GET \
  --path "/jobs/batch/BATCH_ID"&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Response shows real-time progress:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-json"&gt;{
  "status": "RUNNING",
  "progress": 45.5,
  "totalJobs": 1000,
  "statusCounts": {
    "RUNNING": 195,
    "SUCCEEDED": 432,
    "FAILED": 23
  }
}&lt;/code&gt;&lt;/pre&gt; 
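&lt;p&gt;The &lt;code&gt;progress&lt;/code&gt; figure in this sample is consistent with counting only terminal states: (432 succeeded + 23 failed) / 1000 jobs = 45.5%. A sketch of that calculation (our reading of the response fields, not code from the solution):&lt;/p&gt;

```python
def batch_progress(status_counts: dict, total_jobs: int) -> float:
    # Jobs still RUNNING (or queued) don't count toward progress;
    # only terminal states (SUCCEEDED, FAILED) do.
    done = status_counts.get("SUCCEEDED", 0) + status_counts.get("FAILED", 0)
    return round(100.0 * done / total_jobs, 1)
```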
&lt;h3&gt;Viewing Results&lt;/h3&gt; 
&lt;p&gt;After a job completes, the results are stored in your S3 output bucket.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;S3 Output Structure:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Results are organized by job name and conversation ID:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-text"&gt;s3://atx-custom-output-{account-id}/
└── transformations/
    └── {job-name}/                           # e.g., guava-early-access-comprehensive-codebase-analysis
        └── {timestamp}{conversation-id}/     # e.g., 20251227_051626_8f344f5f
            ├── code/                         # Full source code + transformed changes
            └── logs/                         # Execution logs and artifacts
                └── custom/
                    └── {timestamp}{conversation-id}/
                        └── artifacts/
                            └── validation_summary.md&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Validation Summary:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;AWS Transform CLI generates a validation summary showing all changes made:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-taxt"&gt;s3://atx-custom-output-{account-id}/transformations/{job-name}/{timestamp}{conversation-id}/logs/custom/{timestamp}{conversation-id}/artifacts/validation_summary.md&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
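&lt;p&gt;Because the run directory name repeats inside &lt;code&gt;logs/custom/&lt;/code&gt;, assembling this key by hand is error-prone. A small sketch of the key layout shown above (hypothetical helper, ours):&lt;/p&gt;

```python
def validation_summary_key(job_name: str, run_id: str) -> str:
    """S3 key of a run's validation_summary.md.

    run_id is the '{timestamp}{conversation-id}' directory name,
    e.g. '20251227_051626_8f344f5f'; note it appears twice in the key.
    """
    return (f"transformations/{job_name}/{run_id}/"
            f"logs/custom/{run_id}/artifacts/validation_summary.md")
```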
&lt;p&gt;This file contains:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Summary of all code changes&lt;/li&gt; 
 &lt;li&gt;Files modified, added, or deleted&lt;/li&gt; 
 &lt;li&gt;Validation results&lt;/li&gt; 
 &lt;li&gt;Transformation statistics&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Download Results:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;# Download all results for a specific job
aws s3 sync s3://atx-custom-output-{account-id}/transformations/{job-name}/{timestamp}{conversation-id}/ ./local-results/

# Download just the validation summary
aws s3 cp s3://atx-custom-output-{account-id}/transformations/{job-name}/{timestamp}{conversation-id}/logs/custom/{timestamp}{conversation-id}/artifacts/validation_summary.md ./

# Download transformed code only
aws s3 sync s3://atx-custom-output-{account-id}/transformations/{job-name}/{timestamp}{conversation-id}/code/ ./transformed-code/
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Monitoring and Observability&lt;/h3&gt; 
&lt;p&gt;The solution includes a CloudWatch dashboard with operational metrics:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Job Tracking:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Completion rate with hourly trends (completed vs failed)&lt;/li&gt; 
 &lt;li&gt;Recent jobs table showing job name, timestamp, last message, and log stream&lt;/li&gt; 
 &lt;li&gt;Real-time visibility into job execution&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/06/job.png" alt="CloudWatch Dashboard screenshot for Job tracking"&gt;&lt;br&gt; &lt;em&gt;CloudWatch Dashboard screenshot for Job tracking&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;API and Lambda Health:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;API Gateway request counts and error rates&lt;/li&gt; 
 &lt;li&gt;Lambda invocation metrics per function&lt;/li&gt; 
 &lt;li&gt;Performance monitoring (duration by function)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/02/06/apilambda.png" alt="CloudWatch Dashboard screenshot for API and Lambda Health"&gt;&lt;br&gt; &lt;em&gt;CloudWatch Dashboard screenshot for API and Lambda Health&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;CloudWatch Logs:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;All logs are centralized in CloudWatch Logs (&lt;code&gt;/aws/batch/atx-transform&lt;/code&gt;) with real-time streaming.&lt;/p&gt; 
&lt;p&gt;View logs via AWS CLI:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws logs tail /aws/batch/atx-transform --follow --region us-east-1&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Or use the included utility:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;python3 utilities/tail-logs.py JOB_ID --region us-east-1&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;View in AWS Console: CloudWatch → Log Groups → &lt;code&gt;/aws/batch/atx-transform&lt;/code&gt;&lt;/p&gt; 
&lt;h2&gt;Model Context Protocol (MCP) Integration&lt;/h2&gt; 
&lt;p&gt;AWS Transform custom supports Model Context Protocol (MCP) servers to extend the AI agent with additional tools. Configure MCP servers via API:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;python3 utilities/invoke-api.py \
  --endpoint "$API_ENDPOINT" \
  --path "/mcp-config" \
  --data '{
    "mcpConfig": {
      "mcpServers": {
        "github": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"]},
        "fetch": {"command": "uvx", "args": ["mcp-server-fetch"]}
      }
    }
  }'
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The configuration is stored in S3 and automatically available to all transformations. Test with &lt;code&gt;atx mcp tools&lt;/code&gt; to list configured servers.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;See &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/blob/main/scaled-execution-containers/api/README.md"&gt;api/README.md&lt;/a&gt; for status checking, MCP configuration, and other API endpoints.&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Customization for Private Repositories&lt;/h2&gt; 
&lt;p&gt;You may need to access private Git repositories or artifact registries during transformations. Extend the base container to add the required credentials:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Two approaches:&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;AWS Secrets Manager (RECOMMENDED)&lt;/strong&gt; – Credentials fetched at runtime, never stored in image&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Hardcode in Dockerfile (NOT RECOMMENDED)&lt;/strong&gt; – For testing only&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Uncomment placeholders in &lt;code&gt;container/entrypoint.sh&lt;/code&gt; (Secrets Manager) or &lt;code&gt;container/Dockerfile&lt;/code&gt; (hardcoded)&lt;/li&gt; 
 &lt;li&gt;Redeploy container (see below)&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;See &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/blob/main/scaled-execution-containers/container/README.md"&gt;container/README.md&lt;/a&gt; for complete setup instructions, examples, and security best practices.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Redeploying after customization:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;If using CDK:&lt;/em&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;cd cdk &amp;amp;&amp;amp; ./deploy.sh&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;CDK automatically detects Dockerfile changes and rebuilds. If changes aren’t detected, force rebuild:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;cd cdk &amp;amp;&amp;amp; ./deploy.sh —force&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;em&gt;If using bash scripts:&lt;/em&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;cd deployment
./1-build-and-push.sh --rebuild
./2-deploy-infrastructure.sh&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The infrastructure will use your custom container with private repository access. You can also customize the container to add support for additional language versions or entirely new languages based on your specific requirements.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;See &lt;a href="https://github.com/aws-samples/aws-transform-custom-samples/blob/main/scaled-execution-containers/container/README.md"&gt;container/README.md&lt;/a&gt; for complete examples.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; For automated PR creation and pushing changes back to remote repositories after transformation, you have two options: (1) extend &lt;code&gt;container/entrypoint.sh&lt;/code&gt; with git commands using your private credentials (see commented placeholder in the script), or (2) use a custom Transformation definition with MCP configured to connect to GitHub/GitLab for more sophisticated PR workflows.&lt;/p&gt; 
&lt;h2&gt;Campaigns&lt;/h2&gt; 
&lt;p&gt;Central platform teams can create campaigns through the AWS Transform web interface to manage enterprise-wide migration and modernization projects. For instance, to upgrade all repositories from Java 8 to Java 21, teams create a campaign with the Java upgrade transformation definition and target repository list. As developers execute transformations, repositories automatically register with the campaign, enabling you to track and monitor progress across your organization.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Creating a Campaign&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-setup.html"&gt;Setup Users&lt;/a&gt; and Login to &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html#custom-web-application"&gt;AWS Transform web application&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;Create a Workspace and Create a Job&lt;/li&gt; 
 &lt;li&gt;In the chat, specify the type of job. For example, “I would like comprehensive code analysis on multiple repos”&lt;/li&gt; 
 &lt;li&gt;Based on your request, AWS Transform will display the list of transformations that match the criteria, in this case “&lt;strong&gt;AWS/early-access-comprehensive-codebase-analysis&lt;/strong&gt; (Early Access)”&lt;/li&gt; 
 &lt;li&gt;Once you confirm the transformation, AWS Transform creates a campaign and a command to execute the transformation. Copy that command and execute it via the API as described below, replacing the repository details.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;atx custom def exec \
--code-repository-path &amp;lt;path-to-repo&amp;gt; \
--non-interactive \
--trust-all-tools \
--campaign 0d0c7e9f-5cb2-4569-8c81-7878def8e49e \
--repo-name &amp;lt;repo-name&amp;gt; \
--add-repo
&lt;/code&gt;&lt;/pre&gt; 
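&lt;p&gt;When scripting campaign submissions across many repositories, this long-form command can be assembled per repository. A sketch (the helper name is ours; the flags are those shown above):&lt;/p&gt;

```python
def campaign_exec_command(repo_path: str, repo_name: str, campaign_id: str) -> str:
    # Long-form flags as shown in the campaign command above;
    # --add-repo registers the repository with the campaign on execution.
    return (
        "atx custom def exec "
        f"--code-repository-path {repo_path} "
        "--non-interactive --trust-all-tools "
        f"--campaign {campaign_id} "
        f"--repo-name {repo_name} --add-repo"
    )
```

The returned string can be used as the `command` field of a `/jobs` or `/jobs/batch` submission.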
&lt;p&gt;&lt;strong&gt;Executing the Transformation in a Campaign&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;python3 utilities/invoke-api.py \
  --endpoint "$API_ENDPOINT" \
  --path "/jobs" \
  --data '{
    "source": "https://github.com/spring-projects/spring-petclinic",
    "command": "atx custom def exec --code-repository-path /source/spring-petclinic --non-interactive --trust-all-tools --campaign 0d0c7e9f-5cb2-4569-8c81-7878def8e49e --repo-name spring-petclinic --add-repo"
  }'
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Once this transformation job succeeds, you can view the results and dashboard in the web application as well.&lt;/p&gt; 
&lt;h2&gt;Cleanup&lt;/h2&gt; 
&lt;p&gt;To remove all deployed resources:&lt;/p&gt; 
&lt;h3&gt;CDK Cleanup (Recommended)&lt;/h3&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;cd cdk ./destroy.sh&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Bash Scripts Cleanup (Alternative)&lt;/h3&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;cd deployment &amp;amp;&amp;amp; ./cleanup.sh&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This script deletes:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;AWS Batch resources (compute environment, job queue, job definitions)&lt;/li&gt; 
 &lt;li&gt;Lambda functions and API Gateway&lt;/li&gt; 
 &lt;li&gt;IAM roles&lt;/li&gt; 
 &lt;li&gt;S3 buckets (after emptying)&lt;/li&gt; 
 &lt;li&gt;CloudWatch logs and dashboard&lt;/li&gt; 
 &lt;li&gt;ECR repository&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;Enterprise software modernization requires infrastructure that can operate at scale with reliability and observability. This solution provides a production-ready platform for running AWS Transform custom transformations on thousands of repositories concurrently.&lt;/p&gt; 
&lt;p&gt;By combining AWS Batch’s scalability, Fargate’s serverless compute, and a REST API for programmatic access, you can:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Accelerate modernization initiatives&lt;/li&gt; 
 &lt;li&gt;Reduce manual effort and human error&lt;/li&gt; 
 &lt;li&gt;Gain visibility into transformation progress&lt;/li&gt; 
 &lt;li&gt;Integrate with existing DevOps workflows&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The &lt;a href="https://github.com/aws-samples/sample-aws-transform-custom-container"&gt;code repository&lt;/a&gt; is open-source, fully automated, and ready for you to deploy in your AWS account today.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Get started today with&lt;/strong&gt; &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom.html"&gt;&lt;strong&gt;AWS Transform custom&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;h3&gt;About the authors&lt;/h3&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2024/05/08/vasudeve-150x150.png" alt="Profile image for Venugopalan Vasudevan" width="142" height="142"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Venugopalan Vasudevan&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Venugopalan Vasudevan (Venu) is a Senior Specialist Solutions Architect at AWS, where he leads Generative AI initiatives focused on Amazon Q Developer, Kiro, and AWS Transform. He helps customers adopt and scale AI-powered developer and modernization solutions to accelerate innovation and business outcomes.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/04/08/dbalaaji.jpeg" alt="Profile image for Dinesh Balaaji Prabakaran" width="142" height="142"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Dinesh Balaaji Prabakaran&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Dinesh is a Enterprise Support Lead at AWS who specializes in supporting Independent Software Vendors (ISVs) on their cloud journey. With expertise in AWS Generative AI Services, he helps customers leverage Amazon Q Developer, Kiro, and AWS Transform to accelerate application development and modernization through AI-powered assistance.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2024/04/09/author1.jpg" alt="Profile image for Brent Everman" width="142" height="142"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Brent Everman&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Brent Everman is a Senior Technical Account Manager with AWS, based out of Pittsburgh. He has over 17 years of experience working with enterprise and startup customers. He is passionate about improving the software development experience and specializes in the AWS Next Generation Developer Experience services.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>AWS Transform custom: AI-driven Java modernization to reduce tech debt</title>
		<link>https://aws.amazon.com/blogs/devops/aws-transform-custom-ai-driven-java-modernization-to-reduce-tech-debt/</link>
		
		<dc:creator><![CDATA[Dinesh Prabakaran]]></dc:creator>
		<pubDate>Thu, 05 Feb 2026 03:00:42 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AWS Transform]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Java]]></category>
		<category><![CDATA[modernization]]></category>
		<guid isPermaLink="false">7da96452a2a428dcbd7a3b5ad65ce834cde6f58e</guid>

					<description>In today’s rapidly evolving software landscape, maintaining and modernizing Java applications is a critical challenge for many organizations. As new Java versions are released and best practices evolve, the need for efficient code transformation becomes increasingly important. Organizations today face significant challenges when modernizing their Java applications. Legacy codebases often contain outdated patterns, deprecated APIs, […]</description>
										<content:encoded>&lt;p&gt;In today’s rapidly evolving software landscape, maintaining and modernizing Java applications is a critical challenge for many organizations. As new Java versions are released and best practices evolve, the need for efficient code transformation becomes increasingly important. Organizations today face significant challenges when modernizing their Java applications. Legacy codebases often contain outdated patterns, deprecated APIs, and inefficient implementations that hinder performance and maintainability. Traditional manual refactoring approaches are time-consuming, error-prone, and difficult to scale across large codebases. In addition, as developers spend more time on new development and deployment, the volume of technical debt continues to rise, requiring transformation of legacy code at-scale.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://aws.amazon.com/transform/custom/"&gt;AWS Transform custom&lt;/a&gt; addresses these challenges through intelligent automation, providing &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-aws-customs.html"&gt;AWS-managed transformations&lt;/a&gt; – standardized transformation packages for common scenarios like Java version upgrades. These transformations enable teams to achieve quick wins through standardized, tested transformation patterns that can be executed at scale, bringing significant time and cost savings to customers. In addition, customers can create custom &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-workflows.html#custom-create-custom-transformations"&gt;user defined transformations&lt;/a&gt; to address technical debt with code transformations across languages, frameworks, and more.&lt;/p&gt; 
&lt;p&gt;This post explores how to leverage AWS Transform custom’s out-of-the-box transformation for Java upgrades. By the end of this post, you’ll understand how to use these standardized transformations to modernize your Java applications efficiently while maintaining full control over the transformation process.&lt;/p&gt; 
&lt;h2&gt;Introduction to AWS Transform custom&lt;/h2&gt; 
&lt;p&gt;AWS Transform custom uses agentic AI to automate large-scale code modernization, handling language version upgrades, API migrations, framework updates, and organization-specific transformations. Through continual learning, the agent improves from every execution and developer feedback, delivering high-quality, repeatable transformations without requiring specialized automation expertise.&lt;/p&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;Before starting your Java modernization journey with AWS Transform custom, ensure you have the necessary development environment, build tools, and AWS Transform custom command line interface (CLI) installed. For detailed prerequisites and setup instructions, refer to the &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html#custom-prerequisites"&gt;AWS Transform custom Prerequisites Guide&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Understanding the Modernization Scenario&lt;/h2&gt; 
&lt;p&gt;We’ll demonstrate AWS Transform custom using a &lt;a href="https://github.com/aws-samples/aws-appconfig-java-sample/tree/aws-appconfig-java-sample-gradle"&gt;Movie Service application&lt;/a&gt; – a Spring Boot REST API built on Java 8 with Gradle. This represents a typical enterprise modernization challenge with legacy dependencies, outdated patterns, and technical debt.&lt;/p&gt; 
&lt;h2&gt;Leveraging AWS-Managed Transformations&lt;/h2&gt; 
&lt;p&gt;AWS Transform custom focuses on leveraging AWS-managed transformations designed for common modernization tasks like Java version upgrades.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;AWS-managed transformations&lt;/strong&gt; are pre-built, AWS-vetted transformations for common use cases that are ready to use without any additional setup. These transformations provide immediate value with minimal configuration, making them ideal for standard upgrade scenarios.&lt;/p&gt; 
&lt;h3&gt;Understanding AWS Transform custom CLI Capabilities&lt;/h3&gt; 
&lt;p&gt;AWS Transform custom provides a comprehensive command-line interface that enables both interactive and automated transformation workflows:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;atx --version                    # Display ATX version
atx --help                       # Display general help
atx custom def list              # List transformation packages
atx                              # Start interactive conversation
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;For detailed information on all available commands, refer to the &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-command-reference.html"&gt;AWS Transform custom command reference&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/atx_custom_java_fig1.png" alt="Screenshot of AWS Transform custom interactive mode."&gt;&lt;br&gt; &lt;em&gt;Figure 1:AWS Transform custom interactive mode&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Discovering Available Transformations&lt;/h3&gt; 
&lt;p&gt;Use &lt;code&gt;atx custom def list&lt;/code&gt; to view all available transformations, including AWS-managed transformations and custom-defined (user-defined) transformations created by your organization. Key AWS-managed transformations include Java/Python/Node.js version upgrades and AWS SDK migrations.&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/atx_custom_java_fig2.png" alt="Screenshot of AWS Transform interface displaying categorized lists of AWS Managed transformations and user-defined custom
transformations."&gt;&lt;br&gt; &lt;em&gt;Figure 2: AWS Transform custom lists available AWS Managed and custom-defined (user-defined) transformations&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Applying AWS-Managed Transformations&lt;/h3&gt; 
&lt;p&gt;Before applying any transformations, ensure your project is initialized with Git and that the build and all test cases pass on Java 8. (Tests can be skipped when necessary by using the appropriate build command option and instructing the agent accordingly.) For Gradle projects, verify that &lt;code&gt;./gradlew build&lt;/code&gt; executes successfully.&lt;/p&gt; 
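&lt;p&gt;As a minimal sketch of this baseline check (the wrapper and JDK paths are your project’s own):&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;java -version          # should report 1.8.x before the upgrade
git status             # confirm the repository is initialized and clean
./gradlew clean build  # the baseline build and tests must pass on Java 8
&lt;/code&gt;&lt;/pre&gt; 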
&lt;p&gt;The transformation process follows a structured approach:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Ensure Clean Git State:&lt;/strong&gt;&lt;/li&gt; 
&lt;/ol&gt; 
&lt;pre&gt;&lt;code&gt;git status
git add .
git commit -m "Baseline before Java 21 transformation"
&lt;/code&gt;&lt;/pre&gt; 
&lt;ol start="2"&gt; 
 &lt;li&gt;&lt;strong&gt;Apply Transformation:&lt;/strong&gt; You can apply transformations using either interactive or direct command modes. In this blog, we will use the interactive mode to walk through the process step-by-step:&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;Interactive Mode (used in this blog):&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;First, create a &lt;code&gt;config.json&lt;/code&gt; file with your transformation configuration:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;{
  "codeRepositoryPath": "/path/to/your/aws-appconfig-java-sample-gradle",
  "transformationName": "AWS/java-version-upgrade",
  "buildCommand": "./gradlew clean build",
  "validationCommands": "build and validate using \"./gradlew clean build\" after transformation to test with java 21",
  "additionalPlanContext": "This is a Java 8 to 21 transformation of a Gradle app; also include all dependency migrations. Use java path /path/to/your/java-8/bin/java and /path/to/your/java-21/bin/java when building before and after transformation. We are using gradle wrapper gradlew, update it if needed for java 21 upgrade. Check for deprecated methods and dependencies and update them."
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Update the &lt;code&gt;codeRepositoryPath&lt;/code&gt; in your config.json to point to your local project directory and update the Java path in &lt;code&gt;additionalPlanContext&lt;/code&gt; to match your Java 8 and 21 installation. For more details about configuration files and their parameters, refer to &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-workflows.html"&gt;Using Configuration Files&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;Then execute the transformation in interactive mode:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;# Java version upgrade with Gradle, using config.json (interactive mode)
atx custom def exec -t --configuration file://config.json
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/atx_custom_java_fig3.png" alt="Screenshot of AWS Transform interface displaying categorized lists of AWS Managed transformations and user-defined custom transformations."&gt;&lt;br&gt; &lt;em&gt;Figure 3: AWS Transform custom execute a gradle transformation in interactive mode with transformation configuration supplied from config.json&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Direct Command Mode (alternative approach):&lt;/strong&gt; Use this mode for automated CI/CD pipelines or when you want to execute transformations without interactive prompts.&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;atx custom def exec -x -t --configuration "file://config.json"
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Parameter explanation:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;-t&lt;/code&gt;: Enables test mode for validation during execution&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;-x&lt;/code&gt;: Executes the transformation automatically without interactive prompts (direct mode only)&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;--configuration&lt;/code&gt;: Specifies the configuration file path, using the &lt;code&gt;file://&lt;/code&gt; prefix&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;em&gt;Executing the AWS-managed Java version upgrade transformation for a Gradle project using the ATX CLI&lt;/em&gt;&lt;/p&gt; 
&lt;ol start="3"&gt; 
 &lt;li&gt;&lt;strong&gt;Review Transformation Plan:&lt;/strong&gt; AWS Transform custom analyzes your project based on the configuration provided in &lt;code&gt;config.json&lt;/code&gt; and generates a comprehensive transformation plan. This plan details all proposed changes, including:&lt;/li&gt; 
&lt;/ol&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Java version updates&lt;/strong&gt;: Migration from Java 8 to Java 21 configurations&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;API migration patterns&lt;/strong&gt;: Automatic updates for deprecated APIs and modern alternatives&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Framework modernization&lt;/strong&gt;: Spring Boot version upgrades and compatibility updates&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Dependency modifications&lt;/strong&gt;: Updated library versions compatible with Java 21&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Build system updates&lt;/strong&gt;: Gradle configuration and plugin changes for Java 21 compatibility&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Code pattern improvements&lt;/strong&gt;: Implementation of modern Java features and best practices&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;We recommend doing a thorough review of the transformation plan to ensure it encompasses all expected updates. The &lt;code&gt;additionalPlanContext&lt;/code&gt; in your &lt;code&gt;config.json&lt;/code&gt; helps guide the transformation to include dependency migrations and Gradle wrapper updates. If adjustments are needed, provide feedback through the CLI interface.&lt;/p&gt; 
&lt;p&gt;Additionally, if you want to customize the transformation to target additional changes, for example upgrading additional legacy dependencies contained in the project, you can provide this as feedback when reviewing the transformation plan. AWS Transform custom incorporates all feedback provided to refine the transformation plan before proceeding.&lt;/p&gt; 
&lt;ol start="4"&gt; 
 &lt;li&gt;&lt;strong&gt;Apply the Transformation:&lt;/strong&gt; After confirming the transformation plan meets your requirements, type &lt;code&gt;proceed&lt;/code&gt; and press Enter. AWS Transform custom executes the transformation according to the approved plan.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;The transformation process automatically:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Creates a new branch and commits the transformed changes there&lt;/li&gt; 
 &lt;li&gt;Updates the Gradle configuration for Java 21&lt;/li&gt; 
 &lt;li&gt;Migrates Java EE to Jakarta EE packages (if applicable)&lt;/li&gt; 
 &lt;li&gt;Updates framework dependencies for Java 21 compatibility&lt;/li&gt; 
 &lt;li&gt;Applies all necessary code changes&lt;/li&gt; 
 &lt;li&gt;Updates test cases and testing frameworks for Java 21 compatibility&lt;/li&gt; 
 &lt;li&gt;Runs comprehensive validation builds&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Results of AWS-Managed Transformations&lt;/h3&gt; 
&lt;p&gt;After applying the Java version upgrade transformation, we observed the following changes:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Configuration Updates:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Java version: 1.8 → 21&lt;/li&gt; 
 &lt;li&gt;Spring Boot version upgrades&lt;/li&gt; 
 &lt;li&gt;Gradle plugin and configuration updates&lt;/li&gt; 
 &lt;li&gt;Dependency version modernization&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;To validate the changes, switch to Java 21 and run &lt;code&gt;./gradlew build&lt;/code&gt; to ensure the transformation was successful, then test the application functionality.&lt;/p&gt; 
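&lt;p&gt;A minimal validation sketch (the install path is a placeholder for your own Java 21 location):&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;export JAVA_HOME=/path/to/your/java-21   # point the build at Java 21
./gradlew --version                      # confirm Gradle now runs on the new JVM
./gradlew clean build                    # full build and test run on Java 21
&lt;/code&gt;&lt;/pre&gt; 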
&lt;p&gt;&lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/atx_custom_java_fig4.png" alt="Figure 4: displaying Updated Gradle Configuration Gradle build.gradle showing updated Java 21 configuration and modernized
dependencies"&gt;&lt;br&gt; &lt;em&gt;Figure 4: Updated Gradle Configuration Gradle build.gradle showing updated Java 21 configuration and modernized dependencies&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;&lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/atx_custom_java_fig5.png" alt="Figure 5: Updated code where legacy pattern of raw types usage is transformed to generics"&gt;&lt;br&gt; &lt;em&gt;Figure 5: Updated code where legacy pattern of raw types usage is transformed to generics&lt;/em&gt;&lt;br&gt; &lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/atx_custom_java_fig6.png" alt="Figure 6: Second example of updated code where legacy pattern of raw types usage is transformed to generics"&gt;&lt;br&gt; &lt;em&gt;Figure 6: Second example of updated code where legacy pattern of raw types usage is transformed to generics&lt;/em&gt;&lt;br&gt; &lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/atx_custom_java_fig7.png" alt="Figure 7: Updated dependency from javax.security to java.security and uses CertificateFactory to get X509Certificate"&gt;&lt;br&gt; &lt;em&gt;Figure 7: Updated&lt;/em&gt; &lt;em&gt;dependency from&amp;nbsp;javax.security&amp;nbsp;to&amp;nbsp;java.security and uses CertificateFactory to get X509Certificate&lt;/em&gt;&lt;br&gt; &lt;img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/atx_custom_java_fig8.png" alt="Figure 8: Updated Test cases from junit 4 to junit5"&gt;&lt;br&gt; &lt;em&gt;Figure 8: Updated&lt;/em&gt; &lt;em&gt;Test cases from junit 4 to junit5&lt;/em&gt;&lt;/p&gt; 
&lt;h2&gt;Beyond AWS-Managed Transformations: Custom-Defined Transformations&lt;/h2&gt; 
&lt;p&gt;While AWS-managed transformations provide excellent coverage for standard Java modernization scenarios, there are cases where they may not address your specific transformation requirements. In such situations, AWS Transform custom enables users to create and test their own organization-specific, custom-defined transformation definitions.&lt;/p&gt; 
&lt;h3&gt;When to Create Custom-Defined Transformations&lt;/h3&gt; 
&lt;p&gt;Custom-defined transformations become necessary when AWS-managed transformations don’t cover your specific needs, such as proprietary frameworks, organization-specific coding standards, or complex multi-step migration scenarios.&lt;/p&gt; 
&lt;h3&gt;Creating Custom-Defined Transformations&lt;/h3&gt; 
&lt;p&gt;AWS Transform custom enables teams to develop custom transformations using transformation rules and configuration files. This allows organizations to define transformation logic specific to their requirements, test on sample code, and share validated transformations across teams.&lt;/p&gt; 
&lt;p&gt;AWS Transform custom’s interactive mode (&lt;code&gt;atx&lt;/code&gt;) is particularly beneficial for creating custom-defined transformations, enabling conversational interaction to iteratively refine requirements and get real-time feedback. Custom-defined transformations provide flexibility to extend AWS Transform custom’s capabilities when AWS-managed transformations don’t meet specific modernization needs.&lt;/p&gt; 
&lt;h3&gt;Continual Learning and Knowledge Items&lt;/h3&gt; 
&lt;p&gt;AWS Transform custom automatically learns from each transformation execution to improve future results. For Java upgrades specifically, the service captures patterns like successful refactoring strategies, common dependency conflicts, and framework compatibility matrices across Java versions. This knowledge feeds back into future transformations, making them more accurate at predicting successful upgrade paths and reducing manual intervention.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-workflows.html#custom-continual-learning"&gt;Knowledge items&lt;/a&gt; are account-specific and remain within your AWS account boundaries. Users can enable or disable continual learning, providing full control over this preference.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this blog, we demonstrated how AWS Transform custom enables efficient Java application modernization using AWS managed transformations. Starting with a legacy Movie Service application running Java 8 and Spring Boot 2.x, we successfully transformed it to Java 21 with modern dependencies and patterns.&lt;/p&gt; 
&lt;p&gt;The step-by-step process showed how to establish a baseline, discover available transformations using &lt;code&gt;atx custom def list&lt;/code&gt;, and apply transformations through AWS Transform custom’s CLI. The result was a fully modernized application with updated Java versions, Spring Boot upgrades, and modern Java features like enhanced switch expressions and local variable type inference – all achieved in minutes rather than weeks of manual refactoring.&lt;/p&gt; 
&lt;h3&gt;Beyond Java Modernization&lt;/h3&gt; 
&lt;p&gt;Beyond Java modernization, AWS Transform custom’s transformation capabilities extend to other programming languages and frameworks, making it a versatile solution for comprehensive application portfolio modernization across diverse technology stacks. The agent supports diverse transformation use cases including:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Version upgrades for Java, Python, and Node.js&lt;/li&gt; 
 &lt;li&gt;Runtime and API migrations (AWS SDK v1→v2, Boto2→Boto3)&lt;/li&gt; 
 &lt;li&gt;Framework transitions and upgrades&lt;/li&gt; 
 &lt;li&gt;Language translations and architectural changes&lt;/li&gt; 
 &lt;li&gt;Organization-specific, custom-defined transformations&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Through its &lt;strong&gt;define once, transform everywhere&lt;/strong&gt; approach, AWS Transform custom enables organizations to capture and amplify transformation knowledge by defining transformations once and executing repeatable tasks across the entire organization. This reduces knowledge silos and ensures consistent quality regardless of team or project scope.&lt;/p&gt; 
&lt;h3&gt;Getting Started&lt;/h3&gt; 
&lt;p&gt;Ready to modernize your Java applications with AWS Transform custom? Here’s how to begin:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Install AWS Transform custom CLI (atx)&lt;/strong&gt; using the installation script and verify your environment&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Configure AWS credentials&lt;/strong&gt; with &lt;code&gt;transform-custom:*&lt;/code&gt; permissions&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Explore available AWS-managed transformations&lt;/strong&gt; using &lt;code&gt;atx custom def list&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Apply transformations&lt;/strong&gt; to your Java applications using direct execution mode (&lt;code&gt;atx custom def exec&lt;/code&gt;) or interactive mode (&lt;code&gt;atx&lt;/code&gt;)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Validate results&lt;/strong&gt; through comprehensive testing with &lt;code&gt;./gradlew build&lt;/code&gt; for Gradle projects&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Scale across your application portfolio&lt;/strong&gt; for consistent modernization&lt;/li&gt; 
&lt;/ol&gt; 
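&lt;p&gt;For step 2, the exact permissions are defined in the AWS Transform custom documentation; as an illustrative sketch only, a permissive development policy granting the &lt;code&gt;transform-custom:*&lt;/code&gt; actions might look like the following (scope the actions and resources down for production use):&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "transform-custom:*",
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt; 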
&lt;h3&gt;Additional Resources&lt;/h3&gt; 
&lt;p&gt;For detailed setup instructions and documentation, visit:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html"&gt;AWS Transform custom Getting Started Guide&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/transform/custom/"&gt;AWS Transform custom Product Page&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom.html"&gt;AWS Transform custom Documentation&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-command-reference.html"&gt;AWS Transform custom Command Reference&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Start your modernization journey today and experience the power of AI-driven code transformation at scale.&lt;/p&gt; 
&lt;h3&gt;About the authors&lt;/h3&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2024/05/08/vasudeve-150x150.png" alt="Profile image for Venugopalan Vasudevan" width="142" height="142"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Venugopalan Vasudevan&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Venugopalan Vasudevan (Venu) is a Senior Specialist Solutions Architect at AWS, where he leads Generative AI initiatives focused on Amazon Q Developer, Kiro, and AWS Transform. He helps customers adopt and scale AI-powered developer and modernization solutions to accelerate innovation and business outcomes.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/04/08/dbalaaji.jpeg" alt="Profile image for Dinesh Balaaji Prabakaran" width="142" height="142"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Dinesh Balaaji Prabakaran&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Dinesh is a Enterprise Support Lead at AWS who specializes in supporting Independent Software Vendors (ISVs) on their cloud journey. With expertise in AWS Generative AI Services, he helps customers leverage Amazon Q Developer, Kiro, and AWS Transform to accelerate application development and modernization through AI-powered assistance.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/06/12/SureshPhoto-275x300.jpg" alt="Profile image for Sureshkumar Natarajan" width="142" height="142"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Sureshkumar Natarajan&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Sureshkumar Natarajan is a Senior Technical Account Manager at AWS, where he supports Enterprise customers in their cloud journey with a focus on Generative AI initiatives. He guides organizations in leveraging Amazon Q Developer, Kiro, and AWS Transform to unlock new capabilities, streamline development workflows, and achieve transformative business results.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/23/anjancd-1-300x300.jpeg" alt="Profile image for Anjan Dave" width="142" height="142"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Anjan Dave&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Anjan Dave is a Principal Solutions Architect at AWS with over 25 years of IT experience. He specializes in generative AI application modernization, infrastructure scalability, and developer productivity initiatives.&lt;br&gt; Anjan leads GenAI and modernization strategies across global projects, influencing technology roadmaps for HCM providers through event-driven and microservices architectures. He advocates for integrating Generative AI into the software development lifecycle to automate routine tasks, enabling engineering teams to focus on high-value architectural work.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Best Practices for Deploying AWS DevOps Agent in Production</title>
		<link>https://aws.amazon.com/blogs/devops/best-practices-for-deploying-aws-devops-agent-in-production/</link>
		
		<dc:creator><![CDATA[Greg Eppel]]></dc:creator>
		<pubDate>Wed, 28 Jan 2026 22:36:18 +0000</pubDate>
				<category><![CDATA[Best Practices]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Management & Governance]]></category>
		<category><![CDATA[Monitoring and observability]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Observability]]></category>
		<guid isPermaLink="false">a5d2879ca910c2c13e203abcae85918ada88d9d4</guid>

					<description>Root cause analysis during incidents is one of the most time-consuming and stressful parts of operating cloud applications. Engineers must quickly correlate telemetry data across multiple services, review deployment history, and understand complex application dependencies—all while under pressure to restore service. AWS DevOps Agent changes this paradigm by bringing autonomous investigation capabilities to your operations […]</description>
										<content:encoded>&lt;p&gt;Root cause analysis during incidents is one of the most time-consuming and stressful parts of operating cloud applications. Engineers must quickly correlate telemetry data across multiple services, review deployment history, and understand complex application dependencies—all while under pressure to restore service. AWS DevOps Agent changes this paradigm by bringing autonomous investigation capabilities to your operations team, reducing mean time to resolution (MTTR) from hours to minutes.&lt;/p&gt; 
&lt;p&gt;However, the effectiveness of AWS DevOps Agent depends heavily on how you configure your Agent Spaces, which control resource access boundaries. An Agent Space that’s too narrow misses critical context during investigations. One that’s too broad introduces performance overhead and complexity. This post provides best practices for setting up Agent Spaces that balance investigation capability with operational efficiency, drawing from our experience onboarding early customers and using the DevOps Agent across our own teams.&lt;/p&gt; 
&lt;p&gt;By the end of this post, you’ll understand how to structure Agent Spaces for optimal investigation accuracy, determine the right scope of resource access, and use Infrastructure as Code (IaC) to streamline deployment. Let’s start by understanding the foundational concept that makes all of this possible: the Agent Space itself.&lt;/p&gt; 
&lt;h2&gt;&lt;span style="text-decoration: underline"&gt;What is an Agent Space and Why Does It Matter?&lt;/span&gt;&lt;/h2&gt; 
&lt;p&gt;An Agent Space is a logical container that defines what AWS DevOps Agent can access and investigate. Think of it as the agent’s operational boundary—it determines which cloud accounts the agent can query, which third-party integrations are available, and who can interact with investigations.&lt;/p&gt; 
&lt;p&gt;Agent Spaces are critical because AWS DevOps Agent needs sufficient context to perform accurate root cause analysis.&lt;/p&gt; 
&lt;p&gt;When an incident occurs, the agent:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Learns your resources and their relationships across accounts&lt;/li&gt; 
 &lt;li&gt;Correlates telemetry data from logs, metrics, and traces&lt;/li&gt; 
 &lt;li&gt;Reviews recent changes including deployments and configuration updates&lt;/li&gt; 
 &lt;li&gt;Generates and tests hypotheses by querying additional data sources&lt;/li&gt; 
&lt;/ol&gt; 
&lt;div id="attachment_24923" style="width: 1350px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-24923" loading="lazy" class="wp-image-24923 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/28/AgentSpaceTopology.png" alt="This view shows the key resources, entities, and relationships DevOps Agent has selected as a foundation for performing it's task efficently." width="1340" height="1044"&gt;
 &lt;p id="caption-attachment-24923" class="wp-caption-text"&gt;Figure 1: Agent Space Topology&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;If the Agent Space doesn’t include access to a critical account or integration, the agent might miss the root cause entirely. Conversely, an overly broad Agent Space introduces performance challenges as the agent considers more resource permutations during investigations.&lt;/p&gt; 
&lt;p&gt;Understanding these trade-offs between scope and performance is essential. The question becomes: how do you determine the right boundaries for your specific organization and operational model?&lt;/p&gt; 
&lt;h2&gt;&lt;span style="text-decoration: underline"&gt;Part 1: Design your Agent Space architecture&lt;/span&gt;&lt;/h2&gt; 
&lt;p&gt;We recommend thinking about Agent Space boundaries the same way you think about on-call responsibilities: grant access to accounts relevant to the application, but separate production from non-production environments.&lt;/p&gt; 
&lt;p&gt;This approach provides several benefits:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Familiar mental model – Operations teams already understand on-call boundaries&lt;/li&gt; 
 &lt;li&gt;Appropriate investigation scope – Mirrors how human engineers would investigate incidents&lt;/li&gt; 
 &lt;li&gt;Two-way door decision – You can expand or narrow Agent Space scope as needs evolve&lt;/li&gt; 
 &lt;li&gt;Performance balance – Provides sufficient context without overwhelming the agent&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;&lt;span style="text-decoration: underline"&gt;&lt;strong&gt;Determine Your Agent Space Boundaries&lt;/strong&gt;&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;Start by mapping your application architecture to Agent Space boundaries and consider the following questions:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;What defines a logical application?&lt;/strong&gt; 
  &lt;ul&gt; 
    &lt;li&gt;Does your team own multiple independent applications? If so, create separate Agent Spaces. However, if the applications are tightly coupled (e.g., microservices that depend on each other) and map to a single resolver group (the group assigned for on-call), consider a single Agent Space per group.&lt;/li&gt; 
   &lt;li&gt;Is it a monolith spanning multiple accounts? Then one Agent Space with cross-account access makes sense.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;How do you organize on-call rotations?&lt;/strong&gt; 
  &lt;ul&gt; 
   &lt;li&gt;Separate teams for production versus non-production suggests separate Agent Spaces.&lt;/li&gt; 
   &lt;li&gt;One team handling all environments might work with one Agent Space per application.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;What are your investigation patterns?&lt;/strong&gt; 
  &lt;ul&gt; 
   &lt;li&gt;Do production incidents require querying dependent services in other accounts? Include those accounts.&lt;/li&gt; 
   &lt;li&gt;Are environments completely isolated? Keep Agent Spaces separate.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Example decision tree:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;Application: E-commerce Platform&lt;/code&gt;&lt;br&gt; &lt;code&gt;├── Production environment&lt;/code&gt;&lt;br&gt; &lt;code&gt;│ ├── Account 111111111111 (Frontend)&lt;/code&gt;&lt;br&gt; &lt;code&gt;│ ├── Account 222222222222 (API Gateway + Lambda)&lt;/code&gt;&lt;br&gt; &lt;code&gt;│ └── Account 333333333333 (RDS + DynamoDB)&lt;/code&gt;&lt;br&gt; &lt;code&gt;├── Staging environment&lt;/code&gt;&lt;br&gt; &lt;code&gt;│ └── Account 444444444444 (All resources)&lt;/code&gt;&lt;br&gt; &lt;code&gt;└── Development environment&lt;/code&gt;&lt;br&gt; &lt;code&gt;└── Account 555555555555 (All resources)&lt;/code&gt;&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;Recommended Agent Spaces:&lt;/code&gt;&lt;br&gt; &lt;code&gt;→ "EcommerceProd" (accounts 111111111111, 222222222222, 333333333333)&lt;/code&gt;&lt;br&gt; &lt;code&gt;→ "EcommerceNonProd" (accounts 444444444444, 555555555555)&lt;/code&gt;&lt;/p&gt; 
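&lt;p&gt;The decision tree above can be sketched as a small helper that groups accounts into recommended Agent Spaces by on-call boundary (production versus non-production). This is an illustrative sketch only; the types, function name, and account IDs are hypothetical and not part of the AWS DevOps Agent API.&lt;/p&gt;

```typescript
// Illustrative sketch: group accounts into recommended Agent Spaces by
// on-call boundary. Production accounts get their own Agent Space; staging
// and development share a non-production Agent Space.
interface AppAccount {
  id: string;
  environment: 'prod' | 'staging' | 'dev';
}

function recommendAgentSpaces(appName: string, accounts: AppAccount[]): Map<string, string[]> {
  const spaces = new Map<string, string[]>();
  for (const account of accounts) {
    const space = account.environment === 'prod' ? `${appName}Prod` : `${appName}NonProd`;
    const members = spaces.get(space) ?? [];
    members.push(account.id);
    spaces.set(space, members);
  }
  return spaces;
}
```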
&lt;div id="attachment_24863" style="width: 1034px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-24863" loading="lazy" class="wp-image-24863 size-large" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/27/DevOpsAgentOncall-1024x435.png" alt="Create one Agent Space per oncall team. The Production Oncall team manages the &amp;quot;EcommerceProd&amp;quot; Agent Space covering production accounts. The Non-Prod Oncall team manages the &amp;quot;EcommerceNonProd&amp;quot; Agent Space covering development and staging accounts. This 1:1 mapping provides operations teams with a familiar mental model where Agent Space boundaries match their existing oncall responsibilities." width="1024" height="435"&gt;
 &lt;p id="caption-attachment-24863" class="wp-caption-text"&gt;Figure 2: Agent Space boundaries mirror on-call team responsibilities&lt;/p&gt;
&lt;/div&gt; 
&lt;h3&gt;&lt;span style="text-decoration: underline"&gt;&lt;strong&gt;Common Agent Space Patterns and Decision Points&lt;/strong&gt;&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;Beyond the basic single-application pattern, organizations encounter more complex scenarios that require careful consideration. Here are patterns we’ve seen customers successfully adopt to address these scenarios:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Pattern 1: Investigations Spanning Multiple Teams. &lt;/strong&gt;Large organizations with multiple teams (example: 3 teams managing 100+ production accounts) encounter situations where an issue originates in Team A’s infrastructure but the root cause lies in Team B’s services. The question becomes: how do you enable collaboration across Agent Spaces?&lt;/p&gt; 
&lt;p&gt;&lt;span style="text-decoration: underline"&gt;Recommended approach:&lt;/span&gt; Create application-specific Agent Spaces that include read-only access to shared resource accounts (e.g. dependencies). Establish clear on-call escalation procedures and add them as &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/about-aws-devops-agent-devops-agent-runbooks.html"&gt;runbooks&lt;/a&gt; when investigations identify cross-team root causes for efficient communication (e.g. via chat in &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-to-ticketing-and-chat-connecting-slack.html"&gt;Slack&lt;/a&gt;). Configure the shared service team’s resources with tags identifying which applications use them (example: &lt;code&gt;app-id: ecommerce-frontend&lt;/code&gt;). Following a consistent tagging strategy provides investigation context for shared resources while maintaining clear resource ownership.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Pattern 2: Shared Services and Network Operations Center (NOC) Teams. &lt;/strong&gt;Some organizations have centralized teams that provide and support shared infrastructure services (databases, networking, monitoring, security) used by multiple applications across the organization. These NOC or central operations teams need visibility into their services without requiring access to every application’s Agent Space.&lt;/p&gt; 
&lt;p&gt;&lt;span style="text-decoration: underline"&gt;Recommended approach:&lt;/span&gt; Create a dedicated Agent Space for the shared service team, scoped to that team’s infrastructure and operational responsibilities:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Include AWS accounts containing shared databases, network infrastructure, centralized logging, and monitoring systems.&lt;/li&gt; 
 &lt;li&gt;Configure IAM roles that provide read-only access to the specific resources the team supports&lt;/li&gt; 
 &lt;li&gt;Include runbooks and operational procedures specific to the shared services&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;This follows the same principle as application-specific Agent Spaces: one Agent Space per on-call team, even when that Agent Space’s scope spans multiple applications.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Pattern 3: Central Operations Teams Managing Many Applications. &lt;/strong&gt;While shared services teams manage specific infrastructure domains, central operations and SRE teams often face an even larger challenge: operational responsibility for hundreds or thousands of applications at enterprise scale. These teams can efficiently manage Agent Spaces at that scale using Infrastructure as Code.&lt;/p&gt; 
&lt;p&gt;&lt;span style="text-decoration: underline"&gt;Recommended approach: &lt;/span&gt;Use the AWS &lt;a href="https://github.com/aws-samples/sample-aws-devops-agent-cdk"&gt;CDK&lt;/a&gt; or &lt;a href="https://github.com/aws-samples/sample-aws-devops-agent-terraform"&gt;Terraform&lt;/a&gt; samples available as starting points. These samples enable teams to:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Define a standardized Agent Space template with your organization’s required IAM roles, integrations, and resource boundaries&lt;/li&gt; 
 &lt;li&gt;Deploy Agent Spaces programmatically as part of application onboarding workflows&lt;/li&gt; 
 &lt;li&gt;Enforce compliance through AWS Config rules or service control policies&lt;/li&gt; 
 &lt;li&gt;Track all Agent Spaces through consolidated billing and tagging (application-id, team, cost-center, environment)&lt;/li&gt; 
&lt;/ul&gt; 
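&lt;p&gt;The tag tracking bullet above can be enforced programmatically during onboarding. The following sketch validates the example tag convention (application-id, team, cost-center, environment); the required keys are an illustrative organizational convention from this post, not an AWS requirement.&lt;/p&gt;

```typescript
// Illustrative sketch: validate that an Agent Space deployment carries the
// standardized tags suggested above. The key names are an example
// convention, not part of any AWS API.
const REQUIRED_TAG_KEYS = ['application-id', 'team', 'cost-center', 'environment'] as const;

function missingRequiredTags(tags: Record<string, string>): string[] {
  // A tag counts as missing when the key is absent or its value is blank.
  return REQUIRED_TAG_KEYS.filter((key) => !(key in tags) || tags[key].trim() === '');
}
```

A CI/CD pipeline could run a check like this before deploying a new Agent Space, rejecting deployments that would be untrackable in consolidated billing.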
&lt;p&gt;Central operations teams manage the templates and governance policies, while application teams operate within those guardrails. This approach scales to thousands of applications with consistent configuration and automated deployment. AWS DevOps Agent supports &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/aws-devops-agent-security-limiting-agent-access-in-an-aws-account.html"&gt;limiting agent access in an AWS account&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/aws-devops-agent-security-setting-up-iam-identity-center-authentication.html"&gt;controlling access&lt;/a&gt; for users to the operator console, helping teams manage Agent Space access at scale.&lt;/p&gt; 
&lt;div id="attachment_24884" style="width: 557px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-24884" loading="lazy" class="wp-image-24884 " src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/28/DevOpsAgent-Pattern3-Updated.png" alt="A small platform team (a few engineers) manages 1,000+ Agent Spaces by maintaining standardized IaC templates (AWS CDK and Terraform). When new applications are registered, a CI/CD pipeline automatically deploys an Agent Space for that application team. This distributed pattern (one Agent Space per app team) scales to many applications without manual intervention, while maintaining investigation accuracy by avoiding a centralized &amp;quot;monitoring account&amp;quot; that would bias toward its primary application." width="547" height="708"&gt;
 &lt;p id="caption-attachment-24884" class="wp-caption-text"&gt;Figure 3: Enterprise scale pattern using Infrastructure as Code&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;Now that you understand how to design Agent Space boundaries aligned with your team structure and scale requirements, let’s walk through the practical implementation steps to bring these architectural patterns to life.&lt;/p&gt; 
&lt;h2&gt;&lt;span style="text-decoration: underline"&gt;Part 2: Implement your Agent Space architecture&lt;br&gt; &lt;/span&gt;&lt;/h2&gt; 
&lt;p&gt;This section walks you through the practical steps of creating your first Agent Space—from verifying prerequisites and configuring IAM roles across accounts to integrating observability tools, setting up access controls, and testing your configuration to ensure investigations have the context they need.&lt;/p&gt; 
&lt;h3&gt;&lt;span style="text-decoration: underline"&gt;Step 1: Agent Space Prerequisites&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;Before setting up your first Agent Space, ensure you have:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;AWS accounts&lt;/strong&gt; – At least one AWS account where your application resources run&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;IAM permissions&lt;/strong&gt; – Sufficient access to create IAM roles and policies across accounts. AWS DevOps Agent requires two distinct sets of IAM permissions: 
  &lt;ul&gt; 
   &lt;li&gt;&lt;strong&gt;Agent Space role permissions&lt;/strong&gt; – The IAM role that AWS DevOps Agent assumes to query your AWS resources, access CloudWatch Logs, and discover topology. This role requires the &lt;code&gt;AIOpsAssistantPolicy&lt;/code&gt; managed policy plus additional permissions for AWS Support and expanded capabilities. See the &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/getting-started-with-aws-devops-agent-cli-onboarding-guide.html"&gt;CLI onboarding guide&lt;/a&gt; for the complete role configuration.&lt;/li&gt; 
   &lt;li&gt;&lt;strong&gt;Operator app role permissions&lt;/strong&gt; – The IAM role that controls what human operators can do in the AWS DevOps Agent web application, such as starting investigations, viewing results, and creating AWS Support cases. This role is separate from the agent’s investigation permissions.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Service Control Policies (SCPs)&lt;/strong&gt; – Verify that your organization’s SCPs allow AWS DevOps Agent API actions. Common issue: Teams complete Agent Space setup but investigations fail because SCPs block &lt;code&gt;aidevops:*&lt;/code&gt; actions or &lt;code&gt;bedrock:InvokeModel&lt;/code&gt; actions. Review your AWS Organization’s SCPs and add exceptions for DevOps Agent if needed. Note that DevOps Agent and Amazon Bedrock inference are not impacted by policies that restrict customer content to specific AWS regions—Bedrock may use US regions other than US East (N. Virginia) for stateless inference.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Observability tools&lt;/strong&gt; – At minimum, Amazon CloudWatch (automatically available via IAM roles) and Amazon CloudTrail. For comprehensive investigations, integrate Application Performance Monitoring tools like Datadog, Dynatrace, New Relic, Grafana, or Splunk. See &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-telemetry-sources-index.html"&gt;Connecting telemetry sources&lt;/a&gt; for supported integrations.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Understanding third-party integration configuration&lt;/strong&gt; – Some third-party tools require a two-step configuration process: 
  &lt;ul&gt; 
   &lt;li&gt;&lt;strong&gt;Account-level registration&lt;/strong&gt; – Tools that use OAuth (like GitHub, Dynatrace) must first be registered at the AWS account level through the DevOps Agent console. This establishes OAuth credentials that are shared across all Agent Spaces in your account.&lt;/li&gt; 
   &lt;li&gt;&lt;strong&gt;Agent Space-level association&lt;/strong&gt; – After registration, each Agent Space individually specifies which resources from that tool to use. For example, after registering GitHub once, Agent Space “EcommerceProd” can associate only production repositories while Agent Space “EcommerceNonProd” associates development repositories. Other tools like Datadog, New Relic, and Splunk can be directly associated with an Agent Space using API keys or tokens without separate account-level registration. CloudWatch requires no additional configuration beyond IAM roles.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Source control&lt;/strong&gt; – GitHub or GitLab repository access for code context and deployment correlation (optional but highly recommended)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;IaC tooling&lt;/strong&gt; – AWS CDK (TypeScript/Python), Terraform, AWS CLI, or AWS Management Console for Agent Space deployment&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;With prerequisites verified, you’re ready to create your Agent Space and establish the IAM trust relationships that enable investigations.&lt;/p&gt; 
&lt;h3&gt;&lt;span style="text-decoration: underline"&gt;Step 2: Create an Agent Space&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;AWS DevOps Agent requires IAM roles in each AWS account within the Agent Space boundary. The agent assumes these roles to query CloudWatch Logs, describe resources, and build application topology.&lt;/p&gt; 
&lt;p&gt;The AWS DevOps Agent is designed to retrieve operational data from multiple AWS Regions across all AWS accounts that you grant access to within the configured Agent Space, providing comprehensive visibility into distributed infrastructure and applications regardless of their geographic deployment. It supports multiple accounts through a configuration process that involves creating IAM roles with appropriate trust policies and permissions in secondary accounts.&lt;/p&gt; 
&lt;p&gt;&lt;span style="text-decoration: underline"&gt;&lt;strong&gt;Option A: Use the AWS Console wizard&lt;br&gt; &lt;/strong&gt;&lt;/span&gt;Navigate to the &lt;a href="https://console.aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent console&lt;/a&gt;, choose Create Agent Space, and follow the guided setup to create IAM roles in each target account.&lt;/p&gt; 
&lt;div id="attachment_24878" style="width: 1034px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-24878" loading="lazy" class="wp-image-24878 size-large" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/27/DevOpsAgentSpaceSetup-1024x587.png" alt="The Create an Agent Space setup wizard in the AWS Management Console showing Agent Space Details." width="1024" height="587"&gt;
 &lt;p id="caption-attachment-24878" class="wp-caption-text"&gt;Figure 4: Creating an Agent Space in the Console&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;The setup wizard helps you configure cross-account trust relationships.&lt;/p&gt; 
&lt;div id="attachment_24881" style="width: 1034px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-24881" loading="lazy" class="wp-image-24881 size-large" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/27/DevOpsAgent-MultiAccount-1024x701.png" alt="Shows the Agent Space Management Console and in particular the capability to configure your Agent Space to access multiple accounts." width="1024" height="701"&gt;
 &lt;p id="caption-attachment-24881" class="wp-caption-text"&gt;Figure 5: Multiple account configuration for your Agent Space&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;&lt;span style="text-decoration: underline"&gt;&lt;strong&gt;Option B: Use Infrastructure as Code (Recommended)&lt;br&gt; &lt;/strong&gt;&lt;/span&gt;We provide sample &lt;a href="https://github.com/aws-samples/sample-aws-devops-agent-cdk"&gt;CDK&lt;/a&gt; and &lt;a href="https://github.com/aws-samples/sample-aws-devops-agent-terraform"&gt;Terraform&lt;/a&gt; templates that automate Agent Space creation and IAM role deployment across multiple accounts.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;AWS CDK example (TypeScript):&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ts"&gt;//If you have many accounts, use a loop:

const accounts = [
  { id: '111111111111', name: 'Prod', role: prodRole, stage: 'prod' },
  { id: '222222222222', name: 'Dev', role: devRole, stage: 'dev' },
  { id: '333333333333', name: 'Test', role: testRole, stage: 'test' },
];

accounts.forEach(account =&amp;gt; {
  const association = new devopsagent.CfnAssociation(this, `${account.name}Association`, {
    agentSpaceId: agentSpace.ref,
    serviceId: 'aws',
    configuration: {
      aws: {
        assumableRoleArn: account.role.roleArn,
        accountId: account.id,
        accountType: 'monitor'
      }
    }
  });

  association.addDependency(agentSpace);
  cdk.Tags.of(association).add('stage', account.stage);
});&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;For detailed instructions on setting up IAM roles and permissions across accounts, see the &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/getting-started-with-aws-devops-agent-cli-onboarding-guide.html"&gt;CLI Onboarding Guide&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;Once your Agent Space exists and has access to AWS accounts, the next critical step is connecting the observability and development tools that provide investigation context beyond AWS native services.&lt;/p&gt; 
&lt;h3&gt;&lt;span style="text-decoration: underline"&gt;Step 3: Configure Integrations&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;AWS DevOps Agent investigates incidents by correlating data from multiple sources. The more context available, the more accurate the root cause analysis.&lt;/p&gt; 
&lt;p&gt;Recommended integrations by priority:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Amazon CloudWatch&lt;/strong&gt; – Provides logs, metrics, and traces from AWS services. The agent queries CloudWatch Logs Insights automatically during investigations. No additional configuration is needed if IAM roles are properly configured.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Observability tools&lt;/strong&gt; – Datadog, Dynatrace, New Relic, and Splunk provide distributed tracing, logs, metrics, and application-level context. Configure via Agent Space integrations in the AWS Console.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Code repositories&lt;/strong&gt; – GitHub or GitLab integration enables the agent to review recent deployments and code changes. Requires OAuth or personal access token.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;CI/CD pipelines&lt;/strong&gt; – GitHub Actions or GitLab workflows help the agent correlate incidents with deployment timing. Configured alongside code repository integration.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Communication Channels&lt;/strong&gt; – Slack and ServiceNow integration enables DevOps Agent to post real-time investigation updates to team channels and automatically update incident tickets with findings, root cause analysis, and recommended mitigation steps throughout the investigation lifecycle.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h3&gt;Advanced Integrations&lt;/h3&gt; 
&lt;p&gt;Beyond built-in integrations, AWS DevOps Agent supports webhook-triggered investigations and custom MCP (Model Context Protocol) servers so you can bring your own observability tools.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Webhook configuration for investigation triggers&lt;br&gt; &lt;/strong&gt;Webhooks allow external systems (Grafana, Prometheus, PagerDuty, custom monitoring tools) to automatically trigger DevOps Agent investigations when incidents occur. Each Agent Space receives a unique webhook URL that accepts JSON payloads describing the incident.&lt;/p&gt; 
&lt;p&gt;&lt;span style="text-decoration: underline"&gt;Common configuration pitfalls:&lt;/span&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Webhook authentication:&lt;/strong&gt; Webhooks use HMAC signatures for security. Store the webhook secret in AWS Secrets Manager and rotate it according to your security policies.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Payload format:&lt;/strong&gt; Ensure your monitoring tool sends incident context including timestamps, affected resources, and symptom descriptions. Richer context enables more accurate investigations.&lt;/li&gt; 
&lt;/ul&gt; 
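&lt;p&gt;To make the HMAC pitfall above concrete, the following sketch shows the general shape of HMAC-SHA256 webhook signing: the sender signs the raw payload with the shared secret, and the receiver recomputes the signature and compares it in constant time. The hex encoding and function names here are assumptions for illustration; consult the DevOps Agent webhook documentation for the exact signing scheme.&lt;/p&gt;

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Illustrative sketch of HMAC webhook signing (hex encoding assumed).
// Sender side: compute an HMAC-SHA256 over the raw request body.
function signPayload(secret: string, payload: string): string {
  return createHmac('sha256', secret).update(payload).digest('hex');
}

// Receiver side: recompute and compare in constant time to avoid
// timing attacks. timingSafeEqual requires equal-length buffers,
// so check lengths first.
function verifySignature(secret: string, payload: string, signature: string): boolean {
  const expected = Buffer.from(signPayload(secret, payload), 'hex');
  const received = Buffer.from(signature, 'hex');
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```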
&lt;p&gt;For detailed webhook setup, see &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-invoking-devops-agent-through-webhook.html"&gt;Invoking DevOps Agent through Webhook&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Bring-your-own MCP servers&lt;br&gt; &lt;/strong&gt;If you use observability tools beyond the built-in integrations (Grafana, Prometheus, custom telemetry systems), you can connect them via MCP servers. MCP servers expose your tool’s data through a standardized protocol that DevOps Agent queries during investigations.&lt;/p&gt; 
&lt;p&gt;&lt;span style="text-decoration: underline"&gt;Key requirements for MCP servers:&lt;/span&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Publicly accessible HTTPS endpoint&lt;/strong&gt;: MCP servers must be reachable from the public internet. VPC-hosted servers are not currently supported.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Read-only tools only&lt;/strong&gt;: For security, only expose MCP tools that perform read operations. Write operations introduce prompt injection risks.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Tool allowlisting&lt;/strong&gt;: Register MCP servers at the account level, then selectively enable specific tools per Agent Space. Don’t grant access to all tools—choose only those relevant to investigations.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;span style="text-decoration: underline"&gt;Common MCP setup errors:&lt;/span&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Authentication misconfiguration&lt;/strong&gt;: MCP servers support OAuth 2.0 or API key authentication. Verify your OAuth client credentials are correct and that token exchange URLs are accessible from AWS infrastructure.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Tool name length&lt;/strong&gt;: MCP tool names have a maximum length of 64 characters. Longer names will fail registration.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Endpoint URL format&lt;/strong&gt;: Use the full HTTPS URL including path. Example: &lt;code&gt;https://mcp.example.com/v1/mcp&lt;/code&gt; not just &lt;code&gt;mcp.example.com&lt;/code&gt;.&lt;/li&gt; 
&lt;/ul&gt; 
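&lt;p&gt;The setup errors above are easy to catch before registration. The following sketch checks the constraints mentioned in this post: tool names at most 64 characters and a full HTTPS endpoint URL including a path. The 64-character limit and HTTPS requirement come from the text; the function itself is a hypothetical pre-flight check, not part of any AWS tooling.&lt;/p&gt;

```typescript
// Illustrative pre-registration checks for the MCP pitfalls listed above.
function validateMcpConfig(endpoint: string, toolNames: string[]): string[] {
  const errors: string[] = [];
  let url: URL | undefined;
  try {
    url = new URL(endpoint);
  } catch {
    errors.push(`endpoint is not a valid URL: ${endpoint}`);
  }
  if (url && url.protocol !== 'https:') {
    errors.push('endpoint must use HTTPS');
  }
  if (url && (url.pathname === '' || url.pathname === '/')) {
    errors.push('endpoint should include the full path, e.g. /v1/mcp');
  }
  for (const name of toolNames) {
    if (name.length > 64) {
      errors.push(`tool name exceeds 64 characters: ${name}`);
    }
  }
  return errors;
}
```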
&lt;p&gt;For comprehensive MCP server setup including authentication configuration, see &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/configuring-capabilities-for-aws-devops-agent-connecting-mcp-servers.html"&gt;Connecting MCP Servers&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;&lt;span style="text-decoration: underline"&gt;Testing your integrations&lt;br&gt; &lt;/span&gt;&lt;/strong&gt;After configuring webhooks or MCP servers, trigger a test investigation to verify connectivity:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;For webhooks: Send a test payload from your monitoring tool and verify the investigation starts in the DevOps Agent web app&lt;/li&gt; 
 &lt;li&gt;For MCP servers: Start an investigation manually and check the agent journal to confirm it successfully called your MCP tools&lt;/li&gt; 
 &lt;li&gt;Review any errors in AWS CloudTrail logs which capture all DevOps Agent API calls including integration attempts&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;With your data sources connected, you now need to ensure the right people have appropriate access to investigations while maintaining security boundaries.&lt;/p&gt; 
&lt;h3&gt;&lt;span style="text-decoration: underline"&gt;Step 4: Configure Access Controls&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;Agent Spaces support fine-grained access controls to ensure only authorized team members can interact with investigations.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Access control considerations:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Who should view investigations? Typically on-call engineers, SREs, and DevOps engineers. Consider including security teams for security-related incidents.&lt;/li&gt; 
 &lt;li&gt;Who should create AWS Support cases? Typically on-call leads and senior engineers. Restrict this permission to prevent excessive case creation.&lt;/li&gt; 
 &lt;li&gt;Who should modify Agent Space configuration? Typically central operations or infrastructure teams. Separate this from day-to-day investigation access.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;IAM-based access control:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;AWS DevOps Agent uses IAM policies to control access to Agent Spaces. Attach policies to IAM users, groups, or roles:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-JSON"&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "devopsagent:GetAgentSpace",
        "devopsagent:StartInvestigation",
        "devopsagent:GetInvestigation",
        "devopsagent:ListInvestigations"
      ],
      "Resource": "arn:aws:devopsagent:us-east-1:123456789012:agentspace/EcommerceProd"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt; 
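&lt;p&gt;When you manage many Agent Spaces, generating the policy above per team keeps each policy scoped to its own space. The following sketch parameterizes the example policy; the action names and ARN format mirror the example shown here and may differ in your partition or Region, so treat this as a template rather than a definitive reference.&lt;/p&gt;

```typescript
// Illustrative sketch: build the investigator IAM policy shown above for a
// given Agent Space. The ARN format follows this post's example.
function investigatorPolicy(region: string, accountId: string, agentSpace: string) {
  return {
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Action: [
          'devopsagent:GetAgentSpace',
          'devopsagent:StartInvestigation',
          'devopsagent:GetInvestigation',
          'devopsagent:ListInvestigations',
        ],
        Resource: `arn:aws:devopsagent:${region}:${accountId}:agentspace/${agentSpace}`,
      },
    ],
  };
}
```

A team's IaC pipeline could call this once per Agent Space and attach the result to the matching on-call group.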
&lt;p&gt;AWS DevOps Agent operates within your AWS environment with privileged access to operational data across multiple accounts. While general security foundations apply, Agent Space configuration introduces specific considerations. For comprehensive security guidance, see the &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/aws-devops-agent-security.html"&gt;AWS DevOps Agent Security&lt;/a&gt; documentation.&lt;/p&gt; 
&lt;p&gt;Access controls are in place—now it’s time to validate that your Agent Space configuration provides the investigation coverage you need.&lt;/p&gt; 
&lt;h3&gt;&lt;span style="text-decoration: underline"&gt;Step 5: Test and Iterate&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;Agent Space configuration is a two-way door decision. Start with a focused scope and expand based on investigation results.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Testing your Agent Space:&amp;nbsp;&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Trigger a test investigation using the AWS DevOps Agent web app.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Start an investigation and provide symptoms such as “High latency on /api/checkout endpoint”.&lt;/li&gt; 
 &lt;li&gt;Observe which resources the agent queries.&lt;/li&gt; 
 &lt;li&gt;Review investigation completeness. Did the agent identify the root cause?&lt;/li&gt; 
 &lt;li&gt;Were any accounts or services missing from the investigation?&lt;/li&gt; 
 &lt;li&gt;Did the agent have sufficient telemetry data?&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Adjust Agent Space boundaries based on results.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Add accounts if investigations lack context.&lt;/li&gt; 
 &lt;li&gt;Add integrations if telemetry gaps exist.&lt;/li&gt; 
 &lt;li&gt;Narrow scope if performance degrades.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;&lt;span style="text-decoration: underline"&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/span&gt;&lt;/h2&gt; 
&lt;p&gt;AWS DevOps Agent transforms incident response from a manual, time-consuming process into an autonomous, data-driven investigation. However, the agent’s effectiveness depends on proper Agent Space configuration. By following the on-call based approach—granting access to accounts relevant to your application while separating production from non-production environments—you provide sufficient context for accurate root cause analysis without introducing unnecessary complexity.&lt;/p&gt; 
&lt;p&gt;Key takeaways:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Think on-call boundaries – Agent Space scope should mirror how your team investigates incidents&lt;/li&gt; 
 &lt;li&gt;Use Infrastructure as Code – CDK and Terraform templates ensure consistent, repeatable deployments&lt;/li&gt; 
 &lt;li&gt;Integrate observability tools – More data sources equals more accurate investigations&lt;/li&gt; 
 &lt;li&gt;Iterate based on results – Expand or narrow Agent Space scope as investigation patterns emerge&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Next steps:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Create your first &lt;a href="https://console.aws.amazon.com/devops-agent/"&gt;Agent Space&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;We’re committed to making AWS DevOps Agent easier to adopt and more accurate in solving customer problems. Your Agent Space setup is the foundation for achieving fast, reliable incident resolution. Have questions or feedback? Leave a comment below.&lt;/p&gt; 
&lt;h2&gt;Authors&lt;/h2&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/27/tqquresh.jpg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Tipu Qureshi&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Tipu Qureshi is a Senior Principal Technologist in AWS Agentic AI, focusing on operational excellence and incident response automation. He works with AWS customers to design resilient, observable cloud applications and autonomous operational systems.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/27/billfine.jpg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Bill Fine&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Bill Fine is a Product Management Leader for Agentic AI at AWS, where he leads product strategy and customer engagement for AWS DevOps Agent.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/27/geppel.jpg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Greg Eppel&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Greg Eppel is a Principal Specialist for DevOps Agent and has spent the last several years focused on Cloud Operations and helping AWS customers on their cloud journey.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>AWS CloudFormation 2025 Year In Review</title>
		<link>https://aws.amazon.com/blogs/devops/aws-cloudformation-2025-year-in-review/</link>
		
		<dc:creator><![CDATA[Idriss Laouali Abdou]]></dc:creator>
		<pubDate>Wed, 28 Jan 2026 01:08:08 +0000</pubDate>
				<category><![CDATA[AWS Cloud Development Kit]]></category>
		<category><![CDATA[AWS CloudFormation]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[AWS CDK]]></category>
		<guid isPermaLink="false">0d5f160576b658fcef5ae90436cacd205ef49379</guid>

					<description>AWS CloudFormation enables you to model and provision your cloud application infrastructure as code-base templates. Whether you prefer writing templates directly in JSON or YAML, or using programming languages like Python, Java, and TypeScript with the AWS Cloud Development Kit (CDK), CloudFormation and CDK provide the flexibility you need. For organizations adopting multi-account strategies, CloudFormation […]</description>
					<content:encoded>&lt;p&gt;&lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt; enables you to model and provision your cloud application infrastructure as code-based templates. Whether you prefer writing templates directly in JSON or YAML, or using programming languages like Python, Java, and TypeScript with the &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS Cloud Development Kit (CDK)&lt;/a&gt;, CloudFormation and CDK provide the flexibility you need. For organizations adopting &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html"&gt;multi-account strategies&lt;/a&gt;, CloudFormation &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html"&gt;StackSets&lt;/a&gt; offers a powerful capability to deploy resources across multiple regions and accounts in parallel.&lt;/p&gt; 
&lt;p&gt;In 2025, we delivered a comprehensive set of major enhancements focused on three core areas: shortening the dev-test cycle through early validation, improving deployment safety with better configuration drift management, and integrating IaC context into AI-powered development tools.&lt;/p&gt; 
&lt;p&gt;These launches address common pain points in infrastructure development workflows, from catching deployment errors before resource provisioning to managing configuration drift systematically. The features span the entire development lifecycle, from template authoring in your IDE to multi-account deployments at scale.&lt;/p&gt; 
&lt;p&gt;This blog provides an overview of the key capabilities we launched in 2025 and how they improve your infrastructure development workflow.&lt;/p&gt; 
&lt;h1&gt;Accelerating Development Cycles&lt;/h1&gt; 
&lt;h2&gt;Early Validation &amp;amp; Enhanced Troubleshooting: Pre-Deployment Error Detection&lt;/h2&gt; 
&lt;p&gt;CloudFormation now validates your templates during &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html"&gt;change set&lt;/a&gt; (preview of infrastructure changes before deployment) creation, catching common deployment errors before resource provisioning begins. The validation checks for invalid property syntax, resource name conflicts with existing resources in your account, and S3 bucket emptiness constraints on delete operations.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24249" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/18/CloudFormation-Pre-Deployment-validation-view.png" alt="Figure 1: Pre-deployment validations view" width="1170" height="765"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 1: Pre-deployment validations view&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;When validation fails, the change set status shows ‘FAILED’ with detailed information about each issue, including the property path where problems occur. This early feedback helps you fix issues faster rather than waiting for deployment failures.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24282" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/18/CloudFormation-Validation-of-Invalid-ENUM-value-for-nested-property.png" alt="Figure 2: CloudFormation Validation of Invalid ENUM value for nested property" width="2440" height="1850"&gt;&lt;br&gt; &lt;strong&gt;Figure 2: Validation of Invalid ENUM value for nested property&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Improved Deployment Troubleshooting&lt;/h2&gt; 
&lt;p&gt;For runtime errors that occur during deployment, every stack operation now receives a unique operation ID. You can filter stack events by operation ID to quickly identify root causes, reducing troubleshooting time from minutes to seconds. The new &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/cloudformation/describe-events.html"&gt;describe-events&lt;/a&gt; API provides grouped access to events. You can query events for a specific operation, filter to FAILED status events, and extract the root cause without parsing through the entire stack event history.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24289" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/18/CloudFormation-Stack-Info-page-showing-new-operation-IDs-4.png" alt="Figure 3: New CloudFormation stack operation page" width="3424" height="1758"&gt;&lt;strong&gt;Figure 3: New CloudFormation stack operation page&lt;/strong&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24262" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/18/CloudFormation-Filter-operation-failure-root-causes.png" alt="Figure 4: Filter operation failure root causes" width="3378" height="1206"&gt;&lt;br&gt; &lt;strong&gt;Figure 4: Filter operation failure root causes&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Learn more&lt;/strong&gt;:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2025/11/cloudformation-dev-test-cycle-validation-troubleshooting/"&gt;What’s New Post&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/accelerate-infrastructure-development-with-cloudformation-pre-deployment-validation-and-simplified-troubleshooting/"&gt;Detailed Blog Post&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;CloudFormation IDE Experience: Language Server Protocol Integration&lt;/h2&gt; 
&lt;p&gt;We launched the &lt;a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-cloudformation-intelligent-authoring-ides/"&gt;AWS CloudFormation Language Server&lt;/a&gt;, bringing end-to-end infrastructure development directly into your IDE. Available through the AWS Toolkit for Visual Studio Code, Kiro, and other compatible IDEs, this capability transforms how you author CloudFormation templates.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24262" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/18/cfn_init_1-1-1.gif" alt="Figure 1: Filter operation failure root causes" width="3378" height="1206"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 1: Initializing a CloudFormation project with environment configuration&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;The Language Server provides context-aware auto-completion that understands CloudFormation semantics. When you define resources, it automatically suggests only the required properties, while optional properties appear on hover. Built-in validation integrates the early validation capabilities to catch issues before deployment, flagging invalid resource properties, missing IAM permissions, and security policy violations detected with CloudFormation Guard.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 2: Hover information displaying optional properties and their documentation&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;The drift-aware deployment view highlights differences between your template and deployed infrastructure, helping you spot configuration changes made outside CloudFormation. The Language Server also provides semantic navigation features: go-to-definition for logical IDs, find-all-references for resource dependencies, and hover documentation that pulls from the CloudFormation resource specification. These features work across intrinsic functions like !Ref, !GetAtt, and !Sub, understanding the CloudFormation template structure. By integrating validation and real-time feedback directly into your authoring experience, the Language Server keeps you in flow state, reducing context switching between your IDE, the AWS Console, and documentation.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 3: Type-aware completions for intrinsic functions like !GetAtt &amp;amp; !Ref&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Learn more:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2025/11/aws-cloudformation-intelligent-authoring-ides/"&gt;What’s New Post&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/announcing-cloudformation-ide-experience-end-to-end-development-in-your-ide/"&gt;Detailed Blog Post&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3 style="text-align: left"&gt;&lt;strong&gt;Stack Refactoring: Adapt your infrastructure to your organization evolution&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;Stack Refactoring enables you to reorganize your CloudFormation and CDK infrastructure without disrupting deployed resources. You can move resources between stacks, rename logical IDs, and decompose monolithic stacks into focused components while maintaining resource stability and operational state.&lt;/p&gt; 
&lt;p&gt;Whether you’re modernizing legacy stacks, aligning infrastructure with evolving architectural patterns, or improving long-term maintainability, Stack Refactoring adapts your CloudFormation and CDK organization to changing requirements. The console and CDK experience, launched this year, extends the earlier CLI capability, making refactoring accessible through your preferred interface.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24262" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/21/blogpicture1-1024x423.png" alt="Provide a description to help you identify your stack refactor." width="3378" height="1206"&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Learn more:&lt;/strong&gt;&lt;br&gt; – &lt;a href="https://aws.amazon.com/blogs/devops/introducing-aws-cloudformation-stack-refactoring-reorganize-your-infrastructure-without-disruption/"&gt;Blog Post – Refactor CloudFormation from Console&lt;/a&gt;&lt;br&gt; – &lt;a href="https://aws.amazon.com/blogs/devops/aws-cloud-development-kit-cdk-launches-refactor/"&gt;Blog Post – Refactor CDK&lt;/a&gt;&lt;/p&gt; 
&lt;h1&gt;Safer Deployments&lt;/h1&gt; 
&lt;h2&gt;Drift-Aware Change Sets&lt;/h2&gt; 
&lt;p&gt;Configuration drift occurs when infrastructure managed by CloudFormation is modified through the AWS Console, SDK, or CLI. Drift-aware change sets address this challenge by providing a three-way comparison between your new template, the last-deployed template, and the actual infrastructure state.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24262" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/19/figure5-memory-reduction-1-2-1024x762.png" alt="Examine the drift-aware change set to see the dangerous memory reduction that would occur" width="3378" height="1206"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24262" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/19/figure6-view-drift-1-2-1024x749.png" alt="Examine the drift-aware change set to see the dangerous memory reduction that would occur" width="3378" height="1206"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 4: Examine the drift-aware change set to see the dangerous memory reduction that would occur&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;This capability helps you avoid unintentionally overwriting drifted resources. If the change set preview shows unintended changes, you can update your template values and recreate the change set before deployment. During execution, CloudFormation aligns resource properties with template values and recreates resources that were deleted outside of CloudFormation.&lt;/p&gt; 
&lt;p&gt;Drift-aware change sets enable you to systematically revert drift and keep infrastructure in sync with templates, strengthening reproducibility for testing and disaster recovery while maintaining your security posture.&lt;/p&gt; 
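Conceptually, the three-way comparison can be sketched as follows. This is an illustration only: the property maps and classification labels are hypothetical, not the actual CloudFormation implementation.

```python
# Conceptual sketch of a three-way drift comparison: for each property,
# compare the last-deployed template, the live (possibly drifted) resource
# state, and the new template. Hypothetical data and labels; not the
# actual CloudFormation algorithm.

def classify_changes(last_deployed, live_state, new_template):
    """Return {property: classification} for every known property."""
    changes = {}
    for prop in set(last_deployed) | set(live_state) | set(new_template):
        deployed = last_deployed.get(prop)
        live = live_state.get(prop)
        new = new_template.get(prop)
        if live != deployed and new == deployed:
            # Drift exists and deploying the template would revert it.
            changes[prop] = "reverts-drift"
        elif live != deployed and new != deployed:
            changes[prop] = "overwrites-drift"
        elif new != deployed:
            changes[prop] = "template-change"
        else:
            changes[prop] = "no-change"
    return changes

result = classify_changes(
    last_deployed={"MemorySize": 1024, "Timeout": 30},
    live_state={"MemorySize": 512, "Timeout": 30},   # drifted via console
    new_template={"MemorySize": 1024, "Timeout": 60},
)
print(result)  # MemorySize classified as reverts-drift, Timeout as template-change
```

A plain two-way diff against the template alone would report "no change" for MemorySize and silently revert the drift; surfacing the third (live) state is what makes the preview trustworthy.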
&lt;p&gt;&lt;strong&gt;Learn more: &lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2025/11/configuration-drift-enhanced-cloudformation-sets/"&gt;What’s New Post&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/safely-handle-configuration-drift-with-cloudformation-drift-aware-change-sets/"&gt;Detailed Blog Post&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h1&gt;Enforcing Proactive Controls&lt;/h1&gt; 
&lt;h2&gt;CloudFormation Hooks: Control Catalog with Hooks&lt;/h2&gt; 
&lt;p&gt;AWS CloudFormation Hooks now supports managed proactive controls, enabling customers to validate resource configurations against AWS best practices without writing custom Hooks logic. Customers can select controls from the AWS Control Tower Controls Catalog and apply them during CloudFormation operations. When using CloudFormation, customers can configure these controls to run in warn mode, allowing teams to test controls without blocking deployments and giving them the flexibility to evaluate control behavior before enforcing policies in production. This significantly reduces setup time, eliminates manual errors, and ensures comprehensive governance coverage across your infrastructure.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24903" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/28/Control-Catalog-with-Hooks.png" alt="" width="2152" height="922"&gt;&lt;/p&gt; 
&lt;p&gt;AWS also introduced a new Hooks Invocation Summary page in the CloudFormation console. This centralized view provides a complete historical record of Hooks activity, showing which controls were invoked, their execution details, and outcomes such as pass, warn, or fail. This simplifies compliance reporting and helps you resolve issues faster.&lt;/p&gt; 
&lt;p&gt;With this launch, customers can now leverage AWS-managed controls as part of their provisioning workflows, eliminating the overhead of writing and maintaining custom logic. These controls are curated by AWS and aligned with industry best practices, helping teams enforce consistent policies across all environments. The new summary page delivers essential visibility into Hook invocation history, enabling faster issue resolution and streamlined compliance reporting.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Learn more:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/proactive-controls-hooks.html"&gt;AWS CloudFormation Proactive Control Hooks&amp;nbsp;&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h1&gt;Scaling Multi-Account Infrastructure&lt;/h1&gt; 
&lt;h2&gt;StackSets Deployment Ordering&lt;/h2&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24262" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/17/Image-1.png" alt="Figure : Example of a multi-region AWS CloudFormation StackSet architecture with an administrative account and target accounts" width="3378" height="1206"&gt;&lt;/p&gt; 
&lt;p&gt;CloudFormation StackSets now supports deployment ordering for auto-deployment mode, enabling you to define the sequence in which stack instances automatically deploy across accounts and regions. This capability coordinates complex multi-stack deployments where foundational infrastructure must be provisioned before dependent application components.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24262" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/17/image-25.png" alt="Figure : CloudFormation StackSets Console – Auto-deployment options view" width="3378" height="1206"&gt;&lt;/p&gt; 
&lt;p&gt;When creating or updating a StackSet, you can specify up to 10 dependencies per stack instance using the DependsOn parameter in the AutoDeployment configuration. StackSets automatically orchestrates deployments based on your defined relationships. For example, you can ensure networking and security stack instances complete deployment before application stack instances begin, preventing deployment failures due to missing dependencies.&lt;/p&gt; 
&lt;p&gt;StackSets includes built-in cycle detection to prevent circular dependencies and provides error messages to help resolve configuration issues. This feature is available at no additional cost in all AWS Regions where CloudFormation StackSets is available.&lt;/p&gt; 
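The ordering semantics can be illustrated with a small Python sketch built on the standard library's graphlib. This models dependency-wave scheduling with cycle detection in the spirit of the DependsOn behavior described above; it is not the StackSets implementation.

```python
# Illustration of dependency-ordered deployment with cycle detection.
# Each stack maps to the list of stacks it depends on (its predecessors).
from graphlib import TopologicalSorter, CycleError

def deployment_waves(depends_on):
    """Group stacks into waves; stacks in one wave can deploy in parallel."""
    ts = TopologicalSorter(depends_on)
    ts.prepare()                      # raises CycleError on circular deps
    waves = []
    while ts.is_active():
        ready = list(ts.get_ready())  # everything whose deps are satisfied
        waves.append(sorted(ready))
        ts.done(*ready)
    return waves

deps = {
    "networking": [],
    "security": [],
    "app": ["networking", "security"],
}
print(deployment_waves(deps))  # [['networking', 'security'], ['app']]

try:
    deployment_waves({"a": ["b"], "b": ["a"]})
except CycleError:
    print("circular dependency detected")
```

Here the networking and security stacks deploy first (in parallel), and the app stack only begins once both complete, mirroring the foundational-before-dependent ordering the feature enables.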
&lt;p&gt;&lt;strong&gt;Learn more:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2025/11/configuration-drift-enhanced-cloudformation-sets/"&gt;What’s New Post&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/safely-handle-configuration-drift-with-cloudformation-drift-aware-change-sets/"&gt;Detailed Blog Post&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h1&gt;AI-Powered Infrastructure Development&lt;/h1&gt; 
&lt;h2&gt;AWS IaC Server&lt;/h2&gt; 
&lt;p&gt;We introduced the &lt;a href="https://awslabs.github.io/mcp/servers/aws-iac-mcp-server"&gt;AWS Infrastructure-as-Code (IaC) MCP Server&lt;/a&gt;, bridging AI assistants with your AWS infrastructure development workflow. Built on the Model Context Protocol (MCP), this server enables AI assistants like &lt;a href="https://kiro.dev/cli/"&gt;Kiro CLI&lt;/a&gt; to help you search CloudFormation and CDK documentation, validate templates, troubleshoot deployments, and follow best practices, all while maintaining the security of local execution.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24608" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Figure-1-Kiro-CLI-with-AWS-IaC-MCP-server-.png" alt="Figure 1: Kiro-CLI with AWS IaC MCP server " width="638" height="752"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 1: Kiro-CLI with AWS IaC MCP server&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;The IaC MCP Server provides nine specialized tools organized into two categories. Remote documentation search tools connect to AWS knowledge bases to retrieve up-to-date information about CloudFormation resources, CDK APIs, and implementation guidance. Local validation and troubleshooting tools run entirely on your machine, performing syntax validation with cfn-lint, security checks with CloudFormation Guard, and deployment failure analysis with integrated CloudTrail events.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24621" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Figure-4-Validate-my-CloudFormation-template-with-AWS-IaC-MCP-Server-1.png" alt="Figure 4: Validate my CloudFormation template with AWS IaC MCP Server" width="639" height="972"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 4: Validate my CloudFormation template with AWS IaC MCP Server&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Key Use Cases&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;1. Intelligent Documentation Assistant&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Instead of manually searching through documentation, ask your AI assistant natural language questions:&lt;/p&gt; 
&lt;p&gt;“How do I create an S3 bucket with encryption enabled in CDK?”&lt;/p&gt; 
&lt;p&gt;The server searches CDK best practices and samples, returning relevant code examples and explanations.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;2. Proactive Template Validation&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Before deploying infrastructure changes:&lt;/p&gt; 
&lt;p&gt;User: “Validate my CloudFormation template and check for security issues”&lt;/p&gt; 
&lt;p&gt;AI Agent: [Uses validate_cloudformation_template and check_cloudformation_template_compliance]&lt;/p&gt; 
&lt;p&gt;“Found 2 issues: Missing encryption on EBS volumes, and S3 bucket lacks public access block configuration”&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;3. Rapid Deployment Troubleshooting&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;When a stack deployment fails:&lt;/p&gt; 
&lt;p&gt;User: “My stack ‘stack_03’ in us-east-1 failed to deploy. What happened?”&lt;/p&gt; 
&lt;p&gt;AI Agent: [Uses troubleshoot_stack_deployment with CloudTrail integration]&lt;/p&gt; 
&lt;p&gt;“The deployment failed due to insufficient IAM permissions. CloudTrail shows AccessDenied for ec2:CreateVpc. You need to add VPC permissions to your deployment role.”&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;4. Learning and Exploration&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;New to AWS CDK? The server helps you discover constructs and patterns:&lt;/p&gt; 
&lt;p&gt;User: “Show me how to build a serverless API”&lt;/p&gt; 
&lt;p&gt;AI Agent: [Searches CDK constructs and samples]&lt;/p&gt; 
&lt;p&gt;“Here are three approaches using API Gateway + Lambda…”&lt;/p&gt; 
&lt;p&gt;Learn more: &lt;a href="https://aws.amazon.com/blogs/devops/safely-handle-configuration-drift-with-cloudformation-drift-aware-change-sets/"&gt;Detailed Blog Post&lt;/a&gt;&lt;/p&gt; 
&lt;h1&gt;Learn more&lt;/h1&gt; 
&lt;p&gt;Here are some resources to help you get started learning and using CloudFormation to manage your cloud infrastructure:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://youtu.be/_4hvWns9ICY?si=WELIHRgUpdgvuM9P"&gt;Watch our re:Invent 2025 session on CloudFormation and CDK&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://catalog.workshops.aws/cfn101/en-US"&gt;AWS CloudFormation Workshop&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html"&gt;AWS CloudFormation Troubleshooting Guide&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h1&gt;Conclusion&lt;/h1&gt; 
&lt;p&gt;As we begin 2026, our focus remains on making infrastructure deployment faster, safer, and more manageable. The launches in 2025 reflect our commitment to solving real customer challenges and improving the CloudFormation developer experience. From intelligent IDE integrations to AI-powered assistance, these capabilities help you build infrastructure with greater confidence and efficiency.&lt;/p&gt; 
&lt;p&gt;We encourage you to try these features and share your feedback. For detailed information about any of these launches, visit our &lt;a href="https://docs.aws.amazon.com/cloudformation/"&gt;documentation&lt;/a&gt; or check out the &lt;a href="https://aws.amazon.com/blogs/devops/"&gt;AWS DevOps Blog&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Blog Author Bio:&lt;/h2&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/10/08/idriss-profile-cut-scaled.jpg" alt="" width="127" height="127"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Idriss Laouali Abdou&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Idriss is a Sr. Product Manager Technical on the AWS Infrastructure-as-Code team based in Seattle. He focuses on improving developer productivity through AWS CloudFormation and StackSets Infrastructure provisioning experiences. Outside of work, you can find him creating educational content for thousands of students, cooking, or dancing.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>From AI agent prototype to product: Lessons from building AWS DevOps Agent</title>
		<link>https://aws.amazon.com/blogs/devops/from-ai-agent-prototype-to-product-lessons-from-building-aws-devops-agent/</link>
		
		<dc:creator><![CDATA[Efe Karakus]]></dc:creator>
		<pubDate>Thu, 15 Jan 2026 17:03:50 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Technical How-to]]></category>
		<guid isPermaLink="false">1b9a84dd97340ed3ff9493ec0810e87f15a528ec</guid>

					<description>At re:Invent 2025, Matt Garman announced AWS DevOps Agent, a frontier agent that resolves and proactively prevents incidents, continuously improving reliability and performance. As members of the DevOps Agent team, we’ve focused heavily on making sure that the “incident response” capability of the DevOps Agent generates useful findings and observations. In particular, we’ve been […]</description>
										<content:encoded>&lt;p&gt;At re:Invent 2025, Matt Garman announced &lt;a href="https://aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent&lt;/a&gt;, a frontier agent that resolves and proactively prevents incidents, continuously improving reliability and performance. As members of the DevOps Agent team, we’ve focused heavily on making sure that the &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/devops-agent-incident-response.html"&gt;“incident response”&lt;/a&gt; capability of the DevOps Agent generates useful findings and observations. In particular, we’ve been working on making root cause analysis for native AWS applications accurate and performant. Under the hood, DevOps Agent has a multi-agent architecture where a lead agent acts as an incident commander: it understands the symptom, creates an investigation plan, and delegates individual tasks to specialized sub-agents when those tasks benefit from context compression. A sub-agent executes its task with a pristine context window and reports compressed results back to the lead agent. For example, when examining high-volume log records, a sub-agent filters through the noise to surface only relevant messages to the lead agent.&lt;/p&gt; 
&lt;p&gt;In this blog post, we want to focus on the mechanisms one needs to develop to build an agentic product that works. Building a prototype with large language models (LLMs) has a low barrier to entry – you can showcase something that works fairly quickly. However, graduating that prototype into a product that performs reliably across diverse customer environments is a different challenge entirely, and one that is frequently underestimated. This post shares what we learned building AWS DevOps Agent so you can apply these lessons to your own agent development.&lt;/p&gt; 
&lt;p&gt;In our experience, there are five mechanisms necessary to continuously improve agent quality and bridge the gap from prototype to production. First, you need &lt;b&gt;evaluations (evals)&lt;/b&gt; to identify where your agent fails and where it can improve, while establishing a quality baseline for the types of scenarios your agent handles well. Second, you need a &lt;b&gt;visualization tool&lt;/b&gt; to debug agent trajectories and understand where exactly the agent went wrong. Third, you need a &lt;b&gt;fast feedback loop&lt;/b&gt; with the ability to rerun those failing scenarios locally to iterate. Fourth, you need to make &lt;b&gt;intentional changes&lt;/b&gt;: establishing success criteria before modifying your system to avoid confirmation bias. Finally, you need to &lt;b&gt;read production samples&lt;/b&gt; regularly to understand actual customer experience and discover new scenarios your evals don’t yet cover.&lt;/p&gt; 
&lt;h2&gt;Evaluations&lt;/h2&gt; 
&lt;p&gt;Evals are the machine learning equivalent of a test suite in traditional software engineering. Just like building any other software product, a collection of good test cases builds confidence in quality. Iterating on agent quality is similar to test-driven development (TDD): you have an eval scenario that the agent fails (the test is red), you make changes until the agent passes (the test is green). A passing eval means the agent arrived at an accurate, useful output through correct reasoning.&lt;/p&gt; 
&lt;p&gt;For AWS DevOps Agent, the size of an individual eval scenario is similar to an end-to-end test in the traditional software engineering &lt;a href="https://martinfowler.com/bliki/TestPyramid.html"&gt;testing pyramid&lt;/a&gt;. Looking through the lens of &lt;a href="https://martinfowler.com/bliki/GivenWhenThen.html"&gt;“Given-When-Then”&lt;/a&gt; style tests:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;b&gt;Given&lt;/b&gt; – The test setup portion tends to be the most time-consuming to author. For the AWS DevOps Agent, an example eval scenario includes an application running on &lt;a href="https://aws.amazon.com/eks/"&gt;Amazon Elastic Kubernetes Service&lt;/a&gt; composed of several microservices fronted by &lt;a href="https://aws.amazon.com/elasticloadbalancing/application-load-balancer/"&gt;Application Load Balancers&lt;/a&gt;, reading and writing from data stores such as &lt;a href="https://aws.amazon.com/rds/"&gt;Amazon Relational Database Service&lt;/a&gt; databases and &lt;a href="https://aws.amazon.com/s3/"&gt;Amazon Simple Storage Service&lt;/a&gt; buckets, with &lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt; functions doing data transformations. We inject a fault by deploying a code change that accidentally removes a key &lt;a href="https://aws.amazon.com/iam/"&gt;AWS Identity and Access Management&lt;/a&gt; (IAM) permission to write to S3 deep in the dependency chain.&lt;/li&gt; 
 &lt;li&gt;&lt;b&gt;When&lt;/b&gt; – Once the fault is injected, an alarm fires, and this triggers the AWS DevOps Agent to start its investigation. The eval framework polls the records that the Agent generates, just like how the &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/userguide-what-is-a-devops-agent-web-app.html"&gt;DevOps Agent web application&lt;/a&gt; renders them. This section isn’t fundamentally different from defining the action in an integration or end-to-end test.&lt;/li&gt; 
 &lt;li&gt;&lt;b&gt;Then&lt;/b&gt; – This asserts and reports on multiple metrics. Fundamentally, there’s a single “PASS” (1) or “FAIL” (0) metric for quality. For the DevOps Agent’s incident response capability, a “PASS” means the right root cause surfaced to the customer – in our example, this means identifying the faulty deployment as the root cause and tracing the dependency chain to surface the impacted resources and observations that reveal the missing S3 write permission; otherwise “FAIL”. We define this as a &lt;i&gt;rubric&lt;/i&gt;: not just “did the agent find the root cause?” but “did the agent arrive at the root cause through the correct reasoning with the right supporting evidence?” The ground truth (the “expected” or “wanted” in software testing parlance) is compared to the system response (the “actual”) via an LLM Judge – an LLM that receives both the ground truth and the agent’s actual output, then emits its reasoning and a verdict on whether they match. We use an LLM for comparison because the agent’s output is non-deterministic: the agent follows an overall output format but generates the actual text freely, so each run may use different words or sentence structures while conveying the same semantic meaning. We don’t want to strictly search for keywords in the final root cause analysis report but rather evaluate whether the essence of the rubric is met.&lt;/li&gt; 
&lt;/ul&gt; 
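A minimal LLM-judge harness might look like the following sketch. The prompt shape, the JSON verdict format, and the stubbed model call are all assumptions for illustration; the actual DevOps Agent judge is not public.

```python
# Sketch of an LLM-as-judge comparison. The model call is injected as a
# plain function so the harness can be exercised with a stub; in practice
# it would invoke a real LLM endpoint.
import json

def build_judge_prompt(rubric, agent_output):
    return (
        "You are grading a root cause analysis report.\n"
        "Rubric (ground truth): " + rubric + "\n"
        "Agent report: " + agent_output + "\n"
        'Reply with JSON: {"reasoning": "...", "verdict": "PASS" or "FAIL"}'
    )

def judge(rubric, agent_output, call_model):
    """call_model: function(prompt) returning the model's text reply."""
    reply = call_model(build_judge_prompt(rubric, agent_output))
    result = json.loads(reply)
    return result["verdict"] == "PASS", result["reasoning"]

# Stub standing in for a real LLM call:
stub = lambda prompt: '{"reasoning": "report names the faulty deploy", "verdict": "PASS"}'
passed, why = judge(
    rubric="Identify the deployment that removed the s3:PutObject permission",
    agent_output="Deployment d-123 removed s3:PutObject, breaking writes.",
    call_model=stub,
)
print(passed)  # True
```

Injecting the model call also makes the harness itself testable without network access, which matters once the judge becomes part of a regression suite.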
&lt;p&gt;The evaluation report is structured with scenarios as rows and metrics as columns. Key metrics that we keep track of are capability (pass@k – whether the agent passed at least once in k attempts), reliability (pass^k – how many times the agent passed across k attempts, e.g., 0.33 means passed 1 out of 3 times for k=3), latency, and token usage.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-24734 size-large" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/13/scenarios-1-1024x119.png" alt="Evaluation results table with two scenario rows. Headers: Scenario, Pass@3, Pass^3, Avg. E2E Latency, Avg. Time-To-First-Observation, and Avg. Total tokens. Lambda throttle scenario shows Pass@3 of 1 and Pass^3 of 1 (highlighted green). SQS permission removal scenario shows Pass@3 of 1 and Pass^3 of 0.33 (highlighted red), indicating it passed only 1 of 3 attempts." width="1024" height="119"&gt;&lt;/p&gt; 
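The pass@k and pass^k metrics described above can be computed directly from per-attempt results; a minimal sketch:

```python
# pass@k = 1 if at least one of the k attempts passed, else 0.
# pass^k = fraction of the k attempts that passed (e.g. 1 of 3 gives 0.33).

def pass_at_k(attempts):
    """attempts: list of booleans, one per attempt."""
    return 1 if any(attempts) else 0

def pass_hat_k(attempts):
    return round(sum(attempts) / len(attempts), 2)

runs = [True, False, False]               # e.g. a flaky scenario, k=3
print(pass_at_k(runs), pass_hat_k(runs))  # 1 0.33
```

The gap between the two numbers is the useful signal: pass@k of 1 with a low pass^k flags a scenario the agent is capable of solving but not yet reliable on.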
&lt;h3&gt;Why are evals important?&lt;/h3&gt; 
&lt;p&gt;There are several benefits to having evals:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Red scenarios provide obvious investigation points for the agent development team to increase product quality.&lt;/li&gt; 
 &lt;li&gt;Over time, green scenarios act as regression tests, notifying us when changes to the system degrade the existing customer experience.&lt;/li&gt; 
 &lt;li&gt;Once pass rates are green, we can improve customer experience along additional metrics. For example, reducing end-to-end latency and/or optimizing cost (proxied by token usage) while maintaining the quality bar.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;What makes evals challenging?&lt;/h3&gt; 
&lt;blockquote&gt;
 &lt;p&gt;Fast feedback loops help developers know whether code works (is it correct, performant, secure) and whether ideas are good (do they improve key business metrics). This may seem obvious, but far too often, teams and organizations tolerate slow feedback loops […] — Nicole Forsgren and Abi Noda&lt;i&gt;, &lt;a href="https://www.amazon.com/Frictionless-Remove-Barriers-Outpace-Competition/dp/1662966377#"&gt;Frictionless: 7 Steps to Remove Barriers, Unlock Value, and Outpace Your Competition in the AI Era&lt;/a&gt;&lt;/i&gt;&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;There are several challenges with evals. In decreasing order of difficulty:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;b&gt;Realistic and diverse scenarios are hard to author&lt;/b&gt;. Coming up with realistic applications and fault scenarios is difficult. Authoring high-fidelity microservice applications and faults is significant work that requires prior industry experience. What we’ve found effective: we author a few “environments” (based on real application architectures) but create &lt;i&gt;many&lt;/i&gt; failure scenarios on top of them. The environment is the expensive portion of the evaluation setup, so we maximize reuse across multiple scenarios.&lt;/li&gt; 
 &lt;li&gt;&lt;b&gt;Slow feedback loops.&lt;/b&gt; If the “Given” takes 20 minutes to deploy for an eval scenario and then the “When” takes another 10-20 minutes for complex investigations to complete, agent developers won’t thoroughly test their changes. Instead, they’ll be satisfied with a single passing eval, then release to production, potentially introducing regressions until the comprehensive eval report is generated. Additionally, slow feedback loops encourage batching multiple changes together rather than small incremental experiments, making it harder to understand which change actually moved the needle. We’ve found three mechanisms effective for speeding up feedback loops: 
  &lt;ol&gt; 
   &lt;li&gt;&lt;b&gt;Long-running environments&lt;/b&gt; for eval scenarios. The application and its healthy state are created once and kept running. Fault injection happens periodically (e.g., over weekends), and developers point their agent credentials at the faulty environment, completely skipping the “Given” portion of the test.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Isolated testing&lt;/b&gt; of only the agent surface area that matters. In our multi-agent system, developers can trigger a specific sub-agent directly with a prompt from a past eval run rather than running the entire end-to-end flow. Additionally, we built a “fork” feature: developers can initialize any agent with the conversation history from a failing run up to a specific checkpoint message, then iterate only on the remaining trajectory. Both of these approaches significantly lower the wait time of the “When” portion.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Local development&lt;/b&gt; of the agentic system. If developers must merge changes and release to a cloud environment before testing, the loop is too slow. Running locally enables rapid iteration.&lt;/li&gt; 
  &lt;/ol&gt; &lt;/li&gt; 
&lt;/ol&gt; 
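The “fork” feature from the isolated-testing item can be sketched as truncating a stored trajectory at a checkpoint and re-initializing the agent there. The message shape used here (dicts with an `id` key) is a generic assumption for illustration, not the agent's actual conversation format.

```python
def fork_conversation(history: list[dict], checkpoint_id: str) -> list[dict]:
    """Return the messages up to and including the checkpoint message,
    so an agent can be re-initialized at that point and iterate only
    on the remaining trajectory."""
    for i, message in enumerate(history):
        if message.get("id") == checkpoint_id:
            return history[: i + 1]
    raise KeyError(f"checkpoint {checkpoint_id!r} not found in history")
```

This skips both the “Given” (the environment already exists) and most of the “When” (everything before the checkpoint is replayed, not re-generated).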
&lt;h2&gt;Visualize trajectories&lt;/h2&gt; 
&lt;p&gt;When an agent fails an eval or a production run, where do you start investigating? The most productive method is &lt;a href="https://hamel.dev/blog/posts/field-guide/#the-error-analysis-process"&gt;error analysis&lt;/a&gt;. Visualize the agent’s complete trajectory, every user-assistant message exchange including sub-agent trajectories, and annotate each step as “PASS” or “FAIL” with notes on what went wrong. This process is tedious but effective.&lt;/p&gt; 
&lt;p&gt;For AWS DevOps Agent, agent trajectories map to OpenTelemetry traces and you can use tools like &lt;a href="https://www.jaegertracing.io/docs/2.13/getting-started/"&gt;Jaeger&lt;/a&gt; to visualize them. Software development kits like &lt;a href="https://strandsagents.com/latest/documentation/docs/user-guide/observability-evaluation/traces/#visualization-and-analysis"&gt;Strands&lt;/a&gt; provide tracing integration with minimal setup.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-24735 size-large" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/13/jaeger-1024x666.png" alt="Jaeger UI showing a distributed trace for strands-agents with trace ID 941e3b7. The trace spans 25 minutes 17 seconds with 454 total spans across 7 depth levels. The left panel shows a hierarchical tree of service operations including execute_event_loop_cycle, chat, and execute_tool calls for current_time, write_scratchpad, and use_aws. The right panel displays a timeline visualization with horizontal bars representing span durations. The bottom detail panel shows metadata for a selected execute_tool use_aws span, including tags, process information, and logs with gen_ai event data." width="1024" height="666"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;i&gt;Figure 1 – A sample trace from AWS DevOps Agent.&lt;/i&gt;&lt;/p&gt; 
&lt;p&gt;Each span contains user-assistant message pairs. We annotate each span’s quality in a table such as the following:&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-24736 size-large" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/13/span-1024x185.png" alt="Error analysis table showing Step 3 with Span ID f182abb7c94a4713. The Description column shows Title: GetQueryResults(Surveying logs) with a JSON content snippet. Duration is T seconds. The Verdict column shows FAIL highlighted in red. The Reasoning column recommends removing @ptr fields, noting the retrieved logs are X tokens and removing @ptr fields from CloudWatch log records can reduce token usage by half." width="1024" height="185"&gt;&lt;/p&gt; 
&lt;p&gt;This low-level analysis consistently surfaces multiple improvements, not just one. For a single failing eval, one will typically identify many concrete changes spanning accuracy, performance, and cost.&lt;/p&gt; 
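One concrete fix from the annotation above, removing @ptr fields from CloudWatch Logs Insights results before they enter the model's context, can be sketched as follows. The nested list-of-{field, value} shape mirrors the GetQueryResults response format; treat this as an illustrative sketch rather than production code.

```python
def strip_ptr_fields(results: list[list[dict]]) -> list[list[dict]]:
    """Drop the opaque @ptr field from each CloudWatch Logs Insights record.
    @ptr is a long pointer token used to fetch the full log event; it adds
    tokens to the agent's context without aiding diagnosis."""
    return [
        [field for field in record if field.get("field") != "@ptr"]
        for record in results
    ]
```

Applied to raw query results, this kind of pruning is exactly the accuracy-neutral, cost-reducing change that span-level error analysis tends to surface.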
&lt;h2&gt;Intentional changes&lt;/h2&gt; 
&lt;blockquote&gt;
 &lt;p&gt;I had learned from my dad the importance of intentionality — knowing what it is you’re trying to do, and making sure everything you do is in service of that goal. — Will Guidara, &lt;i&gt;&lt;a href="https://www.amazon.com/Unreasonable-Hospitality-Remarkable-Giving-People/dp/0593418573"&gt;Unreasonable Hospitality: The Remarkable Power of Giving People More Than They Expect&lt;/a&gt;&lt;/i&gt;&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;You’ve identified failing scenarios and diagnosed the issues through trajectory analysis. Now it’s time to modify the system.&lt;/p&gt; 
&lt;p&gt;The biggest fallacy we’ve observed at this stage: &lt;b&gt;confirmation bias&lt;/b&gt; leading to &lt;b&gt;overfitting&lt;/b&gt;. Given the eval challenges mentioned earlier (slow feedback loops and the impracticality of comprehensive test suites), developers typically test only the few specific failing scenarios locally until they pass. One modifies the context (system prompt, tool specifications, tool implementations, etc.) until one or two scenarios pass, without considering broader impact. When changes don’t follow context engineering best practices, they likely have negative effects that we can’t capture through limited evals.&lt;/p&gt; 
&lt;p&gt;You need both diligence and judgment: establish success criteria through available evals and reusable past production scenarios, but also educate yourself on context engineering best practices to guide your changes. We’ve found Anthropic’s &lt;a href="https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-4-best-practices"&gt;prompting best practices&lt;/a&gt; and &lt;a href="https://www.anthropic.com/engineering"&gt;engineering blog&lt;/a&gt;, Drew Breunig’s &lt;a href="https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.html"&gt;how long contexts fail&lt;/a&gt;, and &lt;a href="https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus"&gt;lessons from building Manus&lt;/a&gt; particularly helpful resources.&lt;/p&gt; 
&lt;h3&gt;Establish success criteria first&lt;/h3&gt; 
&lt;p&gt;Before making any change, define what success looks like:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;b&gt;Baseline.&lt;/b&gt; Fix specific git commit IDs for the current system. Think deliberately about which metrics would improve both the &lt;a href="https://sketch.dev/blog/ax"&gt;agent’s experience&lt;/a&gt; and the customer’s experience, then gather those metrics for the baseline.&lt;/li&gt; 
 &lt;li&gt;&lt;b&gt;Test scenarios.&lt;/b&gt; Which evals will measure your change’s impact? How many times will you rerun these evals? Convince yourself this set represents broader customer patterns, not just the one failure you’re investigating.&lt;/li&gt; 
 &lt;li&gt;&lt;b&gt;Comparison.&lt;/b&gt; Measure your changes against the baseline using the same metrics.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;This intentional framing protects against confirmation bias (interpreting results favorably) and sunk cost fallacy (accepting changes simply because you invested time). If your modifications don’t move the metrics as expected, reject them.&lt;/p&gt; 
&lt;p&gt;For example, when optimizing a &lt;u&gt;sub-agent&lt;/u&gt; within AWS DevOps Agent, we establish a baseline by fixing git commit IDs and running the same scenario seven times. This reveals both typical performance and variance.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-24737 size-large" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2026/01/13/sub-agent-metrics-1024x159.png" alt="Baseline metrics table comparing multiple runs of a sub-agent. Headers: Run, Correct observations, Irrelevant observations, Latency, Sub-agent Tokens, and Lead-agent Tokens. Run 1 shows 4 out of 6 correct observations with 1 irrelevant observation. Run 7 shows 5 out of 6 correct observations with 0 irrelevant observations, demonstrating variance across repeated runs of the same scenario." width="1024" height="159"&gt;&lt;/p&gt; 
&lt;p&gt;Each metric measures a different dimension:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Correct observations – How many &lt;i&gt;relevant&lt;/i&gt; signals (log records, metric data, code snippets, etc.) that are directly related to the incident did the sub-agent surface?&lt;/li&gt; 
 &lt;li&gt;Irrelevant observations – How much &lt;i&gt;noise&lt;/i&gt; did the sub-agent introduce to the lead agent? This counts signals that are unrelated to the incident and could distract the agent’s investigation.&lt;/li&gt; 
 &lt;li&gt;Latency – How long did the sub-agent take (measured in minutes and seconds)?&lt;/li&gt; 
 &lt;li&gt;Sub-agent tokens – How many tokens did the sub-agent use to accomplish its task? This serves as a proxy for the cost of running the sub-agent.&lt;/li&gt; 
 &lt;li&gt;Lead-agent tokens – How much of the lead agent’s context window is the sub-agent’s input and output consuming? This gives us a tangible way to identify optimization opportunities for the sub-agent tool: can we compress the instructions to the sub-agent or the results it returns?&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;After establishing the baseline, we compare these metrics against the same measurements with our proposed changes. This makes it clear whether the change is an actual improvement.&lt;/p&gt; 
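The baseline-then-compare workflow can be summarized with the standard library: run the scenario several times, then report typical performance and variance per metric. The run values below are illustrative numbers, not real measurements.

```python
import statistics

def summarize_metric(values: list[float]) -> dict:
    """Report mean and spread for one metric across repeated baseline runs."""
    return {
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }

# e.g. correct observations (out of 6) across seven runs of the same scenario
baseline = summarize_metric([4, 5, 4, 6, 5, 4, 5])
```

Comparing the candidate's summary against the baseline's, using the same scenarios and the same number of runs, is what turns “it passed once on my machine” into evidence of an actual improvement.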
&lt;h2&gt;Read production samples&lt;/h2&gt; 
&lt;p&gt;We’ve been fortunate to have several Amazon teams adopt AWS DevOps Agent early. A DevOps agent team member on rotation regularly samples real production runs using our trajectory visualization tool (similar to the OpenTelemetry-based visualization discussed earlier, but customized to render DevOps Agent-specific artifacts like root cause analysis reports and observations), marking whether the agent’s output was accurate and identifying failure points. Production samples are irreplaceable; they reveal the actual customer experience. Additionally, reviewing samples continuously refines your intuition of what the agent is good and bad at. When production runs aren’t satisfactory, you have real-world scenarios to iterate against: modify your agent locally, then rerun it against the same production environment until the desired outcome is reached. Establishing rapport with a few critical early adopter teams willing to partner in this way is invaluable. They provide ground truth for rapid iteration and create opportunities to identify new eval scenarios. This tight feedback loop with production data works in conjunction with eval-driven development to form a comprehensive test suite.&lt;/p&gt; 
&lt;h2&gt;Closing thoughts&lt;/h2&gt; 
&lt;p&gt;Building an agent prototype that demonstrates the feasibility of solving a real business problem is an exciting first step. The harder work is graduating that prototype into a product that performs reliably across diverse customer environments and tasks. In this post, we’ve shared five mechanisms that form the foundation for systematically improving agent quality: evals with realistic and diverse scenarios, fast feedback loops, trajectory visualization, intentional changes, and production sampling.&lt;/p&gt; 
&lt;p&gt;If you’re building an agentic application, start building your eval suite today. Even starting with a handful of critical scenarios will establish the quality baseline needed to measure and improve systematically. To see how AWS DevOps Agent applies these principles to incident response, check out our &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/getting-started-with-aws-devops-agent.html"&gt;getting started guide&lt;/a&gt;.&lt;/p&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/fe2ef495a1152561572949784c16bf23abb28057/2026/01/12/efe.jpg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Efe Karakus&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Efe Karakus is a Sr. Software Engineer on the AWS DevOps Agent team, primarily focusing on agent development.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Automating AWS SDK for Java v1 to v2 Upgrades with AWS Transform</title>
		<link>https://aws.amazon.com/blogs/devops/automating-aws-sdk-for-java-v1-to-v2-upgrades-with-aws-transform/</link>
		
		<dc:creator><![CDATA[Brent Everman]]></dc:creator>
		<pubDate>Thu, 08 Jan 2026 20:40:13 +0000</pubDate>
				<category><![CDATA[AWS Transform]]></category>
		<category><![CDATA[Technical How-to]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Java]]></category>
		<category><![CDATA[modernization]]></category>
		<guid isPermaLink="false">fe6172be505c405cc383d7d96349332ab10902cb</guid>

					<description>The AWS SDK for Java v2&amp;nbsp;represents a fundamental shift in how Java applications interact with AWS services,&amp;nbsp;addressing critical security requirements while delivering measurable performance improvements.&amp;nbsp;For organizations still operating on v1, this transition extends beyond a routine version upgrade—it’s a strategic imperative for maintaining secure, efficient cloud operations. With v1 reaching end-of-support on December 31, 2025, […]</description>
										<content:encoded>&lt;p&gt;The AWS SDK for Java v2&amp;nbsp;represents a fundamental shift in how Java applications interact with AWS services,&amp;nbsp;addressing critical security requirements while delivering measurable performance improvements.&amp;nbsp;For organizations still operating on v1, this transition extends beyond a routine version upgrade—it’s a strategic imperative for maintaining secure, efficient cloud operations. With v1 reaching end-of-support on December 31, 2025, organizations face a hard deadline where security vulnerabilities will no longer receive patches, potentially violating compliance frameworks that require current, supported software versions.&lt;/p&gt; 
&lt;p&gt;Security enhancements alone justify the migration, with v2 implementing advanced credential management, modernized encryption clients, and comprehensive TLS security protocols that v1’s architecture cannot accommodate. Beyond security, v2 delivers architectural improvements through non-blocking I/O operations and modular service clients that reduce application footprint while improving response times.&lt;/p&gt; 
&lt;p&gt;This blog post demonstrates how to automate AWS SDK for Java v1 to v2 upgrades using &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom.html"&gt;AWS Transform custom&lt;/a&gt;, enabling organizations to modernize their Java applications efficiently while minimizing manual intervention and potential errors.&lt;/p&gt; 
&lt;p&gt;AWS Transform custom uses agentic AI to perform large-scale modernization of software, code, libraries, and frameworks to reduce technical debt. It handles diverse scenarios including language version upgrades, API and service migrations, framework upgrades and migrations, code refactoring, and organization-specific transformations.&lt;/p&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;Before beginning the transformation process, verify the following requirements:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html#custom-installation"&gt;AWS Transform CLI&lt;/a&gt; installed and configured in your development environment&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-get-started.html#custom-authentication"&gt;Authentication configured&lt;/a&gt; to execute an AWS Transform custom transformation&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Solution Overview&lt;/h2&gt; 
&lt;p&gt;The AWS Transform CLI provides AWS-Managed Transformations that are pre-built, AWS-vetted transformations for common use cases that are ready to use without any additional setup. The &lt;code&gt;AWS/java-aws-sdk-v1-to-v2&lt;/code&gt; transformation enables you to upgrade the AWS SDK from v1 to v2 for Java projects. We will use this verified transformation definition to upgrade a sample Java application from AWS SDK for Java v1 to v2.&lt;/p&gt; 
&lt;h3&gt;Step 1: Prepare the Sample Project&lt;/h3&gt; 
&lt;p&gt;Clone the AWS Java sample repository to your local environment:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;git clone https://github.com/aws-samples/aws-java-sample
cd aws-java-sample&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Step 2: Execute the Transformation&lt;/h3&gt; 
&lt;p&gt;The&amp;nbsp;AWS Transform CLI’s &lt;code&gt;def exec&lt;/code&gt; command provides multiple parameters for customizing transformations. View all available options using&amp;nbsp;&lt;code&gt;atx custom def exec --help&lt;/code&gt;.&amp;nbsp;For this transformation, execute the following command (replace &lt;code&gt;&amp;lt;path_to_project&amp;gt;&lt;/code&gt; with your actual project path):&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;atx custom def exec -n AWS/java-aws-sdk-v1-to-v2 -p &amp;lt;path_to_project&amp;gt; -t -c "mvn package"&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Parameter breakdown:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;-p&lt;/code&gt;: Path to the code repository to transform&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;-n&lt;/code&gt;: Name of the transformation definition in the registry&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;-t&lt;/code&gt;: Trusts all tools (no tool prompts)&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;-c&lt;/code&gt;: Command to run when building repository&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Step 3: Provide Additional Guidance&lt;/h3&gt; 
&lt;p&gt;Before AWS Transform generates the transformation plan, it asks if there is specific guidance to take into consideration when generating the plan. For example, some organizations may require or have approved the use of specific versions of libraries. If there are specific requirements like this or guidance you would like to provide, please add it here. For this sample, enter the following prompt that demonstrates how you can specify a specific version of a library that may be needed to meet organizational requirements:&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;Please utilize version 2.34.0 of&amp;nbsp;software.amazon.awssdk&lt;/code&gt;&lt;/p&gt; 
&lt;h3&gt;Step 4: Review the Transformation Plan&lt;/h3&gt; 
&lt;p&gt;AWS Transform analyzes your project and generates a comprehensive transformation plan. This plan details all proposed changes, including:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Maven dependency updates&lt;/li&gt; 
 &lt;li&gt;API migration patterns&lt;/li&gt; 
 &lt;li&gt;Builder pattern implementations&lt;/li&gt; 
 &lt;li&gt;Exception handling updates&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The transformation plan will be outlined in a &lt;code&gt;plan.json&lt;/code&gt; file in the specified directory in the output (Figure 1). We suggest doing a thorough review of the transformation plan to ensure it encapsulates all expected updates. If adjustments are needed, feedback can be provided through the CLI interface. AWS Transform custom&amp;nbsp;incorporates all feedback provided to refine the transformation plan before proceeding.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-24713" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/29/Figure1blog.png" alt="Transformation plan output" width="600" height="411"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure 1: Transformation plan output&lt;/p&gt; 
&lt;h3&gt;Step 5: Apply the Transformation&lt;/h3&gt; 
&lt;p&gt;After confirming the transformation plan meets your requirements, type &lt;code&gt;proceed&lt;/code&gt; and press Enter. AWS Transform custom proceeds to the next step and executes the transformation according to the approved plan.&lt;/p&gt; 
&lt;h3&gt;Step 6: Verify Changes&lt;/h3&gt; 
&lt;p&gt;Once the transformation is complete, you can review the validation summary that was written to the &lt;code&gt;validation_summary.md&lt;/code&gt; file in the specified directory. After reviewing the summary, we will use our IDE (VS Code in this case; you can use your preferred mechanism) to examine the transformation results.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;POM.xml Updates:&lt;/strong&gt; The AWS SDK dependency upgrades from version 1.9.6 to 2.34.0 (Figure 2), reflecting the version that was specified during the planning phase.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-24715 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/29/Figure2blog.png" alt="POM.xml Updates" width="1207" height="453"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure 2: POM.xml Updates&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Code Pattern Updates:&lt;/strong&gt; The S3Sample.java file shown in Figure 3 demonstrates v2’s builder pattern implementation.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-24719 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/29/Figure3blog.png" alt="Builder pattern updates" width="1234" height="252"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure 3: Builder pattern updates&lt;/p&gt; 
&lt;h3&gt;Step 7: Build the application&lt;/h3&gt; 
&lt;p&gt;Since the build command was passed as part of the &lt;code&gt;-c&lt;/code&gt; parameter, AWS Transform custom will have already verified that the application builds as expected. We will also validate the transformation by building the application via the following command:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;mvn clean package&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The application should build successfully and you should see a &lt;strong&gt;BUILD SUCCESS&lt;/strong&gt; message.&lt;/p&gt; 
&lt;h3&gt;Step 8: Test the application&lt;/h3&gt; 
&lt;p&gt;Next, we will verify that the application’s functionality is working as expected after the transformation. Configure your AWS credentials and run the application:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;aws configure
mvn clean compile exec:java&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The application should execute successfully (as seen in Figure 4), demonstrating that core functionality remains intact after the transformation.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-24720 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/29/Figure4blog.png" alt="Successful application execution logs" width="1016" height="550"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure 4: Successful application execution&lt;/p&gt; 
&lt;h2&gt;Custom Transformation Definition&lt;/h2&gt; 
&lt;p&gt;While AWS provides managed transformations for modernizing legacy projects, these standard solutions may not always meet an organization’s unique requirements. Although AWS allows customization of these managed transformations through plan context and feedback mechanisms, some scenarios demand custom solutions. This is particularly true when organizations need to:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Upgrade proprietary internal frameworks&lt;/li&gt; 
 &lt;li&gt;Update custom libraries&lt;/li&gt; 
 &lt;li&gt;Manage complex SDK version upgrades&lt;/li&gt; 
 &lt;li&gt;Handle organization-specific code patterns&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;AWS Transform custom addresses these needs by enabling organizations to &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom-workflows.html#custom-create-custom-transformations"&gt;create and maintain their own transformation definitions&lt;/a&gt;. This capability offers several advantages:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Allows organizations to codify their specific modernization requirements&lt;/li&gt; 
 &lt;li&gt;Creates reusable transformation patterns&lt;/li&gt; 
 &lt;li&gt;Enables consistent application of organizational best practices&lt;/li&gt; 
 &lt;li&gt;Facilitates scalable modernization efforts across multiple projects&lt;/li&gt; 
 &lt;li&gt;Preserves and leverages institutional knowledge through documented transformations&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;By defining these custom transformations once, organizations can efficiently execute standardized modernization tasks across their entire codebase, ensuring consistency and reducing manual effort.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;The automated transformation from AWS SDK for Java v1 to v2 using the AWS Transform CLI demonstrates how organizations can modernize their Java applications efficiently while maintaining code quality and functionality. This approach eliminates the manual effort traditionally required for SDK migrations, reducing both time investment and the risk of introducing errors during the upgrade process.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Key benefits realized through this automation:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Accelerated migration timeline&lt;/strong&gt; – What typically requires weeks of manual refactoring completes in minutes&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Consistent transformation patterns&lt;/strong&gt; – Verified transformations ensure uniform code updates across your entire codebase&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Reduced human error&lt;/strong&gt; – Automated pattern recognition and replacement eliminates common migration mistakes&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Immediate security improvements&lt;/strong&gt; – Applications gain v2’s enhanced security features without extensive manual intervention&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;As AWS continues to innovate and enhance&amp;nbsp;AWS SDK for Java v2 with new features and optimizations, maintaining current SDK versions becomes increasingly important. By automating this critical upgrade process, development teams can focus on delivering business value while ensuring their applications leverage the latest AWS capabilities and security enhancements. Get started with &lt;a href="https://docs.aws.amazon.com/transform/latest/userguide/custom.html"&gt;AWS Transform custom&lt;/a&gt; today to begin your modernization journey, explore additional AWS-Managed Transformations to address other modernization use cases, or learn how to &lt;a href="https://aws.amazon.com/blogs/devops/building-a-scalable-code-modernization-solution-with-aws-transform-custom/"&gt;run transformations at enterprise scale across thousands of repositories&lt;/a&gt; using a batch solution.&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>How Kaltura Accelerates CI/CD Using AWS CodeBuild-hosted Runners</title>
		<link>https://aws.amazon.com/blogs/devops/how-kaltura-accelerates-ci-cd-using-aws-codebuild-hosted-runners/</link>
		
		<dc:creator><![CDATA[Michael Shapira]]></dc:creator>
		<pubDate>Thu, 18 Dec 2025 14:54:45 +0000</pubDate>
				<category><![CDATA[Technical How-to]]></category>
		<category><![CDATA[AWS CodeBuild]]></category>
		<guid isPermaLink="false">ac81fc0ea51753b9a800f7a58e26b13747e0dddb</guid>

					<description>This post was contributed by Adi Ziv, Senior Platform Engineer at Kaltura, with collaboration from AWS. Kaltura, a leading AI video experience cloud and corporate communications technology provider, transformed its CI/CD infrastructure by migrating to AWS CodeBuild-hosted runners for GitHub Actions. This migration reduced DevOps operational overhead by 90%, accelerated build queue times by 66%, and […]</description>
										<content:encoded>&lt;p&gt;This post was contributed by Adi Ziv, Senior Platform Engineer at Kaltura, with collaboration from AWS.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://corp.kaltura.com/"&gt;Kaltura&lt;/a&gt;, a leading AI video expirience cloud and corporate communications technology provider, transformed CI/CD infrastructure by migrating to &lt;a href="https://aws.amazon.com/codebuild/"&gt;AWS CodeBuild&lt;/a&gt;-hosted runners for GitHub Actions. This migration reduced DevOps operational overhead by 90%, accelerated build queue times by 66%, and cut infrastructure costs by 60%. Most importantly, the migration achieved these results while supporting Kaltura’s scale: over 1,000 repositories, 100+ distinct workflow types, and 1,300 daily builds across multiple development teams.&lt;/p&gt; 
&lt;p&gt;As organizations scale their engineering operations, maintaining efficient CI/CD infrastructure becomes increasingly critical. While tools like &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; simplify pipeline creation, managing the underlying infrastructure can become a significant burden for engineering teams, particularly when dealing with security requirements and private network access needs. For Kaltura, this challenge became acute as the company rapidly grew its engineering teams and onboarded new microservices weekly.&lt;/p&gt; 
&lt;p&gt;In this post, you’ll see how Kaltura modernized CI/CD infrastructure by migrating from self-managed &lt;a href="https://aws.amazon.com/eks/"&gt;Amazon Elastic Kubernetes Service&lt;/a&gt; (Amazon&amp;nbsp;EKS) runners to CodeBuild-hosted runners, implementing enhanced security features while dramatically improving performance and reducing operational overhead.&lt;/p&gt; 
&lt;h1&gt;Overview of Challenge and Solution&lt;/h1&gt; 
&lt;h2&gt;Understanding Self-Hosted Runners&lt;/h2&gt; 
&lt;p&gt;GitHub-hosted runners offer zero operational overhead, automatic scaling, and a clean slate for each job, making them an excellent choice for many development teams. However, for enterprises like Kaltura with specific security and operational requirements, self-hosted runners provided a better fit. GitHub-hosted runners operate in a shared environment that, while secure, doesn’t offer the same level of granular control that enterprises may need for sensitive workloads. By moving to self-hosted runners on AWS, Kaltura gained access to robust security controls like&lt;a href="https://aws.amazon.com/vpc/"&gt; Amazon Virtual Private Cloud&lt;/a&gt; (Amazon VPC) isolation, &lt;a href="https://aws.amazon.com/iam"&gt;AWS Identity and Access Management&lt;/a&gt; (IAM) policies, and fine-grained access management. Additionally, self-hosted runners allowed Kaltura to customize hardware configurations for their specialized needs, optimize costs for their specific usage patterns, and maintain direct access to private network resources essential for their operations.&lt;/p&gt; 
&lt;p&gt;The self-hosted runners that Kaltura initially implemented offered the control the company needed. By deploying runners within Amazon VPC, Kaltura gained crucial capabilities for enterprise-scale operations: direct access to internal resources combined with granular permissions through IAM roles. Using VPC endpoints allowed Kaltura to avoid public API requests, ensuring all traffic remained within the organization’s secure private network.&lt;/p&gt; 
&lt;h2&gt;The initial solution based on Amazon EKS&lt;/h2&gt; 
&lt;p&gt;Kaltura’s initial solution deployed self-hosted GitHub Actions runners on Amazon EKS, using &lt;a href="https://karpenter.sh/"&gt;Karpenter&lt;/a&gt; for node auto-scaling. Kaltura implemented a custom controller that would poll the GitHub API for queued workflows and spin up necessary runners. While this solution provided the security and control Kaltura needed, it introduced substantial operational challenges.&lt;/p&gt; 
&lt;p&gt;The heart of the problem was Kaltura’s polling mechanism. As the solution’s scale grew, Kaltura frequently hit GitHub’s API rate limits, forcing a reduction of polling frequency to two-minute intervals. These circumstances created a cascading effect of operational issues. The DevOps teams spent considerable time maintaining runner images, infrastructure, and scaling mechanisms. Each new repository required manual configuration updates, creating bottlenecks in the development process. To meet performance SLAs, Kaltura maintained warm runner pools, significantly increasing infrastructure costs.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/04/KalturaBlog1.drawio-1.png"&gt;&lt;img loading="lazy" class="aligncenter wp-image-24193 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/04/KalturaBlog1.drawio-1.png" alt="Architecture diagram showing Kaltura's initial CI/CD solution with GitHub repositories triggering workflows that are polled by a custom controller, which provisions GitHub Actions runners on Amazon EKS with Karpenter for auto-scaling, all operating within an Amazon VPC for secure access to internal resources" width="956" height="531"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Figure 1: The initial solution was based on Amazon EKS and Karpenter spinning up GitHub Runners.&lt;/p&gt; 
&lt;p&gt;The impact on development teams was substantial. Every workflow execution faced a minimum two-minute delay between queuing and execution. These delays accumulated throughout the day, severely impacting developer productivity. The DevOps team found themselves constantly pulled away from other initiatives to handle infrastructure maintenance tasks. The situation became increasingly untenable as Kaltura continued to scale.&lt;/p&gt; 
&lt;h2&gt;Kaltura’s Solution – AWS CodeBuild-hosted Runners&lt;/h2&gt; 
&lt;p&gt;After evaluating several options, Kaltura chose CodeBuild-hosted runners to resolve its infrastructure challenges while maintaining the security and control benefits of a self-hosted solution. This new architecture fundamentally changed how the CI/CD solution operated, moving from a poll-based to a webhook-based system.&lt;/p&gt; 
&lt;div id="attachment_24195" style="width: 946px" class="wp-caption aligncenter"&gt;
 &lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/04/Kaltura2Blog.png"&gt;&lt;img aria-describedby="caption-attachment-24195" loading="lazy" class="wp-image-24195 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/04/Kaltura2Blog.png" alt="Architecture diagram showing Kaltura's modernized CI/CD solution using AWS CodeBuild-hosted runners, where GitHub repositories send webhook notifications through AWS CodeConnections to trigger CodeBuild, which provisions runners within an Amazon VPC with IAM role-based access to AWS services for executing GitHub Actions workflows." width="936" height="542"&gt;&lt;/a&gt;
 &lt;p id="caption-attachment-24195" class="wp-caption-text"&gt;Figure 2: The solution based on AWS CodeBuild is fully managed and is based on Webhooks.&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;The new architecture operates through a straightforward but powerful flow. When developers push code to GitHub, GitHub sends an immediate webhook notification to &lt;a href="https://docs.aws.amazon.com/codeconnections/latest/APIReference/Welcome.html"&gt;AWS CodeConnections&lt;/a&gt;. This triggers CodeBuild, which provisions a runner within Kaltura’s Amazon VPC. The GitHub Actions workflow then executes on this CodeBuild runner, leveraging fine-grained IAM roles that follow the principle of least privilege to access AWS services.&lt;/p&gt; 
&lt;h2&gt;Key Architectural Components&lt;/h2&gt; 
&lt;p&gt;The webhook-based architecture eliminates previous polling challenges entirely. Instead of waiting for a periodic check, workflows begin executing immediately when triggered. CodeBuild and CodeConnections use a GitHub App with webhooks, configurable at the repository, organization, or enterprise level. This integration allows true CI/CD auto-discovery, a significant advancement from previous manual configuration requirements.&lt;/p&gt; 
&lt;p&gt;Security remains one of the major components of the new architecture. Each runner operates within Amazon VPC, maintaining strict network security requirements. Kaltura implemented fine-grained access control through IAM roles, ensuring runners access only the specific AWS services they need, such as &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html"&gt;AWS Systems Manager Parameter Store&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cloudwatch/"&gt;Amazon CloudWatch&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/secrets-manager/"&gt;AWS Secrets Manager&lt;/a&gt;. This maintains security posture while simplifying access management.&lt;/p&gt; 
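&lt;p&gt;As a hedged illustration of this least-privilege pattern, the sketch below builds an IAM policy document scoped to those three services. The actions, paths, and ARNs are hypothetical placeholders, not Kaltura’s actual policy:&lt;/p&gt;

```python
import json


def runner_policy(account_id: str, region: str) -> str:
    """Build a least-privilege IAM policy document for a CI runner role.

    Grants read-only access to one Parameter Store path, log writes to a
    dedicated CloudWatch Logs group, and one Secrets Manager secret prefix.
    All resource names below are illustrative placeholders.
    """
    statements = [
        {
            "Effect": "Allow",
            "Action": ["ssm:GetParameter", "ssm:GetParametersByPath"],
            "Resource": f"arn:aws:ssm:{region}:{account_id}:parameter/ci/*",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": f"arn:aws:logs:{region}:{account_id}:log-group:/ci/runners:*",
        },
        {
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": f"arn:aws:secretsmanager:{region}:{account_id}:secret:ci/*",
        },
    ]
    return json.dumps({"Version": "2012-10-17", "Statement": statements}, indent=2)
```

&lt;p&gt;Attaching a document like this to a runner’s IAM role limits each build to exactly the parameters, log groups, and secrets it needs.&lt;/p&gt;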
&lt;h2&gt;Infrastructure Management&lt;/h2&gt; 
&lt;p&gt;CodeBuild’s serverless nature transformed the infrastructure management approach. Rather than maintaining a complex Amazon EKS cluster with custom controllers and scaling logic, Kaltura now leverages AWS’s managed service. This shift eliminated the need to patch runner images, maintain infrastructure, or optimize scaling mechanisms.&lt;/p&gt; 
&lt;p&gt;The system’s flexibility proved particularly valuable for diverse workflow requirements. CodeBuild supports various compute configurations, from standard instances to multi-architecture builds and specialized ARM and GPU runners. Kaltura can easily match compute resources to workflow needs through simple label configurations, without managing different runner pools or maintaining separate infrastructure stacks.&lt;/p&gt; 
&lt;h2&gt;Docker Workflow Improvements&lt;/h2&gt; 
&lt;p&gt;One unexpected benefit emerged in Docker build processes. Previous Amazon EKS-based solutions required complex &lt;a href="https://www.docker.com/resources/docker-in-docker-containerized-ci-workflows-dockercon-2023/"&gt;Docker-in-Docker&lt;/a&gt; (DinD) configurations or alternative tools like &lt;a href="https://github.com/GoogleContainerTools/kaniko"&gt;Kaniko&lt;/a&gt; for container builds. CodeBuild’s native Docker support eliminated these complications. The service provides isolated build environments where Docker can run directly, with built-in layer caching capabilities that significantly improve build performance.&lt;/p&gt; 
&lt;h2&gt;Auto-Discovery and Self-Service&lt;/h2&gt; 
&lt;p&gt;A key benefit of the new architecture is its self-service capability. When development teams create new repositories or modify existing ones, no manual DevOps intervention is required. The system automatically provisions appropriate runners based on predefined configurations and the workflow’s runs-on label. This self-service approach has dramatically reduced Kaltura’s operational overhead while improving developer productivity.&lt;/p&gt; 
&lt;p&gt;Here’s a typical workflow configuration demonstrating the new approach:&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;name: Hello World&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;on: [push]&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;jobs:&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp; Hello-World-Job:&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; runs-on:&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - codebuild-myProject-${{ github.run_id }}-${{ github.run_attempt }}&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - image:${{ matrix.os }}&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - instance-size:${{ matrix.size }}&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - fleet:myFleet&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - buildspec-override:true&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; strategy:&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; matrix:&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; include:&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - os: arm-3.0&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; size: small&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - os: linux-5.0&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; size: large&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; steps:&lt;/code&gt;&lt;/p&gt; 
&lt;p style="text-align: left"&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - run: echo "Hello World!"&lt;/code&gt;&lt;/p&gt; 
&lt;p&gt;This configuration shows how Kaltura leverages CodeBuild’s flexibility while maintaining simple, declarative workflow definitions. Teams can specify their compute needs through labels, and the system handles all the underlying provisioning and management.&lt;/p&gt; 
&lt;h2&gt;Migration Approach&lt;/h2&gt; 
&lt;p&gt;The migration to CodeBuild runners was a seamless transition with minimal workflow changes. The key to its success was simplicity – most workflows required only a single change to the &lt;code&gt;runs-on&lt;/code&gt; label:&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;runs-on: codebuild-myProject-${{ github.run_id }}-${{ github.run_attempt }}&amp;nbsp;&lt;/code&gt;&lt;/p&gt; 
&lt;p&gt;This one-to-one compatibility meant that existing workflows continued to function without further modification.&lt;/p&gt; 
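&lt;p&gt;A change this mechanical can be scripted across many repositories. The following sketch is a simplified, hypothetical illustration (not Kaltura’s actual tooling) of rewriting the &lt;code&gt;runs-on&lt;/code&gt; label in a workflow file’s text:&lt;/p&gt;

```python
import re

# The CodeBuild runner label described above; "myProject" is a placeholder
NEW_LABEL = "codebuild-myProject-${{ github.run_id }}-${{ github.run_attempt }}"


def migrate_runs_on(workflow_text: str, old_label: str) -> str:
    """Replace a single-string `runs-on: <old_label>` line with the
    CodeBuild-hosted runner label, leaving everything else untouched."""
    pattern = re.compile(rf"^(\s*runs-on:\s*){re.escape(old_label)}\s*$", re.MULTILINE)
    # A callable replacement avoids re.sub interpreting the label's characters
    return pattern.sub(lambda m: m.group(1) + NEW_LABEL, workflow_text)
```

&lt;p&gt;Running a script like this over each repository’s &lt;code&gt;.github/workflows&lt;/code&gt; directory would apply the single-label change in bulk.&lt;/p&gt;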
&lt;h1&gt;Results&lt;/h1&gt; 
&lt;p&gt;The new architecture successfully handles over 1,300 daily builds across more than 1,000 repositories and 100 workflow types while serving multiple development teams with varying security requirements. The migration to CodeBuild-hosted runners delivered significant improvements across all key metrics:&lt;/p&gt; 
&lt;p&gt;Operational impact:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;90% reduction in DevOps operational overhead&lt;/li&gt; 
 &lt;li&gt;66% decrease in build queue times&lt;/li&gt; 
 &lt;li&gt;60% reduction in infrastructure costs&lt;/li&gt; 
 &lt;li&gt;30 minutes of daily time savings per developer&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Most importantly, developer satisfaction has improved due to faster builds, reduced friction, and consistent performance. The self-service nature of the system has eliminated onboarding bottlenecks and accelerated the development lifecycle.&lt;/p&gt; 
&lt;h1&gt;Conclusion&lt;/h1&gt; 
&lt;p&gt;The transformation of Kaltura’s CI/CD infrastructure through CodeBuild-hosted runners demonstrates how modern cloud services solve complex enterprise-scale development challenges. The journey from managing self-hosted runners on Amazon EKS to leveraging AWS managed services delivered a 90% reduction in operational overhead, 66% faster build queues, and 60% cost savings while maintaining enterprise-grade security requirements.&lt;/p&gt; 
&lt;p&gt;For organizations considering a similar path, we recommend starting with a pilot program using non-critical repositories. Focus on understanding your workflow requirements, security needs, and performance bottlenecks to shape an effective migration strategy. Implement cost allocation tags and monitoring early to ensure visibility into the migration’s impact and demonstrate ROI to stakeholders.&lt;/p&gt; 
&lt;h1&gt;Additional Resources&lt;/h1&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/codebuild/"&gt;AWS CodeBuild product page for features and pricing&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/getting-started-overview.html"&gt;AWS CodeBuild User Guide for implementation instructions&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://github.com/aws-samples/aws-codebuild-samples"&gt;Open-source examples and configurations on GitHub&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=25w9uJPt0SA"&gt;AWS re:Invent 2023 session on Continuous Integration and Delivery for AWS&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=25w9uJPt0SA"&gt;AWS re:Invent 2024 session on Continuous integration and continuous delivery (CI/CD) for AWS&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h1&gt;About the Authors&lt;/h1&gt; 
&lt;table style="height: 288px" width="804"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="width: 125px;height: 125px;vertical-align: top"&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/19/generatedImage.png"&gt;&lt;img loading="lazy" class="alignnone wp-image-24360" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/19/generatedImage.png" alt="" width="119" height="132"&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style="margin-left: 20px;padding-left: 40px;text-align: left;vertical-align: top"&gt;&lt;strong&gt; Adi Ziv&amp;nbsp;&lt;/strong&gt;is a Senior Platform Engineer at Kaltura with over a decade of experience designing and building scalable, resilient, and optimized cloud-native applications and infrastructure. He specializes in serverless, containerized, and event-driven architectures.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="width: 125px;height: 125px;vertical-align: top"&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/05/SCR-20251105-jixh.png"&gt;&lt;img loading="lazy" class="wp-image-24199 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/05/SCR-20251105-jixh.png" alt="MIchael Shapira photo" width="118" height="135"&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style="margin-left: 20px;padding-left: 40px"&gt;&lt;strong&gt;Michael Shapira&lt;/strong&gt; is a Senior Solution Architect at AWS specializing in Machine Learning and Generative AI solutions. With 19 years of software development experience, he is passionate about leveraging cutting-edge AI technologies to help customers transform their businesses and accelerate their cloud adoption journey. Michael is also an active member of the AWS Machine Learning community, where he contributes to innovation and knowledge sharing while helping customers scale their AI and cloud infrastructure at enterprise level. When he’s not architecting cloud solutions, Michael enjoys capturing the world through his camera lens as an avid photographer.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="vertical-align: top"&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/05/SCR-20251105-jocx.png"&gt;&lt;img loading="lazy" class="size-full wp-image-24204 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/05/SCR-20251105-jocx.png" alt="" width="296" height="342"&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style="vertical-align: top;padding-left: 40px"&gt;&lt;strong&gt;Maya Morav Freiman &lt;/strong&gt;is a Technical Account Manager at AWS helping customers maximize value from AWS services and achieve their operational and business objectives. She is part of the AWS Serverless community and has 10 years experience as a DevOps engineer.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.&lt;/p&gt; 
</content:encoded>
					
		
		
			</item>
		<item>
		<title>Resolve and prevent operational incidents with AWS DevOps Agent and New Relic</title>
		<link>https://aws.amazon.com/blogs/devops/resolve-and-prevent-operational-incidents-with-aws-devops-agent-and-new-relic/</link>
		
		<dc:creator><![CDATA[Nava Ajay Kanth Kota]]></dc:creator>
		<pubDate>Wed, 10 Dec 2025 22:03:04 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AI/ML]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Observability]]></category>
		<guid isPermaLink="false">a5d84f2fd23beddc88802068ecd9ac5ee7ca2f63</guid>

					<description>This post was co-written with Muthuvelan Swaminathan (Principal Partner Engineer) and Ruchika Bakolia (Software Engineer) from New Relic. Modern distributed systems that generate massive volumes of metrics, traces, and logs are inherently complex. The process of correlating logs, comparing configurations and switching between tools during incident management makes manual root cause analysis a bottleneck that […]</description>
										<content:encoded>&lt;p&gt;&lt;em&gt;This post was co-written with Muthuvelan Swaminathan (Principal Partner Engineer) and Ruchika Bakolia (Software Engineer) from New Relic.&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;Modern distributed systems that generate massive volumes of metrics, traces, and logs are inherently complex. The process of correlating logs, comparing configurations and switching between tools during incident management makes manual root cause analysis a bottleneck that dramatically increases the mean time to detect and resolve. Instead of manually sifting through mountains of data, Site Reliability Engineers (SREs) and DevOps teams can leverage Agentic AI to automate and enhance the incident resolution process.&lt;/p&gt; 
&lt;p&gt;To address these challenges, &lt;a href="https://partners.amazonaws.com/partners/001E000000Rl12lIAB/New%20Relic"&gt;New Relic&lt;/a&gt; partnered with AWS to integrate the &lt;a href="https://docs.newrelic.com/docs/agentic-ai/mcp/overview/"&gt;New Relic Model Context Protocol (MCP)&lt;/a&gt; server with &lt;a href="https://aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent&lt;/a&gt; to access telemetry data providing automated root cause analysis and recommendations with cutting-edge artificial intelligence. AWS DevOps Agent is a &lt;a href="https://aws.amazon.com/ai/frontier-agents"&gt;frontier agent&lt;/a&gt; that resolves and proactively prevents incidents, continuously improving reliability and performance of applications in AWS, multi-cloud, and hybrid environments.&lt;/p&gt; 
&lt;p&gt;In this blog, we’ll explore the key features of both services, how to configure them, and an example that shows how operations teams can correlate telemetry data, predict system anomalies, and initiate remediation actions to significantly accelerate mean time to resolution (MTTR).&lt;/p&gt; 
&lt;h1&gt;&lt;strong&gt;New Relic AI MCP Server&lt;/strong&gt;&lt;/h1&gt; 
&lt;p&gt;The New Relic MCP Server is a standardized gateway that connects external AI agents such as AWS DevOps Agent to New Relic’s observability data and functions. It enables autonomous agents to query live data and execute actions without requiring custom API integrations.&lt;/p&gt; 
&lt;p&gt;As customers and partners build their own AI tools, there is no longer a need to maintain a bespoke API integration. MCP enables AI agents to seamlessly interact with their telemetry data on the New Relic platform through an MCP client to leverage its capabilities and enhance their workflows.&lt;/p&gt; 
&lt;h1&gt;&lt;strong&gt;AWS DevOps Agent&lt;/strong&gt;&lt;/h1&gt; 
&lt;p&gt;AWS DevOps Agent is a frontier agent that resolves and proactively prevents incidents, continuously improving reliability and performance. AWS DevOps Agent investigates incidents and identifies operational improvements as an experienced DevOps engineer would: by learning your resources and their relationships, working with your observability tools, runbooks, code repositories, and CI/CD pipelines, and correlating telemetry, code, and deployment data across all of them to understand the relationships between your application resources.&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;Key benefits for organizations&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;The integration of in-depth observability with AWS DevOps Agent capabilities is designed to quickly resolve issues when they arise and prevent incidents for SRE and DevOps engineers. Here are a few benefits:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Automated investigations: &lt;/strong&gt;AWS DevOps Agent integrates with ticketing and alarming systems like ServiceNow to automatically launch investigations from incident tickets, accelerating incident response within your existing workflows to reduce mean time to resolution (MTTR).&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Incident coordination: &lt;/strong&gt;You can also initiate and guide investigations using interactive chat. AWS DevOps Agent acts as a member of your operations team, working directly within your collaboration tools like ServiceNow and Slack to share findings and coordinate responses.&lt;strong&gt;&amp;nbsp;&lt;/strong&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Root cause analysis: &lt;/strong&gt;AWS DevOps Agent integrates with observability tools, code repositories, and CI/CD pipelines to correlate and analyze telemetry, code, and deployment data, sharing its explored hypotheses and observations. Through systematic investigations, AWS DevOps Agent identifies the root cause of issues stemming from system changes, input anomalies, resource limits, component failures, and dependency issues across your entire environment.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Detailed mitigation plans: &lt;/strong&gt;Once AWS DevOps Agent has identified the root cause, it provides detailed mitigation plans, which include actions to resolve the incident, validate success, and revert a change if needed. AWS DevOps Agent also provides agent-ready instructions that can be implemented by another frontier agent, for example, code improvements that can be implemented by the &lt;a href="https://kiro.dev/"&gt;Kiro&lt;/a&gt; autonomous agent.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Proactively prevent future incidents: &lt;/strong&gt;AWS DevOps Agent analyzes patterns across historical incidents to provide actionable recommendations that strengthen four key areas: observability, infrastructure optimization, deployment pipeline enhancement, and application resilience.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;&lt;strong&gt;Onboarding&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;The onboarding process involves setting up an Agent Space and registering your existing New Relic servers. Onboarding does not require any new implementation.&lt;/p&gt; 
&lt;p&gt;Here are the high-level steps to create an AWS DevOps Agent Space and connect it to the New Relic MCP Server using an API-Key.&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;Setup Agent Space in AWS DevOps Agent&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;To create Agent Spaces, navigate to the AWS DevOps Agent page within the AWS Management Console. An Agent Space establishes the boundaries for the AWS DevOps Agent when accessing resources within a specific AWS account. To get started, choose the &lt;strong&gt;Create Agent Space&lt;/strong&gt; button at the top right of the screen, then enter a name, description, and IAM roles.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24658" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/09/Picture1-3.png" alt="Screen shot displaying orange Create Agent Space button in the AWS Console" width="936" height="238"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;AWS DevOps Agent creating agent space&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;Creating a New Relic association&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;Navigate to the &lt;strong&gt;Capabilities&lt;/strong&gt; tab in the Agent Space.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24659" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/09/Navigating-to-the-capabilities-tab-in-the-Agent-space.png" alt="Screen shot with a red square around the tab for Capabilities in the AWS DevOps Agent -&gt; AgentSpaces view in the AWS Console" width="1430" height="849"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Navigating to the capabilities tab in the Agent space&lt;/p&gt; 
&lt;p&gt;Go to the Telemetry section, select &lt;strong&gt;Add&lt;/strong&gt;, then choose &lt;strong&gt;New Relic&lt;/strong&gt; and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24660" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/09/Associating-New-Relic-as-the-Telemetry-provider-in-the-Agent-space.png" alt="Screen shot with Add a new source radio button selected and Select source to add has radio button New Relic selected" width="1430" height="1093"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Associating New Relic as the Telemetry provider in the Agent space&lt;/p&gt; 
&lt;p&gt;Upon successful registration of New Relic as a source, AWS DevOps Agent automatically generates a webhook URL. This URL is then used to receive alert notifications and trigger automated investigations.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24662" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/AWS-DevOps-Agent-Webhook-URL-and-Bearer-secret-key.png" alt="Screen shot for Configure Webhook Connection displaying the Webhook URL and Webhook Secret, both are redacted by a black bar." width="1430" height="675"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;AWS DevOps Agent Webhook URL and Bearer secret key&lt;/p&gt; 
&lt;p style="margin: 3.75pt 3.75pt 3.75pt 0in"&gt;&lt;span style="font-family: 'Arial',sans-serif;color: #242424"&gt;The AWS DevOps Agent webhook requires a Bearer token to be included in the HTTP header for authentication purposes. This ensures that only authorized requests are processed. In New Relic, set up &lt;/span&gt;&lt;a href="https://aws.amazon.com/eventbridge/"&gt;&lt;span style="font-family: 'Arial',sans-serif"&gt;Amazon EventBridge&lt;/span&gt;&lt;/a&gt;&lt;span style="font-family: 'Arial',sans-serif;color: #242424"&gt; as the alert destination. This configuration will trigger an &lt;/span&gt;&lt;a href="https://aws.amazon.com/lambda/"&gt;&lt;span style="font-family: 'Arial',sans-serif"&gt;AWS Lambda&lt;/span&gt;&lt;/a&gt;&lt;span style="font-family: 'Arial',sans-serif;color: #242424"&gt; function that adds the Bearer token to the HTTP header and posts the alert payload to the AWS DevOps Agent webhook URL.&lt;/span&gt;&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;Use Case Walkthrough: Retail Chain – High Latency in shopping cart service resolution&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;This use case demonstrates how the integration of AWS DevOps Agent and New Relic MCP server empowers SRE and DevOps teams to access the untapped insights in your data to reduce MTTR and drive operational excellence.&lt;/p&gt; 
&lt;p&gt;Consider the following scenario: AWS DevOps Agent gets paged when the online boutique retail store application cart is experiencing P95 latency &amp;gt; 500ms for more than 2 minutes. This latency spike is critical and far exceeds the normal 5ms threshold, impacting the ability for customers to make purchases. In a typical scenario, the operations team would spend the first 15-30 minutes manually checking dependent services, alerts dashboard, and logs. This manual effort can be significantly reduced by configuring the New Relic observability platform with AWS DevOps Agent to automatically correlate telemetry data and surface the root cause faster.&lt;/p&gt; 
&lt;p&gt;To automatically remediate this issue, the online boutique application’s microservices are configured with New Relic’s APM agents that collect relevant metrics and send them to New Relic. When the latency exceeds a predefined threshold, an alert condition is triggered within New Relic. The triggered alert sends a notification to EventBridge, which in turn executes the Lambda function. The Lambda transforms the incoming payload into the required AWS DevOps Agent payload template. It then generates an HMAC signature to verify the message’s integrity and authenticity before dispatching it to the AWS DevOps Agent webhook endpoint.&lt;/p&gt; 
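&lt;p&gt;A minimal sketch of such a transformation step is shown below. The payload fields, header names, and the use of the shared secret as both the Bearer token and the HMAC key are illustrative assumptions, not the actual AWS DevOps Agent webhook contract:&lt;/p&gt;

```python
import hashlib
import hmac
import json


def build_webhook_request(alert: dict, webhook_url: str, secret: str) -> dict:
    """Transform a New Relic alert into a signed request for the AWS DevOps
    Agent webhook. Field and header names here are illustrative only."""
    body = json.dumps(
        {
            "source": "newrelic",
            "title": alert.get("title", "unknown alert"),
            "severity": alert.get("priority", "CRITICAL"),
            "detailsUrl": alert.get("issueUrl", ""),
        },
        sort_keys=True,
    )
    # Hex-encoded HMAC-SHA256 computed over the exact request body
    signature = hmac.new(secret.encode(), body.encode(), hashlib.sha256).hexdigest()
    return {
        "url": webhook_url,
        "headers": {
            "Authorization": f"Bearer {secret}",
            "X-Signature-SHA256": signature,
            "Content-Type": "application/json",
        },
        "body": body,
    }
```

&lt;p&gt;The Lambda handler would call a function like this and then POST the returned body with the returned headers to the webhook URL generated during onboarding.&lt;/p&gt;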
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24663" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/Alert-policy-notifications-in-New-Relic.png" alt="Screen shot displaying new relic logo in the top left corner. The screen is divided into a navigation bar on the left, with Alerts selected. In the pane to the right, Alerts / Alerts Policies is displayed at the top, and below that a title Online Boutique High Latency appears. The notifications tab below that is selected." width="1430" height="715"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;Alert policy notifications in New Relic&lt;/p&gt; 
&lt;p&gt;The AWS DevOps Agent webhook triggers the agent to begin an automated investigation.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24664" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/AWS-DevOps-Agent-Incident-response-page.png" alt="Screen shot displaying AWS DevOps Agent / GoldenPath_App in the title bar with Incident Response tab selected. Below that a heading is displayed for Online Boutique All Alerts followed by a timeline displaying User Request then Assistant Response" width="1430" height="650"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;AWS DevOps Agent Incident response page&lt;/p&gt; 
&lt;p&gt;The AWS DevOps Agent first queries the New Relic MCP to retrieve telemetry data for the cart service GUID. It then makes a second request to the New Relic MCP to formulate an investigation plan, which includes a list of related entities, their key metrics, and any associated change events for those dependencies.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24666" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/AWS-DevOps-Agent-and-New-Relic-MCP-interaction-to-list-entities-and-related-change-events.png" alt="Zoomed in screen shot of the previous screen with red boxes highlighting two areas in the timeline which say NewRelic MCP list related entities and NewRelic MCP list change events. Each shots the detail for the tool call." width="1430" height="983"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;AWS DevOps Agent and New Relic MCP interaction to list entities and related change events&lt;/p&gt; 
&lt;p&gt;Next, data gathering tasks are executed using New Relic MCP, following the investigation plan.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24667" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/AWS-DevOps-Agent-and-New-Relic-MCP-interaction-to-explore-and-analyze-traces.png" alt="Time line screen similar to the previous screen shot with a red box around the timeline entry for Explore traces." width="1430" height="562"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;AWS DevOps Agent and New Relic MCP interaction to explore and analyze traces&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24668" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/AWS-DevOps-Agent-and-New-Relic-MCP-interaction-to-explore-and-analyze-logs-and-metrics.png" alt="Time line screen similar to the previous screen shot with a red box around the timeline entries for NewRelic MCP analyze entity logs and NewRelic MCP analyze golden metrics" width="1430" height="473"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;AWS DevOps Agent and New Relic MCP interaction to explore and analyze logs and metrics&lt;/p&gt; 
&lt;p&gt;Continuing its analysis, the agent leverages New Relic’s MCP to examine entity logs, golden metrics, and traces, ultimately identifying the root cause of the latency spike.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24669" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/AWS-DevOps-Agent-Root-Cause-Analysis.png" alt="Screen shot displaying AWS DevOps Agent / GoldenPath_App in the title bar with Incident Response tab selected. Below that a heading is displayed for Online Boutique All Alerts followed by a timeline displaying Update, Finding, and then Root cause. Root cause has a red box outlining it to draw attention." width="1430" height="1002"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;AWS DevOps Agent Root Cause Analysis&lt;/p&gt; 
&lt;p&gt;You can review AWS DevOps Agent’s findings and the suggested root cause. The Site Reliability Engineer (SRE) can interact with the AWS DevOps Agent in the chat panel (shown in the side panel) to get clarification on the steps of the ongoing investigation, enabling more effective monitoring and troubleshooting.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24670" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/AWS-DevOps-Agent-Chat-interface.png" alt="creen shot displaying AWS DevOps Agent / GoldenPath_App in the title bar with Incident Response tab selected. A timeline is visible on the lift and a chat window has been expanded on the right. The chat window contains a question and response." width="1430" height="858"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;AWS DevOps Agent Chat interface&lt;/p&gt; 
&lt;p&gt;After reviewing the findings and the suggested root cause, the SRE executes the appropriate mitigation plan if necessary.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;By integrating the New Relic MCP server with AWS DevOps Agent, organizations can quickly resolve issues when they arise and proactively prevent future incidents. This collaboration reduces Mean Time to Resolution (MTTR) and frees SREs and DevOps teams from manual, time-consuming investigations, ensuring rapid remediation of technical disruptions and minimizing impact to the business. Ultimately, AWS DevOps Agent, a new frontier agent, drives operational excellence in conjunction with the New Relic One Observability platform.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;About New Relic&lt;/strong&gt;&lt;br&gt; &lt;a href="https://newrelic.com/"&gt;The New Relic Intelligent Observability Platform&lt;/a&gt;&amp;nbsp;helps businesses eliminate interruptions in digital experiences. New Relic is an AI-strengthened platform that unifies and pairs telemetry data to provide clarity over your entire digital estate for proactive and predictive problem solving. That’s why businesses around the world run on New Relic to drive innovation, improve reliability, and deliver exceptional customer experiences to fuel growth.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Authors&lt;/strong&gt;&lt;/p&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/Muthu.jpg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Muthuvelan Swaminathan&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Muthuvelan Swaminathan is a Principal Partner Architect at New Relic partnership organization building technical integrations with leading cloud providers and strategic partners. Through partner enablement, solution engineering and ecosystem alignment Muthuvelan helps drive product innovation at New Relic to ensure enterprises eliminate disruptions in their digital experiences for their customers.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/Ruchika.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Ruchika Bakolia&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Ruchika Bakolia is a Software Engineer at New Relic. She is passionate about the intersection of AI and Cloud technologies, with extensive experience building and integrating solutions primarily on AWS. Ruchika enjoys traveling, reading, and exploring creative pursuits like pottery, always seeking out new experiences and challenges.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/12/10/Ajay.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Nava Ajay Kanth Kota&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Ajay Kota is a Senior Partner Solutions Architect at AWS, currently serving on the Amazon Partner Organization (APO) team collaborating closely with ISV Partners. With over 23 years of experience in enterprise computing infrastructure, Ajay brings deep expertise in cloud architecture, storage, backup, and cloud solutions. Before joining AWS, he led Storage, Backup, and Cloud teams, where he was responsible for developing Managed Services offerings across these domains.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Accelerate autonomous incident resolutions using the Datadog MCP server and AWS DevOps agent (in preview)</title>
		<link>https://aws.amazon.com/blogs/devops/accelerate-autonomous-incident-resolutions-using-the-datadog-mcp-server-and-aws-devops-agent-in-preview/</link>
		
		<dc:creator><![CDATA[Nina Chen]]></dc:creator>
		<pubDate>Thu, 04 Dec 2025 16:55:21 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[DevOps]]></category>
		<guid isPermaLink="false">07ed4b51c27c9df844ddb13bc4c1a6eac89f19ed</guid>

					<description>This post was co-written with Omri Sass (Director of Product Management), Cansu Berkem (Director of Product Management), and Mohammad Jama (Product Marketing Manager) from Datadog. On-call engineers spend hours manually investigating incidents across multiple observability tools, logs, and monitoring systems. This process delays incident resolution and impacts business operations, especially when teams need to correlate […]</description>
										<content:encoded>&lt;p&gt;&lt;em&gt;This post was co-written with Omri Sass (Director of Product Management), Cansu Berkem (Director of Product Management), and Mohammad Jama (Product Marketing Manager) from Datadog.&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;On-call engineers spend hours manually investigating incidents across multiple observability tools, logs, and monitoring systems. This process delays incident resolution and impacts business operations, especially when teams need to correlate data across different monitoring platforms. &lt;a href="https://aws.amazon.com/devops-agent/"&gt;AWS DevOps Agent&lt;/a&gt; (in preview) is a frontier agent that resolves and proactively prevents incidents, continuously improving reliability and performance of applications in AWS, multicloud, and hybrid environments. Frontier agents represent a new class of AI agents that are autonomous, massively scalable, and work for hours or days without constant intervention. AWS DevOps Agent offers built-in integration with &lt;a href="https://docs.datadoghq.com/bits_ai/mcp_server/"&gt;Datadog Model Context Protocol (MCP) Server&lt;/a&gt;, enabling you to access the untapped insights in your data by connecting directly to Datadog’s monitoring solutions. DevOps Agent maps your application resources and correlates telemetry, code, and deployment data to reduce MTTR (Mean Time To Resolution) and drive operational excellence.&lt;/p&gt; 
&lt;p&gt;You can use this integration to collect and analyze Datadog logs, metrics, and traces, correlating this data across AWS services. When incidents occur, AWS DevOps Agent identifies issues and provides mitigation plans, which engineers can then implement. Engineers can monitor automated investigations through a central dashboard and engage with the agent through interactive chat at any time. Using this integration, engineers can reduce MTTR from hours to minutes, while maintaining full visibility into automated actions.&lt;/p&gt; 
&lt;h2&gt;How Datadog MCP and AWS DevOps Agent work together&lt;/h2&gt; 
&lt;p&gt;The integration between Datadog MCP Server and AWS DevOps Agent connects your monitoring data with automated incident response. Datadog MCP Server acts as a central access point for your monitoring data. It securely connects to Datadog through a standardized protocol, allowing AWS DevOps Agent to query logs, metrics, and traces during investigations. The service uses OAuth 2.0 authentication and supports multiple regions to help maintain data sovereignty requirements.&lt;/p&gt; 
&lt;p&gt;AWS DevOps Agent learns your resources and relationships while correlating data from both AWS services and Datadog. It analyzes Amazon CloudWatch logs and metrics, deployment data, and code alongside Datadog telemetry to build a complete picture of the incident. This combined view helps identify root causes faster than examining each data source separately. Security considerations are built into every interaction: all interactions between AWS DevOps Agent and Datadog MCP Server use authentication, authorization, encryption, and logging for audit purposes. While the service currently runs only in us-east-1, it can monitor and analyze applications deployed across any AWS Region in customer accounts globally.&lt;/p&gt; 
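&lt;p&gt;Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages over the standardized protocol mentioned above. As an illustration only (the tool name and arguments below are hypothetical; the real tool catalog is advertised by the Datadog MCP Server through a tools/list call), a tools/call request has this shape:&lt;/p&gt;

```python
import json

# Sketch of the JSON-RPC 2.0 message shape defined by the Model Context
# Protocol (MCP). "search_logs" and its arguments are hypothetical; an MCP
# client discovers the actual tools by first issuing a "tools/list" request.
def mcp_tool_call(request_id, tool_name, arguments):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

request = mcp_tool_call(1, "search_logs", {"query": "status:error service:api-gateway"})
```

&lt;p&gt;The server responds with a matching JSON-RPC result containing the tool output, which the agent then correlates with its other telemetry sources.&lt;/p&gt;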
&lt;h2&gt;Setting up and using AWS DevOps Agent with Datadog&lt;/h2&gt; 
&lt;p&gt;In this section, we will guide you through the steps required to enable Datadog MCP Server in your AWS DevOps Agent account and configure it for incident resolution.&lt;/p&gt; 
&lt;h3&gt;Prerequisites&lt;/h3&gt; 
&lt;p&gt;For this walkthrough, you should have access to and understanding of the following:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;An &lt;a href="https://signin.aws.amazon.com/signin?redirect_uri=https%3A%2F%2Fportal.aws.amazon.com%2Fbilling%2Fsignup%2Fresume&amp;amp;client_id=signup"&gt;AWS account&lt;/a&gt; with permissions to create AWS IAM (Identity and Access Management) roles: 
  &lt;ul&gt; 
   &lt;li&gt;Agent Space role – for basic service operations&lt;/li&gt; 
   &lt;li&gt;Agent Space web app role – for using the Agent Space web app functionality&lt;/li&gt; 
   &lt;li&gt;(Optional) Secondary source account roles if monitoring multiple AWS accounts. Refer to the DevOps Agent user guide for details on setting up these roles.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;A Datadog account&lt;/li&gt; 
 &lt;li&gt;Access to Datadog MCP Server (in preview)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Setting up Datadog in the AWS DevOps Agent console&lt;/h3&gt; 
&lt;p&gt;Start the setup in the AWS DevOps Agent console by connecting your Datadog MCP Server. Navigate to Settings, select the Datadog integration panel, and choose “Register.” Enter your Datadog MCP Server details when prompted (you can learn more about requesting access to this server in &lt;a href="https://docs.datadoghq.com/bits_ai/mcp_server/"&gt;their documentation&lt;/a&gt;). AWS DevOps Agent validates the connection and displays a confirmation message.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://aws.amazon.com/blogs/devops/?attachment_id=24573"&gt;&lt;img loading="lazy" class="aligncenter wp-image-24573 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/27/enterdatadogmcp.png" alt="This is the configuration in AWS DevOps Agent for Datadog MCP Server Details with three input fields: Server Name (with example 'my-datadog-server'), Endpoint URL (showing 'https://mcp.datadog.com/api/unstable/mcp-server/mcp'), and an optional Description field. The form includes navigation steps at the top and Cancel/Next buttons at the bottom. The interface has a dark theme with blue accents." width="2452" height="1218"&gt;&lt;/a&gt;Figure 1: Setting up Datadog MCP Server in AWS DevOps Agent Console&lt;/p&gt; 
&lt;h3&gt;Create an AWS DevOps Agent Agent Space&lt;/h3&gt; 
&lt;p&gt;Next, create an Agent Space in your primary AWS account. This requires an AWS IAM role that grants AWS DevOps Agent access to your AWS resources. After creating your Agent Space, add Datadog MCP Server as a telemetry source to enable comprehensive incident investigation.&lt;/p&gt; 
&lt;p&gt;To create your Agent Space, start by accessing the AWS DevOps Agent console in us-east-1. Choose the “Create Agent Space” button and provide a meaningful name and description for your space. After submitting the form, you’ll need to configure the required IAM roles, which can be done through either the automated creation process or manual setup.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://aws.amazon.com/blogs/devops/?attachment_id=24545"&gt;&lt;img loading="lazy" class="aligncenter wp-image-24545 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/26/createdevopsagent.png" alt="This is the configuration for creating an AWS DevOps Agent AgentSpace. The screen shows the option to create a DevOps Agents, with areas to give agent details, resource access, and more. The interface is dark blue theme. " width="2328" height="1272"&gt;&lt;/a&gt;Figure 2: Creating a AWS DevOps Agent in Agent Space&lt;/p&gt; 
&lt;p&gt;Your Agent Space topology can be initialized using either AWS CloudFormation stacks or AWS Tags as starting points to identify your application components. Once the basic setup is complete, you can enhance your Agent Space configuration by adding secondary source accounts for multi-account monitoring and by configuring integrations with services such as a SIM ticketing system, Pipelines (where GitFarm packages and CloudFormation Stacks are located), Slack, and, most importantly for our use case, Telemetry with the Datadog MCP Server.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://aws.amazon.com/blogs/devops/?attachment_id=24574"&gt;&lt;img loading="lazy" class="aligncenter wp-image-24574 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/27/telemetryview.png" alt="This is a page that has options for adding telemetry source (datadog) in agent space. Here, there is a pop-up to add source association. The selected source here to add is Datadog. " width="2354" height="1208"&gt;&lt;/a&gt;Figure 3: Add additional telemetry sources for AWS DevOps Agent to investigate&lt;/p&gt; 
&lt;p&gt;From here, we can launch the Agent Space web app to begin the investigation.&lt;/p&gt; 
&lt;h3&gt;Real-World example: Resolving API Gateway errors&lt;/h3&gt; 
&lt;p&gt;Let’s walk through how AWS DevOps Agent and Datadog work together to resolve a production incident. In this scenario, Datadog detects a spike in Amazon API Gateway 5XX errors affecting downstream services.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://aws.amazon.com/blogs/devops/?attachment_id=24547"&gt;&lt;img loading="lazy" class="aligncenter wp-image-24547 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/26/datadogmonitoringview.png" alt="This is a sample monitor view of sample 5XX errors in Datadog. There is a monitor of Amazon API Gateway pulled up. On the right, there is a monitor showing &amp;quot;Your 5XX Errors&amp;quot; with over 220 errors. " width="2258" height="1102"&gt;&lt;/a&gt;Figure 4: Sample API Gateway errors in Datadog&lt;/p&gt; 
&lt;h3&gt;Investigating 5XX errors from API Gateway Incident with the Datadog MCP Server and AWS DevOps Agent&lt;/h3&gt; 
&lt;p&gt;When the alert triggers, AWS DevOps Agent automatically analyzes both Datadog metrics and API Gateway logs. Through the investigation chat interface, an engineer guides AWS DevOps Agent to examine the API Gateway configuration. The agent correlates API Gateway and AWS Lambda execution logs, quickly identifying error patterns.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://aws.amazon.com/blogs/devops/?attachment_id=24549"&gt;&lt;img loading="lazy" class="aligncenter wp-image-24549 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/26/AWSDevOpsAgentview.png" alt="This is a view in AWS DevOps Agent to allow for investigating an incident with AWS DevOps Agent and Datadog MCP" width="2292" height="1174"&gt;&lt;/a&gt;Figure 4: Investigating an incident with AWS DevOps Agent and Datadog MCP&lt;/p&gt; 
&lt;h3&gt;Resolution and prevention&lt;/h3&gt; 
&lt;p&gt;AWS DevOps Agent helps identify potential misconfigurations in the Lambda and Amazon DynamoDB integration and implements immediate fixes. The agent documents all findings and actions in an incident record, backed by telemetry from both Datadog and AWS services. After resolution, AWS DevOps Agent generates a detailed analysis report with specific recommendations to prevent similar incidents. Teams can review and implement these suggestions through the Prevention feature in the AWS DevOps Agent web app.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://aws.amazon.com/blogs/devops/?attachment_id=24550"&gt;&lt;img loading="lazy" class="aligncenter wp-image-24550 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/26/AWSDevOpsAgentinvestigationreport-.png" alt="This view show the investigation summary produced by AWS DevOps Agent. Here, we see the root cause for this sample incident. The root cause head line states that &amp;quot;1. DynamoDB table name misconfiguration - typo in environment variable&amp;quot;. There is a longer description explaining this under it. The background for this view is plain white. " width="2282" height="1156"&gt;&lt;/a&gt;Figure 5: Investigation summary produced by AWS DevOps Agent&lt;/p&gt; 
&lt;h3&gt;Clean up&lt;/h3&gt; 
&lt;p&gt;When you’re done using the integration, you can clean up your resources by following these steps:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Delete your Agent Space from the AWS DevOps Agent console&lt;/li&gt; 
 &lt;li&gt;Remove the Datadog MCP Server connection from your settings&lt;/li&gt; 
 &lt;li&gt;Delete the IAM roles created for the Agent Space&lt;/li&gt; 
 &lt;li&gt;(Optional) If you created additional source account roles, remove those as well&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;The integration between Datadog MCP Server and AWS DevOps Agent reduces incident resolution time by automatically correlating data across your monitoring tools. Instead of manually switching between Datadog and AWS dashboards during incidents, teams can now get an AI-powered investigation that identifies root causes and suggests fixes. Early adopters report significant improvements in their incident response. Resolution times drop from hours to minutes, while on-call teams spend less time gathering data. Teams also see more consistent incident responses and improved root cause analysis through comprehensive data correlation. To learn more, check out the &lt;a href="http://aws.amazon.com/devops-agent"&gt;AWS DevOps Agent product page&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.datadoghq.com/"&gt;Datadog&lt;/a&gt; is an &lt;a href="https://partners.amazonaws.com/partners/001E000000Rp57sIAB/Datadog Inc"&gt;AWS Specialization Partner&lt;/a&gt; and &lt;a href="https://aws.amazon.com/marketplace/seller-profile?id=e56c35d0-c5d4-4dac-91d5-ebf57fef6e5c"&gt;AWS Marketplace Seller&lt;/a&gt; that has been building integrations with AWS services for over a decade, amassing a growing catalog of 100+ AWS and 1000+ built-in integrations. This new AWS DevOps Agent and Datadog MCP Server integration builds upon Datadog’s strong track record of AWS partnership success. If you’re not already using Datadog, you can get started with a &lt;a href="https://signin.aws.amazon.com/oauth?client_id=arn%3Aaws%3Aiam%3A%3A015428540659%3Auser%2Fawsmp-contessa&amp;amp;redirect_uri=https%3A%2F%2Faws.amazon.com%2Fmarketplace%2Fpp%2Fprodview-7tlwraipohxq6%3Fsc_channel%3Del%26source%3Ddatadog%26trk%3D176b570f-20dd-4b84-aa7e-cae53990fe91%26isauthcode%3Dtrue&amp;amp;response_type=code"&gt;14-day free trial via the AWS Marketplace&lt;/a&gt;.&lt;/p&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/26/Screenshot-2025-11-26-at-5.51.53 PM-150x150.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Sujatha Kuppuraju&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Sujatha Kuppuraju is a Principal Solutions Architect at AWS, specializing in Cloud and, Generative AI Security. She collaborates with software companies’ leadership teams to architect secure, scalable solutions on AWS and guide strategic product development. Leveraging her expertise in cloud architecture and emerging technologies, Sujatha helps organizations optimize offerings, maintain robust security, and bring innovative products to market in an evolving tech landscape.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/26/Screenshot-2025-11-26-at-5.48.57 PM-150x150.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;DhilipVenkatesh Uvarajan&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;DhilipVenkatesh Uvarajan is as an Enterprise Support Lead TAM within AWS Enterprise Support, specializing in Independent Software Vendors (ISVs) across the United States. In this role, Dhilip provides strategic technical guidance to help customers innovate, optimize their AWS architecture, and ensure the seamless operation of their business-critical applications on the AWS cloud. Beyond his professional endeavors, Dhilip is passionate about AI and Robotics, often exploring innovative projects in his spare time.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="alignnone wp-image-24558" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/26/Screenshot-2025-11-26-at-5.53.11 PM-211x300.png" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Nina Chen&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Nina Chen is a Customer Solutions Manager at AWS specializing in leading software companies to leverage the power of the AWS cloud to accelerate their product innovation and growth. With over 4 years of experience working in the strategic Independent Software Vendor (ISV) vertical, Nina enjoys guiding ISV partners through their cloud transformation journeys, helping them optimize their cloud infrastructure, driving product innovation, and delivering exceptional customer experiences.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="alignnone size-medium wp-image-24593" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Screenshot-2025-11-28-at-3.36.40 PM-300x300.png" alt="" width="300" height="300"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Omri Sass&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Omri Sass is a Director of Product Management at Datadog, where he’s overseen the development and launch of a multitude of products and capabilities including Bits AI SRE and updog.ai. He is a keen advocate for good user experience and doing what’s right by users.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="alignnone size-medium wp-image-24596" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Screenshot-2025-11-28-at-3.40.13 PM-296x300.png" alt="" width="296" height="300"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Cansu Berkem&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Cansu Berkem is a Director of Product Management at Datadog, overseeing the company’s end-to-end incident response experience, including Incident Management, On-Call, Automations, and Bits AI SRE. Her products help engineers resolve incidents faster through AI-driven workflows, powered by Bits AI SRE as an autonomous incident investigator and supported by integration-rich incident management and paging flows.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="alignnone size-medium wp-image-24595" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Screenshot-2025-11-28-at-3.39.38 PM-300x300.png" alt="" width="300" height="300"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Mohammad Jama&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Mohammad Jama is a Product Marketing Manager at Datadog. He leads go-to-market for Datadog’s AWS integrations, working closely with product, marketing, and sales to help companies observe and secure their hybrid and AWS environments.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Building with AI-DLC using Amazon Q Developer</title>
		<link>https://aws.amazon.com/blogs/devops/building-with-ai-dlc-using-amazon-q-developer/</link>
		
		<dc:creator><![CDATA[Will Matos]]></dc:creator>
		<pubDate>Sat, 29 Nov 2025 21:07:21 +0000</pubDate>
				<category><![CDATA[Amazon Machine Learning]]></category>
		<category><![CDATA[Amazon Q]]></category>
		<category><![CDATA[Amazon Q Developer]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Best Practices]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Software]]></category>
		<guid isPermaLink="false">aa62feef21e6b678c0ceaf0d128bb63535aacd37</guid>

					<description>The AI-Driven Development Life Cycle (AI-DLC) methodology marks a significant change in software development by strategically assigning routine tasks to AI while maintaining human oversight for critical decisions. Amazon Q Developer, a generative AI coding assistant, supports the entire software development lifecycle and offers the Project Rules feature, allowing users to tailor their development practices […]</description>
										<content:encoded>&lt;p&gt;The &lt;a href="https://aws.amazon.com/blogs/devops/ai-driven-development-life-cycle/"&gt;AI-Driven Development Life Cycle (AI-DLC)&lt;/a&gt; methodology marks a significant change in software development by strategically assigning routine tasks to AI while maintaining human oversight for critical decisions. &lt;a href="https://aws.amazon.com/q/developer/"&gt;Amazon Q Developer&lt;/a&gt;, a generative AI coding assistant, supports the entire software development lifecycle and offers the &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/context-project-rules.html"&gt;Project Rules&lt;/a&gt; feature, allowing users to tailor their development practices within the platform.&lt;/p&gt; 
&lt;p&gt;Recently, &lt;a href="https://aws.amazon.com/blogs/devops/open-sourcing-adaptive-workflows-for-ai-driven-development-life-cycle-ai-dlc/"&gt;AWS made its AI-DLC workflow open-source&lt;/a&gt;, enabling developers to create software using this methodology. This workflow is implemented in Amazon Q Developer through its Project Rules customization feature. In this post, we will demonstrate how the AI-DLC workflow operates in Amazon Q Developer using an example use case.&lt;/p&gt; 
&lt;h2&gt;AI-DLC Workflow Overview&lt;/h2&gt; 
&lt;p&gt;The AI-DLC workflow is the practical implementation of the AI-DLC methodology for executing software development tasks. As outlined in the &lt;a href="https://prod.d13rzhkk8cj2z0.amplifyapp.com/"&gt;AI-DLC Method Definition Paper&lt;/a&gt;, the workflow has three phases: Inception, which involves planning and architecture; Construction, which focuses on design and implementation; and Operations, which covers deployment and monitoring. Each phase includes distinct stages that address specific software development life cycle functions. The workflow adapts to project requirements by analyzing the request, the codebase, and the task’s complexity to determine which stages are necessary. Simple bug fixes skip planning and go directly to code generation, while complex features require requirements analysis, architectural design, and detailed testing.&lt;/p&gt; 
&lt;p&gt;The workflow maintains quality and control through structured milestones and transparent decision-making. At each phase, AI-DLC asks clarifying questions, creates execution plans, and waits for approval. Every decision, input, and response is logged in an audit trail for traceability. Whether building a new microservice, refactoring legacy code, or fixing a production bug, AI-DLC scales its rigor to match the task: comprehensive when complex, efficient when simple, and always keeping you in control. Figure 1 shows the phases and stages within the adaptive AI-DLC workflow. The stages shown in green boxes are mandatory, while those in yellow boxes are conditional.&lt;/p&gt; 
&lt;div id="attachment_24513" style="width: 2772px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24513" loading="lazy" class="wp-image-24513 size-full" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/01_AI-DLC_workflow-1.png" alt="AI-DLC workflow diagram showing three phases: Inception Phase (blue) with mandatory steps for Workspace Detection, Requirements Analysis, and Workflow Planning, plus conditional steps for Reverse Engineering, User Stories, Application Design, and Units Generation; Construction Phase (green) with conditional steps for Functional Design, NFR Requirements, NFR Design, and Infrastructure Design, followed by mandatory Code Generation and Build and Test steps that loop for each unit; and Operations Phase (orange) with an Operations step. The workflow flows from User Request at the top to Complete at the bottom." width="2762" height="1996"&gt;
 &lt;p id="caption-attachment-24513" class="wp-caption-text"&gt;Figure 1. SDLC phases and stages in AI-DLC workflow&lt;/p&gt;
&lt;/div&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;Before we begin the walk-through, we must have an AWS account or AWS Builder ID for authenticating Amazon Q Developer. If you don’t have one, sign up for an &lt;a href="https://aws.amazon.com/"&gt;AWS account&lt;/a&gt; or &lt;a href="https://docs.aws.amazon.com/signin/latest/userguide/create-builder-id.html"&gt;create an AWS Builder ID&lt;/a&gt;. You can use any of the &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/q-in-IDE.html#supported-ides-features"&gt;Integrated Development Environments (IDEs) supported by Amazon Q Developer&lt;/a&gt; and install the extension as per the &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/q-in-IDE-setup.html"&gt;AWS documentation&lt;/a&gt;. In this post, we’ll be using the &lt;a href="https://marketplace.visualstudio.com/items?itemName=AmazonWebServices.amazon-q-vscode"&gt;Amazon Q Developer extension in VS Code&lt;/a&gt;. Once the extension is installed, you’ll need to authenticate Q Developer with the AWS cloud backend. Refer to the AWS documentation for &lt;a href="https://marketplace.visualstudio.com/items?itemName=AmazonWebServices.amazon-q-vscode"&gt;Q Developer authentication instructions&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;The AI-DLC workflow generates various Mermaid diagrams in markdown files. To view these diagrams within your IDE, you can install a Mermaid viewer plugin.&lt;/p&gt; 
&lt;h2&gt;Let’s Begin Building!&lt;/h2&gt; 
&lt;p&gt;Let’s construct a simple &lt;a href="https://en.wikipedia.org/wiki/River_crossing_puzzle"&gt;River Crossing Puzzle&lt;/a&gt; as a web UI app using AI-DLC. By choosing a straightforward app, we can concentrate more on learning the AI-DLC workflow and less on the project’s technical intricacies.&lt;/p&gt; 
&lt;p&gt;The sections below outline the individual steps in the AI-DLC development process using Amazon Q Developer. We’ll showcase screenshots of our IDE with the Amazon Q Developer plug-in and demonstrate how to interact with the workflow.&lt;/p&gt; 
&lt;p&gt;Although we’ve used the &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/q-in-IDE.html"&gt;Amazon Q Developer IDE plug-in&lt;/a&gt; in this blog post, you can also use &lt;a href="https://kiro.dev/cli/"&gt;Kiro Command Line Interface (CLI)&lt;/a&gt; to build with AI-DLC without any additional setup. The workflow remains the same, except that you’ll interact through the command line instead of the graphical interface in the IDE.&lt;/p&gt; 
&lt;div style="border: 1px solid black;padding: 5px"&gt;
 &lt;i&gt;As we progress through the workflow, your AI-DLC experience will be tailored to your specific problem statement. You’ll also notice the probabilistic nature of large language models (LLMs), as the questions and artifacts generated by them will vary from one run to another for the same problem statement. For example, if you attempt to replicate the same problem statement we used in this blog post, your experience will likely differ. This is expected and desirable. Despite these minor variations, we’ll eventually find a solution to the problem we initially set out to address.&lt;/i&gt;
&lt;/div&gt; 
&lt;h2&gt;Step 1: Clone GitHub repo containing the AI-DLC Q Developer Rules&lt;/h2&gt; 
&lt;p&gt;Clone the &lt;a href="https://github.com/awslabs/aidlc-workflows"&gt;GitHub repo&lt;/a&gt; containing the AI-DLC Q Developer Rules:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;git clone https://github.com/awslabs/aidlc-workflows.git&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Step 2: Load Q Developer Rules in your project workspace&lt;/h2&gt; 
&lt;p&gt;Follow the &lt;code&gt;README.md&lt;/code&gt; instructions in the &lt;a href="https://github.com/awslabs/aidlc-workflows"&gt;GitHub repo&lt;/a&gt; to copy the rules files over to your project folder.&lt;/p&gt; 
&lt;h2&gt;Step 3: Install and authenticate Amazon Q Developer Extension in IDE&lt;/h2&gt; 
&lt;p&gt;Open the project folder you created in Step 2 in VS Code. Open the Amazon Q Chat Panel in the IDE and ensure that the AI-DLC workflow rules are loaded in Q Developer, as shown in Figure 2. If you don’t see what’s shown in Figure 2, please double-check the steps you performed in Step 2.&lt;/p&gt; 
&lt;div id="attachment_24514" style="width: 2936px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24514" loading="lazy" class="size-full wp-image-24514" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/02_AI-DLC_Rules-1.png" alt="Screenshot showing four steps to access AI-DLC rules in Amazon Q: Step 1 shows opening Amazon Q Chat Panel from the left sidebar; Step 2 shows opening a chat session in Amazon Q at the top; Step 3 shows clicking on the Rules button in the chat interface; Step 4 shows ensuring AI-DLC rules are loaded in the rules panel on the right side of the screen." width="2926" height="898"&gt;
 &lt;p id="caption-attachment-24514" class="wp-caption-text"&gt;Figure 2: AI-DLC rules enabling in Amazon Q Developer&lt;/p&gt;
&lt;/div&gt; 
&lt;h2&gt;Step 4: Start the AI-DLC workflow by entering a high-level problem statement&lt;/h2&gt; 
&lt;p&gt;Our development environment is now set up, and we’re ready to begin application development using AI-DLC. In our Q Developer chat session, we enter the following problem statement:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-text"&gt;Using AI-DLC let's build a web application to solve the river crossing puzzle.&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;Notice that we’ve prefixed our problem statement with “Using AI-DLC …” to ensure that Q Developer engages the AI-DLC workflow.&amp;nbsp;Figure 3 shows what happens next. The AI-DLC workflow is triggered within Q Developer. It greets us with a welcome message and provides a brief overview of the AI-DLC methodology.&lt;/p&gt; 
&lt;p&gt;Figure 3 shows an expanded view of the AI-DLC workflow rules folder structure on the left. You’ll notice that a single &lt;code&gt;aws-aidlc-rules/core-workflow.md&lt;/code&gt; file is placed in the &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/context-project-rules.html"&gt;designated&lt;/a&gt; &lt;code&gt;.amazonq/rules&lt;/code&gt; folder, while the rest of the rules files are placed in an ordinary &lt;code&gt;aws-aidlc-rule-details&lt;/code&gt; folder. This arrangement is designed to optimize model efficiency. Because the &lt;code&gt;aws-aidlc-rules/core-workflow.md&lt;/code&gt; file lives in the &lt;code&gt;.amazonq/rules&lt;/code&gt; folder, it is always loaded as context, ensuring that the core workflow structure is accessible without incurring additional token consumption. Conversely, the detailed phase- and stage-level behavior rules are stored in the &lt;code&gt;aws-aidlc-rule-details&lt;/code&gt; folder and are dynamically loaded as required. This approach conserves Amazon Q’s context window and token usage by retaining only the necessary information in context at any given time, thereby enhancing model efficiency.&lt;/p&gt; 
&lt;p&gt;The rules files under the &lt;code&gt;aws-aidlc-rule-details&lt;/code&gt; folder are organized into three sub-folders, each representing a phase of AI-DLC. Within each phase, there are stage-specific files. A &lt;code&gt;common&lt;/code&gt; folder houses cross-cutting rules applicable to all AI-DLC phases and stages, such as “human-in-the-loop” behavior.&lt;/p&gt; 
&lt;p&gt;The AI-DLC workflow is self-guided and provides us with a clear understanding of what to expect next. It informs us that it will enter the AI-DLC Inception phase next, starting with the Workspace Detection stage within it.&lt;/p&gt; 
&lt;div id="attachment_24515" style="width: 2896px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24515" loading="lazy" class="size-full wp-image-24515" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/03_AI-DLC_problem_statement-1.png" alt="Screenshot of Amazon Q interface showing AI-DLC workflow initialization. The left sidebar displays a file tree with various workflow stages and configuration files. The main chat area shows a problem statement input box at the top with placeholder text 'Using AI-DLC let's build a web application to solve the most pressing problem.' Below is a welcome message explaining AI-DLC (AI-Driven Development Life Cycle) and its capabilities. Three callout annotations highlight: 1) The core workflow dynamically utilizes detailed instructions for different phases and stages, loading and unloading them as required; 2) AI-DLC workflow kicks off with a welcome message and precise overview; 3) The workflow is structured to load a single Q Developer Rule file, one workflow.md, which then dynamically loads and unloads the stage definitions housed in the 'aws-aidlc-rule-details' folder as needed." width="2886" height="1906"&gt;
 &lt;p id="caption-attachment-24515" class="wp-caption-text"&gt;Figure 3: User enters high level problem statement in Amazon Q. AI-DLC workflow is triggered.&lt;/p&gt;
&lt;/div&gt; 
&lt;h2&gt;Step 5: Workspace Detection&lt;/h2&gt; 
&lt;p&gt;We enter the &lt;strong&gt;Workspace Detection&lt;/strong&gt; stage within the &lt;strong&gt;Inception&lt;/strong&gt; phase. In this stage, AI-DLC analyzes the current workspace and determines whether it’s a greenfield (new) or brownfield (existing) application. Since AI-DLC is an adaptive workflow, it decides whether the next stage will be Reverse Engineering (for brownfield projects) or Requirements Analysis (for greenfield projects).&lt;/p&gt; 
&lt;p&gt;Since we’re building a greenfield application and there’s no existing code in the workspace to reverse engineer, the workflow will guide us to Requirements Analysis next. If we were working on a brownfield application, the workflow would have performed Reverse Engineering first and then moved on to Requirements Analysis. This demonstrates the adaptive nature of the workflow.&lt;/p&gt; 
&lt;p&gt;Figure 4 illustrates the process in our IDE when we enter this stage. The workflow requests our permission to create an &lt;code&gt;aidlc-docs&lt;/code&gt; folder under the project root. This folder will serve as the repository for all the artifacts generated by AI-DLC during the workflow execution. Subsequently, the workflow generates two files within this folder: &lt;code&gt;aidlc-state.md&lt;/code&gt; and &lt;code&gt;audit.md&lt;/code&gt;. The purpose of these files is explained in Figure 4.&lt;/p&gt; 
&lt;div id="attachment_24516" style="width: 2912px" class="wp-caption alignnone"&gt;
 &lt;img aria-describedby="caption-attachment-24516" loading="lazy" class="size-full wp-image-24516" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/04_AI-DLC_workspace_detection-1.png" alt="Screenshot of AI-DLC workspace detection phase showing the Amazon Q chat interface. The left sidebar displays the file tree with an 'aidlc-doc' folder highlighted. The main chat area shows the Inception Phase - Workspace Detection stage with explanatory text about analyzing the workspace. Five callout annotations explain: 1) Workflow creates aidlc-doc directory for storing AI-DLC generated artifacts; 2) The workflow tracks its progress in aidlc-metadata.json for error recovery and session continuity; 3) The audit.md file stores user's prompts; 4) Workflow highlights the AI-DLC phase and stage name with a clear heading for easy tracking; 5) Workflow loads detailed stage-level behavior files dynamically such that they don't consume the context window statically. At the bottom, a user approval prompt shows 'mkdir -p /Users/[...]/NewConsumerPortal/aidlc-docs' with the user asked to approve the 'mkdir' command." width="2902" height="1938"&gt;
 &lt;p id="caption-attachment-24516" class="wp-caption-text"&gt;Figure 4: Workspace Detection&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;The Workspace Detection stage finishes quickly because this is a greenfield project. The workflow then guides us into the Requirements Analysis stage within the Inception phase.&lt;/p&gt; 
&lt;h2&gt;Step 6: Requirements Analysis&lt;/h2&gt; 
&lt;p&gt;The workflow has progressed to the Requirements Analysis stage, where we will define the application requirements. The AI-DLC workflow presented our high-level problem statement to Q Developer, which then responded with several requirements clarification questions, as illustrated in Figure 5.&lt;/p&gt; 
&lt;p&gt;Several AI-DLC rules came into play at this stage. One rule instructed Amazon Q to avoid making assumptions on the user’s behalf and instead ask clarifying questions. Since LLMs tend to make assumptions and rush towards outcomes, they must be explicitly instructed to align with the engineering rigor of the AI-DLC methodology. To achieve this, the Q Developer presented several requirements clarification questions in &lt;code&gt;requirement-verification-questions.md&lt;/code&gt; file and asked us to answer them inline in the file.&lt;/p&gt; 
&lt;p&gt;Another AI-DLC rule instructed the Q Developer to present questions in multiple-choice format and always include an open-ended option (“Other”) to enhance user convenience and provide flexibility in answering.&lt;/p&gt; 
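&lt;p&gt;A question in the &lt;code&gt;requirement-verification-questions.md&lt;/code&gt; file might look like the following. This is a hypothetical example; because of the probabilistic nature of LLMs, the exact questions and wording vary between runs:&lt;/p&gt;

```markdown
## Question 3: How should players move items across the river?

- [ ] A. Drag and drop items onto the boat
- [ ] B. Click an item to toggle which bank it is on
- [ ] C. Type commands into a text box
- [ ] D. Other (please describe): ____________

**Answer:**
```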
&lt;p&gt;As shown in Figure 5, Amazon Q has asked us about the desired puzzle variant, such as the Classic Farmer, Fox, Chicken, and Grain puzzle or other popular variations. Additionally, it has asked us questions about user interaction methods, score persistence across multiple players, and the creation of a leaderboard.&lt;/p&gt; 
&lt;p&gt;These questions are essential for achieving our desired application outcome. Our responses to these questions will determine the final product. While we didn’t explicitly specify this level of detail in our high-level problem statement, AI-DLC has delegated detailed requirements elaboration to Amazon Q, but we still retain control over what gets built.&lt;/p&gt; 
&lt;div id="attachment_24517" style="width: 2934px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24517" loading="lazy" class="size-full wp-image-24517" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/05_AI-DLC_requirements_analysis-1.png" alt="Screenshot of AI-DLC Requirements Analysis phase showing a split view. The left side displays a requirements clarification questions markdown file with multiple-choice questions about the Kuer Crossing Portal, including sections about user crossing portal variants, primary user interaction methods, and data storage preferences. The right side shows the Amazon Q chat interface with the Inception Phase - Requirements Analysis heading. Two callout annotations highlight: 1) AI-DLC asks questions in multiple choice format, with an 'Other' option that leaves an open-ended fill-in-the-blank when the answer doesn't match the predefined options; 2) AI-DLC generates config.requirements-clarification-questions.md file containing requirements clarification questions, with questions placed in an MD file where the user can respond inline in the file, using 'Answered' to indicate completion. The chat shows instructions for answering questions to clarify requirements." width="2924" height="1942"&gt;
 &lt;p id="caption-attachment-24517" class="wp-caption-text"&gt;Figure 5: Requirements Analysis&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;We answer all the questions in &lt;code&gt;requirement-verification-questions.md&lt;/code&gt; file and enter “Done” in the chat window.&lt;/p&gt; 
&lt;p&gt;Amazon Q processes our responses. The AI-DLC workflow is designed to catch human errors: it checks whether we’ve answered all the questions and flags any confusion, contradiction, or ambiguity in our answers for follow-up questions. AI-DLC adheres to high standards and ensures that we don’t proceed to the next step until we and Amazon Q are fully aligned on the requirements.&lt;/p&gt; 
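&lt;p&gt;Conceptually, part of this validation is just a scan of the answers file for unanswered items. A toy version of that check (the file format here is hypothetical, not the actual AI-DLC implementation) might look like:&lt;/p&gt;

```python
# Toy version of the answer-completeness check AI-DLC performs before
# moving on: flag any question left without an answer. The markdown
# layout assumed here is hypothetical.
import re

def unanswered_questions(markdown: str) -> list[str]:
    """Return headings of questions whose **Answer:** line is blank."""
    flagged = []
    # Split on question headings, keeping the heading text with each chunk.
    for match in re.finditer(r"## (Question[^\n]*)\n(.*?)(?=\n## |\Z)",
                             markdown, flags=re.S):
        heading, body = match.groups()
        answer = re.search(r"\*\*Answer:\*\*\s*(\S?)", body)
        if not answer or not answer.group(1):
            flagged.append(heading)
    return flagged
```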
&lt;p&gt;Since we answered all the questions and there were no contradictions in our answers, the workflow continues and generates a comprehensive &lt;code&gt;requirements.md&lt;/code&gt; document, as shown in Figure 6.&lt;/p&gt; 
&lt;div id="attachment_24518" style="width: 2938px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24518" loading="lazy" class="size-full wp-image-24518" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/06_AI-DLC_requirements_review-1.png" alt="Screenshot showing AI-DLC requirements review phase with split view. The left side displays a Requirements Document for the River Crossing Puzzle Web Application, including Intent Analysis Summary, User Request details, Request Type, and Functional Requirements with Core Puzzle Functionality items (FR-001 through FR -006) describing game features like classic farmer puzzle, timer display, move tracking, puzzle state validation, and victory messages. The right side shows the Amazon Q chat interface with 'Requirements Analysis Complete' heading, displaying project details including Puzzle Type (Classic Farmer, Fox, Chicken, and Grain river crossing puzzle), Technology (React-based modern web application), and Target Devices (web browsers only). Three callout annotations highlight: 1) Requirements Analysis phase complete; 2) Requirements document generated; 3) User may Request Changes, Add User Stories for Approval, or Approve &amp;amp; Continue, with a REVIEW SAFETY note warning users to review requirements and approve to continue, with options to request changes or add modifications if required." width="2928" height="1894"&gt;
 &lt;p id="caption-attachment-24518" class="wp-caption-text"&gt;Figure 6: Requirements Review&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;The workflow prompts us to review the &lt;code&gt;requirements.md&lt;/code&gt; document and decide on the next step. If we’re not aligned on the requirements, we can prompt Amazon Q to help us achieve alignment. We can then iterate on the requirements until we’re fully aligned. Once we’re fully aligned, we prompt AI-DLC to progress to the next stage.&lt;/p&gt; 
&lt;p&gt;Given the adaptive nature of the AI-DLC workflow, Amazon Q has recommended that this application is simple enough to skip the User Stories stage. If we felt otherwise, we could override the model’s recommendation. In this case, we agree with Q’s recommendation and therefore enter “Continue” in the chat window.&lt;/p&gt; 
&lt;p&gt;The workflow will enter Workflow Planning stage next.&lt;/p&gt; 
&lt;h2&gt;Step 7: Workflow Planning&lt;/h2&gt; 
&lt;p&gt;With our requirements established, we proceed to the Workflow Planning stage. In this stage, the workflow uses the requirements context to plan which AI-DLC stages to execute in order to build our application to the requirements specification.&lt;/p&gt; 
&lt;p&gt;Figure 7 illustrates the workflow planning stage in Q Developer. The workflow has generated an &lt;code&gt;execution-plan.md&lt;/code&gt; file that outlines the recommended stages for execution and those that should be skipped.&lt;/p&gt; 
&lt;p&gt;The workflow planning process is highly contextual to the requirements. During requirements analysis, we decided to develop a simple river crossing puzzle application, consisting of a single HTML file, without a backend, leaderboard, or persistence. Consequently, Amazon Q recommends that we skip all the conditional stages, such as User Stories, Application Design, Units of Work Planning, and so on, and proceed directly to the Code Generation Planning stage in the Construction phase.&lt;/p&gt; 
&lt;p&gt;Figure 7 also represents the recommended workflow graphically, indicating the stages that will be executed and those that will be skipped.&lt;/p&gt; 
&lt;div id="attachment_24520" style="width: 2904px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24520" loading="lazy" class="size-full wp-image-24520" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/07_AI-DLC_workflow_planning-1.png" alt="Screenshot of AI-DLC Workflow Planning phase showing an Execution Plan document on the left with Detailed Analysis Summary including user-facing changes, brownfield changes, API changes, and NFR changes, plus Risk Assessment. Below is a Workflow Visualization flowchart diagram showing the workflow stages from Inception through Construction to Operations phases. The right side shows the Amazon Q chat with 'Workflow Planning Complete' heading. Three callout annotations highlight: 1) AI -DLC workflow has analyzed requirements and based on the problem complexity has proposed a set of stages to execute in the workflow; 2) Problem is simple enough that AI-DLC is proposing to skip the detailed optional stages; 3) User may Request Changes, Add back skipped stages or Approve &amp;amp; Continue." width="2894" height="1900"&gt;
 &lt;p id="caption-attachment-24520" class="wp-caption-text"&gt;Figure 7: Workflow Planning&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;Since we’ve opted for a straightforward web UI app in this blog post for brevity, the execution plan suggested by AI-DLC aligns with our objectives, and we enter “Continue” in Q’s chat session. If we weren’t aligned with the recommended plan, we’d have prompted Q with our concerns and iterated over revised plans until one matched our preferences. Following the approved plan, the workflow transitions into the Construction phase and directly into its Code Generation Planning stage.&lt;/p&gt; 
&lt;h2&gt;Step 8: Code Generation Planning&lt;/h2&gt; 
&lt;p&gt;AI-DLC prioritizes planning over rushing to outcomes. This approach aligns with the concept of human-in-the-loop behavior, allowing us to detect issues early on, provide feedback on the plan, and prevent wrong assumptions from propagating further. Before we proceed with actual Code Generation, we undergo Code Generation Planning.&lt;/p&gt; 
&lt;p&gt;During Code Generation Planning, AI-DLC creates a detailed, numbered plan. It analyzes the requirements and design artifacts, breaking down the process into explicit steps for generating business logic, the API layer, the data layer, tests, documentation, and deployment files.&lt;/p&gt; 
&lt;p&gt;The plan is documented in a &lt;code&gt;{unit-name}-code-generation-plan.md&lt;/code&gt; file, complete with check boxes. This ensures transparency, allowing users to see what will be built. It also provides control, enabling users to modify the plan. Additionally, it maintains quality by ensuring comprehensive coverage of code, tests, and documentation.&lt;/p&gt; 
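&lt;p&gt;Because the plan file uses standard markdown check boxes, progress tracking reduces to counting them. A minimal sketch, with the file format assumed from the screenshots rather than taken from the rule files:&lt;/p&gt;

```python
# Minimal progress tracker over a markdown plan file that uses
# "- [ ]" / "- [x]" check boxes. The format is assumed from the
# screenshots, not from the AI-DLC rule files.
def plan_progress(plan_md: str) -> tuple[int, int]:
    """Return (completed, total) check box counts for a plan file."""
    done = todo = 0
    for line in plan_md.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("- [x]") or stripped.startswith("- [X]"):
            done += 1
        elif stripped.startswith("- [ ]"):
            todo += 1
    return done, done + todo
```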
&lt;p&gt;Figure 8 illustrates the AI-DLC’s code generation plan. The proposed workflow comprises eight steps, starting with creating an HTML structure and progressing to adding styling, game logic, and concluding with testing and documentation.&lt;/p&gt; 
&lt;div id="attachment_24521" style="width: 2920px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24521" loading="lazy" class="size-full wp-image-24521" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/08_AI-DLC_code_gen_planning-1.png" alt="Screenshot of AI-DLC Code Generation Planning showing a Code Generation Plan document for River Crossing Puzzle on the left, with Unit Context listing HTML Structure, CSS Styling, and JavaScript files, followed by Unit Generation Steps including Step 1: HTML Structure Generation, Step 2: CSS Styling Generation, and Step 3 : Core Game Logic Generation with detailed checkboxes for each step. The right side shows Amazon Q chat with code generation plan details. Three callout annotations highlight: 1) The plan doc contains to-do items for AI-DLC to execute. These checkboxes get completed when the task is done; 2) This is how AI-DLC workflow persists and tracks progress state; 3) AI-DLC has proposed an 8-step code generation plan with checkboxes and review prompts, and User may Request Changes or Approve &amp;amp; Continue." width="2910" height="1902"&gt;
 &lt;p id="caption-attachment-24521" class="wp-caption-text"&gt;Figure 8: Code Generation Planning&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;The code generation plan appears reasonable to us. We will proceed to the Code Generation stage by entering “Continue” in Q’s chat session.&lt;/p&gt; 
&lt;h2&gt;Step 9: Code Generation&lt;/h2&gt; 
&lt;p&gt;The Code Generation stage executes the Code Generation Plan we approved in the previous step. It generates actual code artifacts step-by-step, including business logic, APIs, data layers, tests, and documentation. Completed steps are marked with check boxes, progress is tracked, and story traceability is ensured before presenting the generated code for user approval.&lt;/p&gt; 
&lt;p&gt;Figure 9 illustrates that the Code Generation stage has been completed. We are now reviewing a single &lt;code&gt;index.html&lt;/code&gt; file generated with embedded styling and JavaScript consistent with our preference specified in &lt;code&gt;requirements.md&lt;/code&gt;.&lt;/p&gt; 
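&lt;p&gt;At the heart of the generated game logic is the puzzle’s safety rule: the fox cannot be left alone with the chicken, nor the chicken with the grain, unless the farmer is present. The generated app implements this in JavaScript inside &lt;code&gt;index.html&lt;/code&gt;; the following is our own Python restatement of the same rule for clarity:&lt;/p&gt;

```python
# The classic puzzle's safety rule, restated in Python for clarity
# (the generated index.html implements the equivalent in JavaScript).
def is_safe(bank: set[str]) -> bool:
    """A bank is safe if no predator is left alone with its prey."""
    if "farmer" in bank:
        return True
    if {"fox", "chicken"} <= bank:
        return False      # fox eats chicken
    if {"chicken", "grain"} <= bank:
        return False      # chicken eats grain
    return True

def state_is_valid(left: set[str], right: set[str]) -> bool:
    """Both banks must be safe for a game state to be legal."""
    return is_safe(left) and is_safe(right)
```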
&lt;p&gt;The workflow provides a summary of the activities performed during the Code Generation phase.&lt;/p&gt; 
&lt;div id="attachment_24522" style="width: 2920px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24522" loading="lazy" class="size-full wp-image-24522" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/09_AI-DLC_code_generation-1.png" alt="Screenshot of AI-DLC Code Generation phase showing generated HTML code on the left with embedded styling and JavaScript for the River Crossing Puzzle application. The right side shows Amazon Q chat with 'Code Generation Complete - river-crossing-puzzle' heading and a list of generated artifacts including HTML file, CSS interface, drag-and-drop interface, game logic, and testing services. Two callout annotations highlight: 1) The generated code is an HTML file with embedded styling and JavaScript; 2) We have specified during requirements analysis phase that we want a single-file index.html file implementation; 3) Code generation has been completed, and a summary of the generated artifacts is provided." width="2910" height="1910"&gt;
 &lt;p id="caption-attachment-24522" class="wp-caption-text"&gt;Figure 9: Code Generation&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;We’ll test our newly created application shortly. While this simple puzzle app is straightforward to test manually, for complex applications we generate build and test instructions using AI-DLC.&lt;/p&gt; 
&lt;p&gt;We’ll enter “Continue” in the chat window and move into the final Build and Test stage of the Construction phase.&lt;/p&gt; 
&lt;h2&gt;Step 10: Build and Test&lt;/h2&gt; 
&lt;p&gt;We’ve reached the final stage of the AI-DLC Construction phase: the Build and Test stage. During this stage, we create comprehensive instruction files that guide the build and packaging of the project, and document the necessary testing layers. These layers include unit tests (validating generated code), integration tests (checking unit interactions), performance tests (load/stress testing), and additional tests as required (security, contract, e2e).&lt;/p&gt; 
&lt;p&gt;The generated build instructions include dependencies and commands, test execution steps with expected results, and a summary document that provides an overview of the overall build/test status and the project’s readiness for deployment.&lt;/p&gt; 
&lt;p&gt;Figure 10 illustrates the documentation generated during this stage.&lt;/p&gt; 
&lt;div id="attachment_24523" style="width: 2914px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24523" loading="lazy" class="size-full wp-image-24523" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/10_AI-DLC_build_and_test-2.png" alt="Screenshot of AI-DLC Build and Test phase showing a Build and Test Summary document on the left with Build Status (Build Tool, Build Status, Build Artifacts, Build Warnings) and Test Execution Summary including Unit Tests, Integration Tests, and Performance Tests sections with checkmarks and failure indicators. The right side shows Amazon Q chat with build and test completion status and project summary. Two callout annotations highlight: 1) Build and Test Complete! Build and Test instructions have been documented; 2) The AI-DLC workflow has concluded with a comprehensive summary of all completed stages and generated artifacts." width="2904" height="1912"&gt;
 &lt;p id="caption-attachment-24523" class="wp-caption-text"&gt;Figure 10: Build and Test&lt;/p&gt;
&lt;/div&gt; 
&lt;p&gt;The AI-DLC workflow has now concluded.&lt;/p&gt; 
&lt;h2&gt;Let’s Solve the Puzzle!&lt;/h2&gt; 
&lt;p&gt;We open &lt;code&gt;index.html&lt;/code&gt; in a web browser to access our newly created River Crossing Puzzle application. As shown in Figure 11, we see our graphical web UI.&lt;/p&gt; 
&lt;p&gt;During requirements analysis, we chose a straightforward user interface built with HTML, CSS, and JavaScript (without any frameworks), as reflected in Figure 11. Your display may vary due to the probabilistic nature of LLMs and the requirements choices you made.&lt;/p&gt; 
&lt;p&gt;We attempt to solve the puzzle and find that it works as expected.&lt;/p&gt; 
&lt;div id="attachment_24524" style="width: 2882px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24524" loading="lazy" class="size-full wp-image-24524" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/24/11_AI-DLC_river_crossing_app-1.png" alt="Side-by-side screenshots of the River Crossing Puzzle web application showing two game states. The left screenshot shows the initial state with a farmer on the left bank, and fox, chicken, and grain items listed below, with a blue river in the center and right bank on the right. The right screenshot shows a game state after moves with the farmer on the right bank and a success message 'Congratulations! You won in 7 moves!' displayed at the bottom. Both screens have a yellow 'Start Over' button and show move counts." width="2872" height="1308"&gt;
 &lt;p id="caption-attachment-24524" class="wp-caption-text"&gt;Figure 11: River Crossing Puzzle Web App&lt;/p&gt;
&lt;/div&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;This post shows how AWS’s open-source AI-DLC workflow, guided by Amazon Q Developer’s Project Rules feature, helps developers build applications with structured oversight and transparency.&lt;/p&gt; 
&lt;p&gt;Using a River Crossing Puzzle web application as an example, the walk-through illustrates how AI-DLC methodology adapts its rigor based on project complexity, skipping unnecessary stages for simple applications while maintaining comprehensive processes for complex projects. Throughout each stage, AI-DLC enforces “human-in-the-loop” behavior, requiring user approval at critical checkpoints, asking clarifying questions, and maintaining complete audit trails for traceability.&lt;/p&gt; 
&lt;p&gt;The exercise successfully demonstrates how AI-DLC balances AI automation with human oversight, enhancing productivity without sacrificing quality or control. By following this structured, repeatable methodology, development teams can leverage generative AI’s capabilities while ensuring humans remain in charge of architectural decisions and implementation approaches. This framework provides the necessary guardrails for responsible and effective AI-assisted software development across projects of varying complexity.&lt;/p&gt; 
&lt;h2&gt;Cleanup&lt;/h2&gt; 
&lt;p&gt;We did not create any AWS resources in this walk-through, so no AWS cleanup is needed. You may clean up your project workspace at your discretion.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Ready to get started?&lt;/strong&gt; Visit our &lt;a href="https://github.com/awslabs/aidlc-workflows"&gt;GitHub repository&lt;/a&gt; to download the AI-DLC workflow and join the &lt;a href="https://ai-nativebuilders.org/"&gt;AI-Native Builders Community&lt;/a&gt; to contribute to the future of software development.&lt;/p&gt; 
&lt;p&gt;About the authors:&lt;/p&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="alignleft wp-image-11636" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/RajaProfile.jpeg" alt="" width="110" height="140"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Raja SP&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Raja is a Principal Solutions Architect at AWS, where he leads Developer Transformation Programs. He has worked with more than 100 large customers, helping them design and deliver mission critical systems built on modern architectures, platform engineering practices, and Amazon inspired operating models. As generative AI reshapes the software development landscape, Raja and his team created the AI Driven Development Lifecycle (AI-DLC) — an end to end, AI native methodology that re-imagines how large teams collaboratively build production-grade software in the AI era.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="alignleft wp-image-11636" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/raj.png" alt="" width="110" height="140"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Raj Jain&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Raj is a Senior Solutions Architect, Developer Specialist at AWS. Prior to this role, Raj worked as a Senior Software Development Engineer at Amazon, where he helped build the security infrastructure underlying the Amazon platform. Raj is a published author in the Bell Labs Technical Journal, and has also authored IETF standards, AWS Security blogs, and holds twelve patents&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/image-11.jpg" alt="" width="150"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Siddhesh Jog&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Siddhesh is a Senior Solutions Architect at AWS. He has worked in multiple industries in a wide variety of roles and is passionate about all things technology. At AWS Siddhesh is most excited to help customers transition to the AI Driven Development Lifecycle and enable them to build applications rapidly in a secure, complaint and cost efficient cloud environment.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img class="alignleft wp-image-11636" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2024/06/28/wilmatos.jpeg" alt="" width="150"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Will Matos&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Will Matos is a Principal Specialist Solutions Architect with AWS’s Next Generation Developer Experience (NGDE) team, revolutionizing developer productivity through Generative AI, AI-powered chat interfaces, and code generation. With 27 years of technology, AI, and software development experience, he collaborates with product teams and customers to create intelligent solutions that streamline workflows and accelerate&amp;nbsp;software development cycles. A thought leader engaging early adopters, Will bridges innovation and real-world needs .&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Open-Sourcing Adaptive Workflows for AI-Driven Development Life Cycle (AI-DLC)</title>
		<link>https://aws.amazon.com/blogs/devops/open-sourcing-adaptive-workflows-for-ai-driven-development-life-cycle-ai-dlc/</link>
		
		<dc:creator><![CDATA[Will Matos]]></dc:creator>
		<pubDate>Sat, 29 Nov 2025 20:54:04 +0000</pubDate>
				<category><![CDATA[Amazon Q]]></category>
		<category><![CDATA[Amazon Q Developer]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Developer]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">78356f1a4d514bc923f2928349a7b8d29f4847c2</guid>

					<description>AI-Driven Development Life Cycle (AI-DLC) holds the promise of unlocking the full potential of AI in software development. By emphasizing AI-led workflows and human-centric decision-making, AI-DLC can deliver velocity and quality. However, realizing these gains hinges on how organizations effectively integrate AI into their engineering workflows. Through our work with engineering teams across industries, we […]</description>
										<content:encoded>&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/devops/ai-driven-development-life-cycle/"&gt;AI-Driven Development Life Cycle (AI-DLC)&lt;/a&gt; holds the promise of unlocking the full potential of AI in software development. By emphasizing AI-led workflows and human-centric decision-making, AI-DLC can deliver velocity and quality. However, realizing these gains hinges on how organizations effectively integrate AI into their engineering workflows.&lt;/p&gt; 
&lt;p&gt;Through our work with engineering teams across industries, we have identified three recurring challenges that consistently limit the effectiveness of AI in accelerating modern software development. The first is one-size-fits-all workflows, which force every project through the same rigid sequence of steps. The second is a lack of flexible depth in workflow stages, which leads to over-engineering or insufficient rigor. The third is tools that over-automate, unintentionally diverting humans away from critical validation and oversight responsibilities.&lt;/p&gt; 
&lt;p&gt;Achieving true, sustainable productivity requires the process and AI coding agents to become &lt;strong&gt;adaptive to context&lt;/strong&gt;, &lt;strong&gt;flexible in depth&lt;/strong&gt;, and &lt;strong&gt;collaborative by design&lt;/strong&gt;. In this blog, we’ll show you how AI-DLC’s core principles address these three challenges, transforming them from productivity blockers into opportunities for adaptive, human-centered development. We’ll describe how AI-DLC enables workflows that adapt to the problem at hand by intelligently selecting stages, modulating depth, and embedding human oversight at every critical decision point.&lt;/p&gt; 
&lt;p&gt;We will also introduce our &lt;a href="https://github.com/awslabs/aidlc-workflows"&gt;&lt;strong&gt;open-source&lt;/strong&gt;&amp;nbsp;&lt;strong&gt;Amazon Q Developer/Kiro Rules implementation&lt;/strong&gt;&lt;/a&gt;, which brings AI-DLC principles to life through adaptive workflow scaffolds. This allows you to start applying these principles in your own projects and experience AI-native development that accelerates delivery without compromising engineering discipline or human judgment.&lt;/p&gt; 
&lt;h2&gt;How does AI-DLC address these challenges?&lt;/h2&gt; 
&lt;p&gt;Let’s explore how AI-DLC addresses these challenges.&lt;/p&gt; 
&lt;h3&gt;1. The “One-Size-Fits-All” Workflow Problem&lt;/h3&gt; 
&lt;p&gt;Software development has never been a linear process. In practice, different projects follow distinct pathways with their own checkpoints and deliverables. Consider these examples:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;A simple defect fix doesn’t require elaborate requirements analysis and planning&lt;/li&gt; 
 &lt;li&gt;A pure infrastructure porting project doesn’t warrant application design with domain modeling&lt;/li&gt; 
 &lt;li&gt;A new feature or service addition demands different steps than applying a security patch&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Yet, many modern Agentic coding tools provide &lt;strong&gt;hard-wired&lt;/strong&gt;, &lt;strong&gt;opinionated workflows&lt;/strong&gt; that ignore this diversity. Regardless of intent or scope, every project is forced through the same rigid sequence of steps—even when some add little or no value. This rigidity introduces friction, wastes time, and reduces productivity. The result: artificial ceremonies, unnecessary artifacts, redundant approvals, and process overhead that impede velocity.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;How AI-DLC addresses this challenge:&lt;/strong&gt;&lt;br&gt;AI-DLC addresses this challenge through Principle 10 (&lt;strong&gt;No Hard-Wired, Opinionated SDLC Workflows&lt;/strong&gt;) as defined in the &lt;a href="https://prod.d13rzhkk8cj2z0.amplifyapp.com/"&gt;AI-DLC Method Definition Paper&lt;/a&gt;.&lt;/p&gt; 
&lt;div style="border: 1px solid black;padding: 5px"&gt; 
 &lt;p&gt;“&lt;em&gt;AI-DLC avoids prescribing opinionated workflows for different development pathways (such as new system development, refactoring, defect fixes, or microservice scaling). Instead, it adopts a truly AI-First approach where AI recommends the Level 1 Plan based on the given pathway intention.&lt;/em&gt;“&lt;/p&gt; 
&lt;/div&gt; 
&lt;h3&gt;2. Lack of Flexible Depth Within Each Stage&lt;/h3&gt; 
&lt;p&gt;True adaptivity must go beyond the breadth of a workflow and extend into its &lt;strong&gt;depth and intensity&lt;/strong&gt;. This is how human experts intuitively plan software projects today.&lt;/p&gt; 
&lt;p&gt;Even when workflows are flexible, many tools fail to &lt;strong&gt;modulate the depth of engagement&lt;/strong&gt; at each stage. For example, building a lightweight utility function doesn’t require full-scale Domain-Driven Design or detailed architectural modeling. When an AI coding agent compels teams to follow these steps regardless of need, the consequence is wasted effort and an &lt;strong&gt;over-engineered product&lt;/strong&gt;. Developers spend cycles reviewing artifacts as the tools dictate rather than delivering business value.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;How AI-DLC addresses this challenge:&lt;/strong&gt;&lt;br&gt;Through the same Principle 10, AI-DLC adapts both the breadth (choice of stages) and the depth of each stage to match the complexity of the intent and context. For example, the complexity of the requirements determines whether a conceptual design is sufficient or whether a full architectural deep dive is required in the Design stage.&lt;/p&gt; 
&lt;p&gt;Humans validate and adjust this AI-proposed breadth and depth, ensuring that each stage’s rigor matches the scope of the challenge. This elasticity—balancing breadth and depth—is essential for sustaining true velocity without sacrificing engineering discipline.&lt;/p&gt; 
&lt;h3&gt;3. Tools that Reduce the Emphasis on Human Oversight&lt;/h3&gt; 
&lt;p&gt;As AI tools automate more of the Software Development Life Cycle (SDLC), a new risk has emerged: &lt;strong&gt;process atrophy&lt;/strong&gt;. Developers, excited by automation, often drift into passive execution—allowing AI to “decide everything.” The result is a loss of reflection, weakened oversight, and erosion of shared understanding. AI tools must not only automate work but also &lt;strong&gt;amplify the significance of human judgment&lt;/strong&gt;. They should remind practitioners that “human in the loop” is not a checkbox—it is the cornerstone of trust, accountability, and correctness in AI-native development. Equally critical are the &lt;strong&gt;rituals and rhythms&lt;/strong&gt; that sustain collaborative engineering.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;How AI-DLC addresses this challenge:&lt;/strong&gt;&lt;br&gt;AI-DLC addresses this challenge by requiring a collaborative human-in-the-loop cycle at every stage of the workflow. In this loop, AI generates a plan to execute a task, and relevant stakeholders assemble, review, and validate it.&lt;/p&gt; 
&lt;p&gt;These rituals, defined as &lt;em&gt;Mob Elaboration&lt;/em&gt; and &lt;em&gt;Mob Construction&lt;/em&gt; in AI-DLC, ensure that AI’s suggestions are not blindly accepted. Approved plans are executed, and stakeholders again review and validate the final artifacts. The AI-DLC workflow records every human action and approval, embedding reflection to ensure that humans remain the compass, guiding AI’s acceleration.&lt;/p&gt; 
&lt;div id="attachment_24599" style="width: 610px" class="wp-caption aligncenter"&gt;
 &lt;img aria-describedby="caption-attachment-24599" class="wp-image-24599" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/aidlc-image.png" alt="Circular workflow diagram showing AI-DLC collaboration cycle. Starting at top: Humans Provide Task (orange person icon) , arrow to AI Creates Plan and Seeks Clarification (blue brain icon), arrow to Humans Provide Clarification (orange person icon), arrow to AI Refines Plan (blue brain icon), arrow to Humans Approve Plan (orange person icon), arrow to AI Executes Plan (blue brain icon), arrow to Humans Verify Outcome (orange person icon), completing the cycle back to the start. The diagram illustrates iterative human-AI collaboration with humans making decisions and AI performing execution tasks." width="600"&gt;
 &lt;p id="caption-attachment-24599" class="wp-caption-text"&gt;Figure 1: AI-DLC workflow: Humans decide and validate, AI plans and executes.&lt;/p&gt;
&lt;/div&gt; 
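&lt;p&gt;The cycle in Figure 1 can be sketched as a minimal control loop. The following Python sketch is purely illustrative: the function names and callbacks are hypothetical stand-ins, not part of any AI-DLC tooling:&lt;/p&gt;

```python
# Conceptual sketch of the Figure 1 cycle: humans decide, AI plans and executes.
# All names here are hypothetical illustrations, not AI-DLC APIs.
from typing import Callable

def ai_dlc_cycle(task: str,
                 ai_plan: Callable[[str], str],
                 human_approves: Callable[[str], bool],
                 ai_execute: Callable[[str], str],
                 human_verifies: Callable[[str], bool],
                 max_revisions: int = 3) -> str:
    """Run one human-in-the-loop iteration: plan -> approve -> execute -> verify."""
    plan = ai_plan(task)
    for _ in range(max_revisions):
        if human_approves(plan):          # human checkpoint: approve the plan
            outcome = ai_execute(plan)    # AI performs the approved work
            if human_verifies(outcome):   # human checkpoint: validate the result
                return outcome
        plan = ai_plan(task + " (revised after feedback)")  # AI refines the plan
    raise RuntimeError("No human-approved outcome within revision limit")

# Example with stand-in callbacks that auto-approve:
result = ai_dlc_cycle(
    "fix login defect",
    ai_plan=lambda t: f"plan for: {t}",
    human_approves=lambda p: True,
    ai_execute=lambda p: f"executed {p}",
    human_verifies=lambda o: True,
)
print(result)  # executed plan for: fix login defect
```

&lt;p&gt;In a real workflow, the approval and verification callbacks would block on actual stakeholder input; the point of the sketch is that execution cannot proceed past either checkpoint without a human decision.&lt;/p&gt;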
&lt;p&gt;Effective tooling must therefore emphasize:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Promoting stakeholder collaboration:&lt;/strong&gt; The system should explicitly call for collaborative rituals involving stakeholders&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Auditability:&lt;/strong&gt; Every AI-generated plan and artifact should surface rationale and invite review, recording every human oversight and interaction&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Flow awareness:&lt;/strong&gt; Tools should detect when automation races ahead of human validation and deliberately slow down to emphasize critical checkpoints&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The goal is not to suppress automation but to embed critical human ownership.&lt;/p&gt; 
&lt;h2&gt;From Principles to Practice&lt;/h2&gt; 
&lt;p&gt;The ideas we outlined — adaptive workflows, flexible depth, and embedded human oversight — are compelling in theory and validated by all engineering teams we’ve engaged. The critical question is: &lt;strong&gt;How do we operationalize these ideas into practice without reintroducing the rigidity we seek to eliminate?&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;One approach is manual prompt engineering: crafting structured prompts that guide AI assistants through the AI-DLC workflow step by step. Each prompt encodes the role AI should assume, the task at hand, the governance requirements, and the audit trail expectations. This structured approach transforms a simple AI interaction into a disciplined workflow that embodies AI-DLC principles.&lt;/p&gt; 
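&lt;p&gt;A structured prompt of this kind might look like the following hypothetical sketch (not an official AI-DLC template):&lt;/p&gt;

```text
ROLE: You are a software architect following the AI-DLC Inception stage.
TASK: Draft a requirements summary for the stated intent.
GOVERNANCE: Ask clarifying questions before proposing a plan; do not
  proceed to design until a human has explicitly approved the requirements.
AUDIT: Record every question, answer, and approval in the project's
  audit log so the decision trail remains inspectable.
```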
&lt;p&gt;This approach, while promising, faces its own limitations. Crafting intricate prompts demands discipline and expertise, posing barriers to widespread adoption. Moreover, humans become responsible for maintaining workflow adaptability, selecting the appropriate prompt at the right moment, and ensuring collaborative checkpoints are honored. This places the burden of orchestration back on practitioners, diverging from our core principle of truly AI-native development, where AI itself drives adaptive decision-making.&lt;/p&gt; 
&lt;p&gt;The question arises: &lt;strong&gt;How can we embed AI-DLC principles directly into the execution layer, making adaptivity and collaboration inherent properties of the system rather than manual responsibilities?&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Steering for Productivity&lt;/h2&gt; 
&lt;p&gt;The answer lies in &lt;strong&gt;workflow scaffolds&lt;/strong&gt;: Rules or Steering customizations for AI coding agents that operationalize AI-DLC principles within the tools while maintaining transparency, auditability, and modifiability. Our implementation uses Rules/Steering Files as the foundation of this execution layer, transforming AI from a passive assistant into an adaptive decision engine.&lt;/p&gt; 
&lt;p&gt;Rather than requiring developers to craft elaborate prompts, AI-Driven development begins with a simple statement of intent. From there, the workflow scaffolds evaluate context, assess complexity, and dynamically construct an appropriate development pathway. The core workflow definition, including a library of stages and decision heuristics for when and how to apply them, empowers AI to continuously tailor the development process to the nature of the work at hand.&lt;/p&gt; 
&lt;p&gt;Each AI-DLC phase (Inception, Construction, Operations) evaluates the depth at which it should execute, resulting in a process that &lt;strong&gt;adapts to the problem rather than forcing the problem to adapt to the process&lt;/strong&gt;. This approach yields several critical outcomes:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Adaptive decisioning:&lt;/strong&gt; The workflow conforms to the problem’s shape, intelligently skipping or deepening stages based on contextual assessment rather than predetermined rules.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Transparent checkpoints:&lt;/strong&gt; Human approvals are embedded at every decision gate, preserving oversight while maintaining velocity. The system doesn’t just automate; it orchestrates collaboration.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;End-to-end traceability:&lt;/strong&gt; Every artifact, decision, and conversation is logged, creating a continuous, inspectable trail of reasoning that supports both accountability and continuous improvement.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;The result is a process that is context-aware, scalable, and self-correcting – capable of supporting everything from a single-line defect fix to a comprehensive system modernization, all while maintaining the rigor and human judgment that define engineering excellence.&lt;/p&gt; 
&lt;h2&gt;Build, Test, and Evolve with Us&lt;/h2&gt; 
&lt;p&gt;We’re &lt;a href="https://github.com/awslabs/aidlc-workflows"&gt;open-sourcing the AI-DLC workflow&lt;/a&gt;, implemented as Amazon Q Rules and Kiro Steering Files, so organizations everywhere can experience AI-DLC in practice and build production-grade systems. &lt;strong&gt;We invite developers, architects, and engineering leaders to:&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Apply the steering rules&lt;/strong&gt; in real-world projects, whether brownfield or greenfield. Refer to our &lt;a href="https://aws.amazon.com/blogs/devops/building-with-ai-dlc-using-amazon-q-developer/"&gt;companion AI-DLC workflow walkthrough blog&lt;/a&gt; for step-by-step instructions on how to build using AI-DLC in Amazon Q Developer.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Observe how the process adapts&lt;/strong&gt; to your project’s size, scope, and intent.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Share your experience&lt;/strong&gt; through our GitHub repository, where you can open issues, propose improvements, and contribute ideas.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;Your feedback will help evolve this into a foundation for AI-native software development – one that accelerates delivery without sacrificing rigor or human judgment. Together, we can redefine what software engineering looks like in the age of AI: not scripted but steered.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;AI-DLC addresses multiple challenges limiting AI’s effectiveness in software development such as rigid workflows, inflexible workflow depth, and tools that reduce human oversight. AI-DLC enables adaptive workflows that intelligently select stages, modulate depth, and embed human oversight at critical decision points. This approach, implemented through open-source tools like Amazon Q Developer Rules and Kiro Steering, accelerates delivery while maintaining engineering discipline and human judgment.&lt;/p&gt; 
&lt;p&gt;AI-DLC emphasizes human oversight and collaboration in AI-driven software development. Workflow scaffolds embed AI-DLC principles into the execution layer, enabling adaptive decision-making, transparent checkpoints, and end-to-end traceability. Open-sourcing the AI-DLC workflow allows organizations to experience AI-DLC in practice and contribute to its evolution.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Ready to get started?&lt;/strong&gt; Visit our &lt;a href="https://github.com/awslabs/aidlc-workflows"&gt;GitHub repository&lt;/a&gt; to download the AI-DLC workflow and join the AI-Native Builders Community to contribute to the future of software development.&lt;/p&gt; 
&lt;p&gt;About the authors:&lt;/p&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt; 
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/RajaProfile.jpeg" alt="" width="120" height="160"&gt; 
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Raja SP&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Raja is a Principal Solutions Architect at AWS, where he leads Developer Transformation Programs. He has worked with more than 100 large customers, helping them design and deliver mission critical systems built on modern architectures, platform engineering practices, and Amazon inspired operating models. As generative AI reshapes the software development landscape, Raja and his team created the AI Driven Development Lifecycle (AI-DLC) — an end to end, AI native methodology that re-imagines how large teams collaboratively build production-grade software in the AI era.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt; 
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/raj.png" alt="" width="120" height="160"&gt; 
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Raj Jain&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Raj is a Senior Solutions Architect, Developer Specialist at AWS. Prior to this role, Raj worked as a Senior Software Development Engineer at Amazon, where he helped build the security infrastructure underlying the Amazon platform. Raj is a published author in the Bell Labs Technical Journal, and has also authored IETF standards, AWS Security blogs, and holds twelve patents&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt; 
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/image-11.jpg" alt="" width="120" height="160"&gt; 
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Siddhesh Jog&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Siddhesh is a Senior Solutions Architect at AWS. He has worked in multiple industries in a wide variety of roles and is passionate about all things technology. At AWS Siddhesh is most excited to help customers transition to the AI Driven Development Lifecycle and enable them to build applications rapidly in a secure, complaint and cost efficient cloud environment.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt; 
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2024/06/28/wilmatos.jpeg" alt="" width="120" height="160"&gt; 
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Will Matos&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Will Matos is a Principal Specialist Solutions Architect with AWS’s Next Generation Developer Experience (NGDE) team, revolutionizing developer productivity through Generative AI, AI-powered chat interfaces, and code generation. With 27 years of technology, AI, and software development experience, he collaborates with product teams and customers to create intelligent solutions that streamline workflows and accelerate software development cycles. A thought leader engaging early adopters, Will bridges innovation and real-world needs.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Introducing the AWS Infrastructure as Code MCP Server: AI-Powered CDK and CloudFormation Assistance</title>
		<link>https://aws.amazon.com/blogs/devops/introducing-the-aws-infrastructure-as-code-mcp-server-ai-powered-cdk-and-cloudformation-assistance/</link>
		
		<dc:creator><![CDATA[Idriss Laouali Abdou]]></dc:creator>
		<pubDate>Fri, 28 Nov 2025 22:52:07 +0000</pubDate>
				<category><![CDATA[AWS Cloud Development Kit]]></category>
		<category><![CDATA[AWS CloudFormation]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Management Tools]]></category>
		<category><![CDATA[AI/ML]]></category>
		<category><![CDATA[AWS CDK]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Infrastructure as Code]]></category>
		<guid isPermaLink="false">9675b39f159909812cfc087a3fbf6613f209fbe8</guid>

					<description>Streamline your AWS infrastructure development with AI-powered documentation search, validation, and troubleshooting Introduction Today, we’re excited to introduce the AWS Infrastructure-as-Code (IaC) MCP Server, a new tool that bridges the gap between AI assistants and your AWS infrastructure development workflow. Built on the Model Context Protocol (MCP), this server enables AI assistants like Kiro CLI, […]</description>
										<content:encoded>&lt;p&gt;Streamline your AWS infrastructure development with AI-powered documentation search, validation, and troubleshooting&lt;/p&gt; 
&lt;h1&gt;Introduction&lt;/h1&gt; 
&lt;p&gt;Today, we’re excited to introduce the &lt;a href="https://awslabs.github.io/mcp/servers/aws-iac-mcp-server"&gt;AWS Infrastructure-as-Code (IaC) MCP Server&lt;/a&gt;, a new tool that bridges the gap between AI assistants and your AWS infrastructure development workflow. Built on the Model Context Protocol (MCP), this server enables AI assistants like &lt;a href="https://kiro.dev/cli/"&gt;Kiro CLI&lt;/a&gt;, Claude or Cursor to help you search &lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt; and&amp;nbsp;&lt;a href="https://aws.amazon.com/cdk/"&gt;Cloud Development Kit (CDK)&lt;/a&gt; documentation, validate templates, troubleshoot deployments, and follow best practices – all while maintaining the security of local execution.&lt;/p&gt; 
&lt;p&gt;Whether you’re writing AWS CloudFormation templates or AWS Cloud Development Kit (CDK) code, the IaC MCP Server acts as an intelligent companion that understands your infrastructure needs and provides contextual assistance throughout your development lifecycle.&lt;/p&gt; 
&lt;p&gt;The&amp;nbsp;&lt;a href="https://modelcontextprotocol.io/"&gt;Model Context Protocol (MCP)&lt;/a&gt;&amp;nbsp;is an open standard that enables AI assistants to securely connect to external data sources and tools. Think of it as a universal adapter that lets AI models interact with your development tools while keeping sensitive operations local and under your control.&lt;/p&gt; 
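&lt;p&gt;MCP servers are typically registered in an assistant’s configuration file. The entry below is a sketch of what such a registration might look like; the package name, keys, and file location shown here are assumptions that vary by assistant, so check the IaC MCP Server documentation for the authoritative snippet:&lt;/p&gt;

```json
{
  "mcpServers": {
    "aws-iac": {
      "command": "uvx",
      "args": ["awslabs.aws-iac-mcp-server@latest"],
      "env": { "AWS_PROFILE": "default", "AWS_REGION": "us-east-1" }
    }
  }
}
```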
&lt;p&gt;The IaC MCP Server provides nine specialized tools organized into two categories:&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;Remote Documentation Search Tools&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;These tools connect to the AWS Knowledge MCP backend to retrieve relevant, up-to-date information:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;search_cdk_documentation&lt;/strong&gt;&lt;br&gt; Search the AWS CDK knowledge base for APIs, concepts, and implementation guidance.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;search_cdk_samples_and_constructs&lt;/strong&gt;&lt;br&gt; Discover pre-built AWS CDK constructs and patterns from the AWS Construct Library.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;search_cloudformation_documentation&lt;/strong&gt;&lt;br&gt; Query CloudFormation documentation for resource types, properties, and intrinsic functions.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;read_iac_documentation_page&lt;/strong&gt;&lt;br&gt; Retrieve and read full CloudFormation and CDK documentation pages returned from searches or provided URLs.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h2&gt;&lt;strong&gt;Local Validation and Troubleshooting Tools&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;These tools run entirely on your machine:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;cdk_best_practices&lt;/strong&gt;&lt;br&gt; Access a curated collection of AWS CDK best practices and design principles.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;validate_cloudformation_template&lt;/strong&gt;&lt;br&gt; Perform syntax and schema validation using&amp;nbsp;cfn-lint&amp;nbsp;to catch errors before deployment.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;check_cloudformation_template_compliance&lt;/strong&gt;&lt;br&gt; Run security and compliance checks against your templates using AWS Guard rules and&amp;nbsp;cfn-guard.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;troubleshoot_cloudformation_deployment&lt;/strong&gt;&lt;br&gt; Analyze CloudFormation stack deployment failures with integrated CloudTrail event analysis. This tool will use your AWS credentials to analyze your stack status.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;get_cloudformation_pre_deploy_validation_instructions&lt;br&gt; &lt;/strong&gt;Returns instructions for CloudFormation’s pre-deployment validation feature, which validates templates during change set creation.&lt;/li&gt; 
&lt;/ol&gt; 
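&lt;p&gt;As a simplified illustration of the kind of structural checks that pre-deployment validation automates, the following stdlib-only Python sketch flags resources that are missing a &lt;code&gt;Type&lt;/code&gt;. It is illustrative only; the actual &lt;strong&gt;validate_cloudformation_template&lt;/strong&gt; tool delegates to cfn-lint, which validates against the full resource schemas:&lt;/p&gt;

```python
# Minimal structural check of a CloudFormation template (illustrative only;
# cfn-lint performs far deeper, schema-aware validation than this sketch).
import json

def basic_template_errors(template: dict) -> list[str]:
    errors = []
    resources = template.get("Resources")
    if not isinstance(resources, dict) or not resources:
        return ["Template must declare at least one resource under 'Resources'"]
    for name, res in resources.items():
        if "Type" not in res:
            errors.append(f"Resource '{name}' is missing required key 'Type'")
        elif not str(res["Type"]).startswith(("AWS::", "Custom::")):
            errors.append(f"Resource '{name}' has unrecognized type '{res['Type']}'")
    return errors

template = json.loads("""
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "GoodBucket": {"Type": "AWS::S3::Bucket"},
    "BadResource": {"Properties": {}}
  }
}
""")
errors = basic_template_errors(template)
print(errors)  # ["Resource 'BadResource' is missing required key 'Type'"]
```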
&lt;h3&gt;&lt;strong&gt;Key Use Cases&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;1. Intelligent Documentation Assistant&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Instead of manually searching through documentation, ask your AI assistant natural language questions:&lt;/p&gt; 
&lt;p&gt;“How do I create an S3 bucket with encryption enabled in CDK?”&lt;/p&gt; 
&lt;p&gt;The server searches CDK best practices and samples, returning relevant code examples and explanations.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;2. Proactive Template Validation&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Before deploying infrastructure changes:&lt;/p&gt; 
&lt;p&gt;User: “Validate my CloudFormation template and check for security issues”&lt;/p&gt; 
&lt;p&gt;AI Agent: [Uses validate_cloudformation_template and check_cloudformation_template_compliance]&lt;/p&gt; 
&lt;p&gt;“Found 2 issues: Missing encryption on EBS volumes, and S3 bucket lacks public access block configuration”&lt;/p&gt; 
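&lt;p&gt;The compliance check itself runs locally against cfn-guard rules; the sketch below is a simplified, self-contained illustration of the kind of finding it produces (the function and rule logic are illustrative, not the server’s actual implementation):&lt;/p&gt;

```python
# Illustrative sketch only: scan a parsed CloudFormation template (as a dict)
# for two common findings. The real server delegates to cfn-lint and cfn-guard.
def find_basic_issues(template):
    issues = []
    for name, res in template.get("Resources", {}).items():
        props = res.get("Properties", {})
        # Rule 1: EBS volumes should set Encrypted: true
        if res.get("Type") == "AWS::EC2::Volume" and not props.get("Encrypted"):
            issues.append(f"{name}: EBS volume is not encrypted")
        # Rule 2: S3 buckets should declare a public access block
        if res.get("Type") == "AWS::S3::Bucket" and "PublicAccessBlockConfiguration" not in props:
            issues.append(f"{name}: S3 bucket lacks a public access block")
    return issues

template = {
    "Resources": {
        "DataVolume": {"Type": "AWS::EC2::Volume", "Properties": {"Size": 100}},
        "LogsBucket": {"Type": "AWS::S3::Bucket", "Properties": {}},
    }
}
print(find_basic_issues(template))
# → ['DataVolume: EBS volume is not encrypted',
#    'LogsBucket: S3 bucket lacks a public access block']
```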
&lt;p&gt;&lt;strong&gt;3. Rapid Deployment Troubleshooting&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;When a stack deployment fails:&lt;/p&gt; 
&lt;p&gt;User: “My stack ‘stack_03’ in us-east-1 failed to deploy. What happened?”&lt;/p&gt; 
&lt;p&gt;AI Agent: [Uses troubleshoot_cloudformation_deployment with CloudTrail integration]&lt;/p&gt; 
&lt;p&gt;“The deployment failed due to insufficient IAM permissions. CloudTrail shows AccessDenied for ec2:CreateVpc. You need to add VPC permissions to your deployment role.”&lt;/p&gt; 
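&lt;p&gt;Conceptually, the troubleshooting tool walks the stack’s event history (via cloudformation:DescribeStackEvents) to surface the root-cause failure before correlating it with CloudTrail. A minimal, stdlib-only sketch of that first step, using illustrative event data rather than a live API call:&lt;/p&gt;

```python
# Sketch of the core troubleshooting step: given CloudFormation stack events
# (shaped like DescribeStackEvents output), return the first failure reason.
# The real tool layers CloudTrail event analysis on top of this.
def first_failure_reason(events):
    # DescribeStackEvents returns events newest-first; scan oldest-first
    # so the root cause comes before cascading rollback events.
    for event in reversed(events):
        if event.get("ResourceStatus", "").endswith("_FAILED"):
            return event.get("ResourceStatusReason", "no reason recorded")
    return None

events = [
    {"ResourceStatus": "ROLLBACK_IN_PROGRESS"},
    {"ResourceStatus": "CREATE_FAILED",
     "ResourceStatusReason": "API: ec2:CreateVpc Access Denied"},
    {"ResourceStatus": "CREATE_IN_PROGRESS"},
]
print(first_failure_reason(events))
# → API: ec2:CreateVpc Access Denied
```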
&lt;p&gt;&lt;strong&gt;4. Learning and Exploration&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;New to AWS CDK? The server helps you discover constructs and patterns:&lt;/p&gt; 
&lt;p&gt;User: “Show me how to build a serverless API”&lt;/p&gt; 
&lt;p&gt;AI Agent: [Searches CDK constructs and samples]&lt;/p&gt; 
&lt;p&gt;“Here are three approaches using API Gateway + Lambda…”&lt;/p&gt; 
&lt;h1&gt;Architecture and Security&lt;/h1&gt; 
&lt;h2&gt;Security Design&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Local Execution:&lt;/strong&gt; The MCP server runs entirely on your local machine using uv (the fast Python package manager). No code or templates are sent to external services except for documentation searches.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;AWS Credentials:&lt;/strong&gt; The server uses your existing AWS credentials (from&amp;nbsp;~/.aws/credentials, environment variables, or IAM roles) to access CloudFormation and CloudTrail APIs. This follows the same security model as the AWS CLI.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;stdio Communication:&lt;/strong&gt; The server communicates with AI assistants over standard input/output (stdio), with no network ports opened.&lt;/p&gt; 
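&lt;p&gt;Concretely, MCP messages are JSON-RPC 2.0 objects exchanged over the stdio pipe. A minimal sketch of what a tool-call request to this server could look like on the wire (the method name follows the MCP specification; the argument name shown is illustrative, not the server’s documented schema):&lt;/p&gt;

```python
import json

# Sketch of an MCP tool-call request as it travels over stdio.
# MCP frames are JSON-RPC 2.0 objects; "tools/call" is the standard
# MCP method for invoking a named tool with arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "validate_cloudformation_template",
        # "template_path" is an assumed argument name for illustration.
        "arguments": {"template_path": "./template.yaml"},
    },
}
line = json.dumps(request)  # one JSON object written to the server's stdin
print(json.loads(line)["params"]["name"])
# → validate_cloudformation_template
```

<p>Because everything rides on stdin/stdout of a locally spawned process, no listening socket is ever opened.</p>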
&lt;p&gt;&lt;strong&gt;Minimal Permissions:&lt;/strong&gt; For full functionality, the server requires read-only access to CloudFormation stacks and CloudTrail events—no write permissions needed for validation and troubleshooting workflows.&lt;/p&gt; 
&lt;h1&gt;Getting Started&lt;/h1&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;Python 3.10 or later&lt;/li&gt; 
 &lt;li&gt;uv&amp;nbsp;package manager&lt;/li&gt; 
 &lt;li&gt;AWS credentials configured locally&lt;/li&gt; 
 &lt;li&gt;MCP-compatible AI client (e.g., Kiro CLI, Claude Desktop)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Configuration&lt;/h2&gt; 
&lt;p&gt;Configure the MCP server in your MCP client configuration. In this post we focus on the Kiro CLI; edit&amp;nbsp;.kiro/settings/mcp.json:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-json"&gt;{
  "mcpServers": {
    "awslabs.aws-iac-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-iac-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "your-named-profile",
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Security Considerations&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Privacy Notice&lt;/strong&gt;: This MCP server executes AWS API calls using your credentials and shares the response data with your third-party AI model provider (e.g., Amazon Q, Claude Desktop, Cursor, VS Code). You are responsible for understanding your AI provider’s data handling practices and for ensuring compliance with your organization’s security and privacy requirements when using this tool with AWS resources.&lt;/p&gt; 
&lt;h3&gt;IAM Permissions&lt;/h3&gt; 
&lt;p&gt;The MCP server requires the following AWS permissions:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;For Template Validation and Compliance:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;No AWS permissions required (local validation only)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;For Deployment Troubleshooting:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;cloudformation:DescribeStacks&lt;/li&gt; 
 &lt;li&gt;cloudformation:DescribeStackEvents&lt;/li&gt; 
 &lt;li&gt;cloudformation:DescribeStackResources&lt;/li&gt; 
 &lt;li&gt;cloudtrail:LookupEvents (for CloudTrail deep links)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Example IAM policy:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-json"&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:DescribeStacks",
        "cloudformation:DescribeStackEvents",
        "cloudformation:DescribeStackResources",
        "cloudtrail:LookupEvents"
      ],
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Example Use Case With Kiro CLI&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;IMPORTANT: Ensure you have satisfied all prerequisites before attempting these commands.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;With the&amp;nbsp;mcp.json&amp;nbsp;file correctly set, try a sample prompt. In your terminal, run&amp;nbsp;kiro-cli chat&amp;nbsp;to start a chat session.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24608" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Figure-1-Kiro-CLI-with-AWS-IaC-MCP-server-.png" alt="Figure 1: Kiro-CLI with AWS IaC MCP server " width="638" height="752"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 1: Kiro-CLI with AWS IaC MCP server&lt;/strong&gt;&lt;/p&gt; 
&lt;h3&gt;Scenarios&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;“What are the CDK best practices for Lambda functions?”&lt;/strong&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24611" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Figure-2-Search-the-CDK-best-practices-for-Lambda-functions.png" alt="Figure 2 Search the CDK best practices for Lambda functions" width="574" height="955"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 2: Search the CDK best practices for Lambda functions&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;“Search for CDK samples that use DynamoDB with Lambda”&lt;/strong&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24612" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Figure-3-Search-for-CDK-samples-that-use-DynamoDB-with-Lambda.png" alt="Figure 3: Search for CDK samples that use DynamoDB with Lambda" width="637" height="906"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 3: Search for CDK samples that use DynamoDB with Lambda&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;“Validate my CloudFormation template at ./template.yaml”&lt;/strong&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24621" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Figure-4-Validate-my-CloudFormation-template-with-AWS-IaC-MCP-Server-1.png" alt="Figure 4: Validate my CloudFormation template with AWS IaC MCP Server" width="639" height="972"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 4: Validate my CloudFormation template with AWS IaC MCP Server&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;“Check if my template complies with security best practices”&lt;/strong&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter size-full wp-image-24614" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/28/Screenshot-2025-11-28-at-12.10.01 PM.png" alt="Figure 5: Check if my template complies with security best practices with AWS IaC MCP Server" width="637" height="363"&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 5: Check if my template complies with security best practices with AWS IaC MCP Server&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Best Practices&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Start with Documentation Search:&lt;/strong&gt; Before writing code, search for existing constructs and patterns&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Validate Early and Often:&lt;/strong&gt; Run validation tools before attempting deployment&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Check Compliance:&lt;/strong&gt; Use check_template_compliance to catch security issues during development&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Leverage CloudTrail:&lt;/strong&gt; When troubleshooting, the CloudTrail integration provides detailed failure context&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Follow CDK Best Practices:&lt;/strong&gt; Use the cdk_best_practices tool to align with AWS recommendations&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;What’s Next?&lt;/h2&gt; 
&lt;p&gt;The AWS IaC MCP Server represents a new paradigm in AI-assisted infrastructure development: one where AI assistants understand your tools, help you navigate complex documentation, and provide intelligent assistance throughout the development lifecycle.&lt;/p&gt; 
&lt;h2&gt;Get Involved&lt;/h2&gt; 
&lt;p&gt;The AWS IaC MCP Server is available now:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Documentation and GitHub Repository:&lt;/strong&gt; &lt;a href="https://awslabs.github.io/mcp/servers/aws-iac-mcp-server"&gt;aws-iac-mcp-server&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Feedback:&lt;/strong&gt; We welcome issues and pull requests, as well as responses to our IaC survey.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Ready to supercharge your infrastructure as code development? Install the IaC MCP Server today and experience AI-powered assistance for your AWS CDK and CloudFormation workflows.&lt;/p&gt; 
&lt;p&gt;Have questions or feedback? Reach out to the blog authors on the AWS Developer Forums.&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;About the Authors&lt;/strong&gt;&lt;/h2&gt; 
&lt;footer&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/10/08/idriss-profile-cut-scaled.jpg" alt="" width="127" height="127"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Idriss Laouali Abdou&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Idriss is a Sr. Product Manager Technical on the AWS Infrastructure-as-Code team based in Seattle. He focuses on improving developer productivity through AWS CloudFormation and StackSets Infrastructure provisioning experiences. Outside of work, you can find him creating educational content for thousands of students, cooking, or dancing.&lt;/p&gt; 
 &lt;/div&gt; 
 &lt;div class="blog-author-box"&gt; 
  &lt;div class="blog-author-image"&gt;
   &lt;img loading="lazy" class="wp-image-11636 alignleft" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/08/09/brian-terry.jpg" alt="" width="120" height="160"&gt;
  &lt;/div&gt; 
  &lt;h3 class="lb-h4"&gt;Brian Terry&lt;/h3&gt; 
  &lt;p style="text-align: left"&gt;Brian Terry, Senior WW Data &amp;amp; AI PSA, is an innovation leader with more than 20 years of experience in technology and engineering. Brian is pursuing a PhD in computer science at the University of North Dakota and has spearheaded generative AI projects, optimized infrastructure scalability, and driven partner integration strategies. He is passionate about leveraging technology to deliver scalable, resilient solutions that foster business growth and innovation.&lt;/p&gt; 
 &lt;/div&gt; 
&lt;/footer&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>The Future of AWS CodeCommit</title>
		<link>https://aws.amazon.com/blogs/devops/aws-codecommit-returns-to-general-availability/</link>
		
		<dc:creator><![CDATA[Anthony Hayes]]></dc:creator>
		<pubDate>Mon, 24 Nov 2025 19:56:45 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS CodeCommit]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<guid isPermaLink="false">ae48e6a81b9c7489fa708005b19be9743a6e50fc</guid>

					<description>Back in July 2024, we announced plans to de-emphasize AWS CodeCommit based on adoption patterns and our assessment of customer needs. We never stopped looking at the data or listening to you, and what you’ve shown us is clear: you need an AWS-managed solution for your code repositories. Based on this feedback, CodeCommit is returning […]</description>
										<content:encoded>&lt;p&gt;Back in July 2024, we announced plans to de-emphasize AWS CodeCommit based on adoption patterns and our assessment of customer needs. We never stopped looking at the data or listening to you, and what you’ve shown us is clear: you need an AWS-managed solution for your code repositories. Based on this feedback, CodeCommit is returning to full General Availability, effective immediately.&lt;/p&gt; 
&lt;h3&gt;We Listened, and We Heard You&lt;/h3&gt; 
&lt;p&gt;After the de-emphasis announcement last year, we heard from many of you. Your feedback was direct and revealing. You told us that CodeCommit isn’t just another code repository for you—it’s a critical piece of your infrastructure. Its deep IAM integration, VPC endpoint support, CloudTrail logging, and seamless connectivity with CodePipeline and CodeBuild provide significant value, especially for teams operating in regulated industries or those who want all their development infrastructure within AWS boundaries. In short, we learned that CodeCommit is essential for many of you, so we’re bringing it back.&lt;/p&gt; 
&lt;p&gt;We acknowledge the uncertainty the de-emphasis has caused. If you invested time and resources planning or executing a migration away from CodeCommit, we apologize. We’ve learned from this, and we’re committed to doing better.&lt;/p&gt; 
&lt;h3&gt;What’s Changing Today&lt;/h3&gt; 
&lt;p&gt;Here’s what you need to know:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;CodeCommit is open to new customers again&lt;/strong&gt; – New customer sign-ups are open as of today. If you’ve been waiting to onboard new accounts or create repositories, you can do so right now through the AWS Console, CLI, or APIs.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;For current and former customers – &lt;/strong&gt;If you already migrated away, we understand you may have completed your transition to GitHub, GitLab, Bitbucket, or another provider. Those are excellent platforms, and we fully support your decision to use them. If you’re interested in returning to CodeCommit, our support team and account teams are available to help.&lt;/p&gt; 
&lt;p&gt;If you’re mid-migration, you can pause or reverse your plans. Contact AWS Support or your account team to discuss your specific situation and determine the best path forward.&lt;/p&gt; 
&lt;p&gt;If you stayed with CodeCommit, thank you for your patience during this period. We’re working through the backlog of feature requests and support tickets that accumulated, prioritizing by customer need. Continue to tell us how we can improve the service and support your workflows (human, machine, and agentic) moving forward.&lt;/p&gt; 
&lt;h3&gt;What’s Coming Next&lt;/h3&gt; 
&lt;p&gt;We’re not just maintaining CodeCommit—we’re investing in it. Here’s what’s on the roadmap:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Git LFS Support (Q1 2026)&lt;/strong&gt; – This has been your most requested feature. Git Large File Storage will enable you to efficiently manage large binary files like images, videos, design assets, and compiled binaries without bloating your repositories. You’ll get faster clones, better performance, and cleaner version history for large assets.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Regional Expansions (Starting Q3 2026)&lt;/strong&gt; – CodeCommit will expand to additional AWS Regions, starting with eu-south-2 and ca-west-1, bringing the service closer to where you’re building and deploying your applications.&lt;/p&gt; 
&lt;p&gt;We’ll share more details about these features and additional roadmap items in the coming months. Keep an eye on our &lt;a href="https://aws.amazon.com/new/"&gt;What’s New&lt;/a&gt; feed for the latest AWS launches.&lt;/p&gt; 
&lt;h3&gt;Pricing, SLA, and Getting Started&lt;/h3&gt; 
&lt;p&gt;Pricing remains unchanged—you can review the current structure on the &lt;a href="https://aws.amazon.com/codecommit/pricing/"&gt;CodeCommit pricing page&lt;/a&gt;. We continue to maintain our 99.9% uptime SLA as defined in our service terms.&lt;/p&gt; 
&lt;p&gt;If you’re new to CodeCommit or returning after a migration, check out our &lt;a href="https://docs.aws.amazon.com/codecommit/latest/userguide/getting-started-topnode.html"&gt;Getting Started Guide&lt;/a&gt; for step-by-step instructions. For migration assistance or questions about your specific setup, contact AWS Support or your account team.&lt;/p&gt; 
&lt;h3&gt;Available Now&lt;/h3&gt; 
&lt;p&gt;AWS CodeCommit is available now in 29 regions. New customers can begin creating repositories immediately. Visit the &lt;a href="https://console.aws.amazon.com/console/home/?nc2=h_si&amp;amp;src=header-signin"&gt;CodeCommit console&lt;/a&gt; to get started.&lt;/p&gt; 
&lt;p&gt;Thank you for your feedback, your patience, and your continued trust in AWS. We’re committed to making CodeCommit the best integrated Git repository service for AWS development.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Learn More:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/codecommit/"&gt;AWS CodeCommit Documentation&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/"&gt;AWS DevOps Blog&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/codecommit/pricing/"&gt;CodeCommit Pricing&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Your Guide to the Developer Tools Track at AWS re:Invent 2025</title>
		<link>https://aws.amazon.com/blogs/devops/your-guide-to-the-developer-tools-track-at-aws-reinvent-2025/</link>
		
		<dc:creator><![CDATA[Brian Beach]]></dc:creator>
		<pubDate>Mon, 24 Nov 2025 12:27:35 +0000</pubDate>
				<category><![CDATA[AWS re:Invent]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[re:invent]]></category>
		<guid isPermaLink="false">28b6774149d1684bb4d78eaac74473382c34c023</guid>

					<description>AWS re:Invent 2025 is just around the corner, and if you’re a developer looking to level up your skills, the Developer Tool (DVT) track has an incredible lineup waiting for you. From CI/CD pipelines and full-stack development to Infrastructure as Code and AI-powered coding agents, this year’s sessions will help you build faster, smarter, and […]</description>
										<content:encoded>&lt;p&gt;&lt;a href="https://reinvent.awsevents.com/"&gt;AWS re:Invent 2025&lt;/a&gt; is just around the corner, and if you’re a developer looking to level up your skills, the Developer Tool (DVT) track has an incredible lineup waiting for you. From CI/CD pipelines and full-stack development to Infrastructure as Code and AI-powered coding agents, this year’s sessions will help you build faster, smarter, and more efficiently. Here’s your essential guide to navigating the week.&lt;/p&gt; 
&lt;h2&gt;Must-Attend Sessions&lt;/h2&gt; 
&lt;p&gt;AWS re:Invent is a learning-focused conference, and the best place for developers to learn is in one of the roughly 75 sessions on the Developer Tools track. With breakout sessions, lightning talks, chalk talks, code talks, workshops, builder sessions, and meetups, you are sure to find something that appeals to the developer in you. Check out the event catalog, or start with these standout sessions.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;DVT202: Continuous integration and continuous delivery (CI/CD) on AWS&lt;/strong&gt; – Learn about creating complete CI/CD pipelines using infrastructure as code on AWS, with hands-on insights into planning work, collaborating on code, and deploying applications. &lt;em&gt;Mandalay Bay – Monday 10:00 AM&lt;/em&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DVT203: AWS infrastructure as code: A year in review&lt;/strong&gt; – Discover the latest features and improvements for &lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt; and &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt;, and learn how these tools can bring rigor, clarity, and reliability to your application development. &lt;em&gt;MGM Grand – Monday 10:30 AM&lt;/em&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DVT204: What’s new in full-stack AWS app development&lt;/strong&gt; – Find out how AWS is evolving to help web developers deliver differentiating experiences at 10x speed with solutions that empower you to get started easily, ship quickly, and iterate rapidly. &lt;em&gt;Mandalay Bay – Monday 12:00 PM&lt;/em&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DVT209: Kiro: Your agentic IDE for spec-driven development&lt;/strong&gt; – Explore how &lt;a href="https://kiro.dev/"&gt;Kiro&lt;/a&gt; is revolutionizing development with spec-driven workflows, agent hooks, multimodal agent chat, and MCP support to help you go from idea to production faster. &lt;em&gt;MGM Grand – Wednesday 11:30 AM&lt;/em&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DVT405: From code completion to autonomous agents: The evolution of software development&lt;/strong&gt; – Journey through the evolution of AI-powered coding agents from inline code completion to sophisticated autonomous tools, grounded in empirical evidence and real-world applications. &lt;em&gt;MGM Grand – Wednesday 3:00 PM&lt;/em&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DVT207: Developer experience economics: moving past productivity metrics&lt;/strong&gt; – Learn Amazon’s approach to understanding the impact of developer experience and tooling, and discover how to bring strategic thinking to your team’s developer experience improvements. &lt;em&gt;Mandalay Bay – Tuesday 5:30 PM&lt;/em&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;House of Kiro&lt;/h2&gt; 
&lt;p&gt;Start your journey at the House of Kiro in the Venetian. Walk through Kiro’s haunted house filled with developer nightmares and horrors, and explore how Kiro brings structure to coding chaos through spec-driven development, vibe coding, and agent hooks. If you survive the haunted house, you will be rewarded with Kiro swag.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24488" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/23/riv25-dvt-002-2.png" alt="Rustic wooden cabin structure with &amp;quot;KIRO&amp;quot; branding and ghost logo on the roof, featuring boarded-up windows with glowing purple light emanating from behind, creating a haunted house aesthetic with a front porch and chimney." width="819" height="522"&gt;&lt;/p&gt; 
&lt;h2&gt;AWS Village&lt;/h2&gt; 
&lt;p&gt;Visit the &lt;a href="https://reinvent.awsevents.com/experience/expo/"&gt;AWS Village in the Expo&lt;/a&gt; at the Venetian Level 2 Hall B to speak with me and other experts at either the Kiro kiosk or the Developer Tools kiosk, covering CodePipeline, CodeBuild, CloudFormation, CDK, and all the essential developer tools.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;The Venetian, Monday, Dec 1: 4:00 PM – 7:00 PM&lt;/li&gt; 
 &lt;li&gt;Tuesday, Dec 2: 10:00 AM – 6:00 PM&lt;/li&gt; 
 &lt;li&gt;Wednesday, Dec 3: 10:00 AM – 6:00 PM&lt;/li&gt; 
 &lt;li&gt;Thursday, Dec 4: 10:00 AM – 4:00 PM&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24484" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/23/riv25-dvt-003.png" alt="AWS booth at a conference or trade show featuring the iconic AWS logo and smile design suspended above a multi-level exhibition space with purple and blue gradient lighting, surrounded by attendees exploring various demo stations." width="1920" height="1080"&gt;&lt;/p&gt; 
&lt;h2&gt;Builders Loft&lt;/h2&gt; 
&lt;p&gt;Located at the south end of the strip in Mandalay Bay, the Builders Loft offers a collaborative workspace with dedicated co-working spaces and meetup zones. Enjoy coffee, snacks, SWAG, and daily tech challenges for a chance to win AWS credits. Kiro experts will be at the Builders Loft Monday through Thursday:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;8:00 AM – 12:00 PM: Co-working space for one-on-one consultations&lt;/li&gt; 
 &lt;li&gt;12:00 PM – 1:00 PM: Daily meetup in the meetup space&lt;/li&gt; 
 &lt;li&gt;4:50 PM – 5:00 PM: Q&amp;amp;A in the whiteboard section&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24486" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/23/riv25-dvt-001.png" alt="Isometric 3D rendering of an AWS re:Invent expo floor layout featuring purple and pink branded kiosks, blue seating areas with round tables, interactive display stations, and workspace zones in a modern conference environment." width="936" height="624"&gt;&lt;/p&gt; 
&lt;h2&gt;Hands-On Challenges&lt;/h2&gt; 
&lt;h3&gt;Kiro’s Labyrinth&lt;/h3&gt; 
&lt;p&gt;Stop by the Kiro kiosk in the Venetian Expo to participate in Kiro’s Labyrinth, a coding challenge where you’ll help Kiro escape from a spooky Halloween maze and win prizes. The Kiro code champions will be crowned in DVT221 at Mandalay Bay on Thursday at 11:30 AM.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24482" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/23/riv25-dvt-005.png" alt="Atmospheric 3D render of a medieval dungeon or castle interior with dramatic red and orange lighting from wall-mounted torches, featuring stone archways, staircases, cobblestone floors, and blue accent lighting creating a moody gaming environment." width="936" height="528"&gt;&lt;/p&gt; 
&lt;h3&gt;Kiroween Hackathon&lt;/h3&gt; 
&lt;p&gt;Build something wicked for &lt;a href="https://kiroween.devpost.com/"&gt;Kiroween&lt;/a&gt;, the annual hackathon that started on Halloween and ends on Friday, December 5th, the last day of re:Invent. Need help? Visit us in the Builders Loft in Mandalay Bay, Monday-Friday, 8:30 AM – 12:00 PM, or the Developer Pavilion in the Venetian whenever the Expo is open.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="alignnone size-full wp-image-24481" src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/11/23/riv25-dvt-006.png" alt="Purple banner with &amp;quot;KIROWEEN&amp;quot; text in white, flanked by three ghost characters including the Kiro ghost mascot, a mummy ghost, and a skeleton ghost, creating aHalloween-themed branding element." width="936" height="184"&gt;&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;Make the most of your re:Invent experience by attending these sessions, connecting with experts at the AWS Village and Builders Loft, and participating in hands-on challenges. Whether you’re interested in CI/CD, infrastructure as code, AI-powered development, or just want to network with fellow builders, the Developer Tools track has something for everyone. See you in Vegas!&lt;/p&gt;</content:encoded>
					
		
		
			</item>
	</channel>
</rss>