<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>AWS Developer Tools Blog</title>
	<atom:link href="https://aws.amazon.com/blogs/developer/feed/" rel="self" type="application/rss+xml"/>
	<link>https://aws.amazon.com/blogs/developer/</link>
	<description/>
	<lastBuildDate>Mon, 06 Apr 2026 17:44:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>Smithy Java client framework is now generally available</title>
		<link>https://aws.amazon.com/blogs/developer/smithy-java-client-framework-is-now-generally-available/</link>
					
		
		<dc:creator><![CDATA[Manuel Sugawara]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 17:41:04 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Java]]></category>
		<category><![CDATA[smithy]]></category>
		<guid isPermaLink="false">e3088a36806335b9ee66f405fb7d47ecdfe10dbc</guid>

					<description>Smithy Java client code generation is now generally available. You can use it to build type-safe, protocol-agnostic Java clients directly from Smithy models. With Smithy Java, serialization, protocol handling, and request/response lifecycles are all generated automatically from your model. This removes the need to write or maintain any of this code by hand. In this […]</description>
										<content:encoded>&lt;p&gt;&lt;a href="https://github.com/smithy-lang/smithy-java" target="_blank" rel="noopener"&gt;Smithy Java&lt;/a&gt; client code generation is now generally available. You can use it to build type-safe, protocol-agnostic Java clients directly from Smithy models. With Smithy Java, serialization, protocol handling, and request/response lifecycles are all generated automatically from your model. This removes the need to write or maintain any of this code by hand.&lt;/p&gt; 
&lt;p&gt;In this post, you will learn what Smithy Java client generation is, how it works, what makes it different, and how you can use it. Modern service development is built on strong contracts and automation. &lt;a href="https://smithy.io/" target="_blank" rel="noopener"&gt;Smithy&lt;/a&gt; provides a model-driven approach to defining services and generating code from those definitions. It produces clients, services, and documentation from a single source of truth that stays aligned with your API as it evolves. Smithy Java client code generation enforces protocol correctness and removes serialization boilerplate, so you can focus on building features instead of hand-writing requests and responses.&lt;/p&gt; 
&lt;h2&gt;How it works&lt;/h2&gt; 
&lt;p&gt;At a high level, Smithy Java client code generation transforms Smithy models into strongly typed Java clients.&lt;/p&gt; 
&lt;h3&gt;Model-driven development&lt;/h3&gt; 
&lt;p&gt;At the core of the workflow is modeling services using Smithy. You define services, operations, and data shapes in a declarative format that captures API structure, constraints, and protocol bindings. These models act as the canonical definition of the API surface. For example:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-smithy"&gt;namespace com.example

use aws.api#service
use smithy.protocols#rpcv2Cbor

@title("Coffee Shop Service")
@rpcv2Cbor
@service(sdkId: "CoffeeShop")
service CoffeeShop {
    version: "2024-08-23"
    operations: [
        GetMenu
    ]
}

@readonly
operation GetMenu {
    output := {
        items: CoffeeItems
    }
} 
...
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Smithy Java consumes the models and produces Java client code. The generated output includes typed operations, serializers, deserializers, and protocol handling.&lt;/p&gt; 
&lt;p&gt;For more information about writing Smithy models, see &lt;a href="https://smithy.io/2.0/quickstart.html" target="_blank" rel="noopener"&gt;Smithy’s quick start documentation&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Generated clients&lt;/h3&gt; 
&lt;p&gt;The generated clients support a range of features that are typical for client-service communication, including request/response handling, serialization, protocol negotiation, retries, error mapping, and custom interceptors. You only need to define these behaviors in the model, and Smithy Java generates the code for you.&lt;/p&gt; 
&lt;p&gt;The following is an example of a generated Java client:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;var client = CoffeeShopClient.builder()
    .endpointProvider(EndpointResolver.staticEndpoint("http://localhost:8888"))
    .build();

var menu = client.getMenu();&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;You can regenerate clients whenever the model changes, keeping them up to date without writing any code by hand.&lt;/p&gt; 
&lt;p&gt;For more information about how to start generating Java clients from Smithy models, see our &lt;a href="https://smithy.io/2.0/languages/java/quickstart.html" target="_blank" rel="noopener"&gt;quick start guide&lt;/a&gt;.&lt;/p&gt; 
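&lt;p&gt;Code generation is typically driven by the Smithy build configuration. As a rough sketch only (the plugin name and settings below are illustrative, not the exact configuration; the quick start guide has the authoritative setup), a &lt;code&gt;smithy-build.json&lt;/code&gt; entry might look like the following:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-json"&gt;{
    "version": "1.0",
    "plugins": {
        "java-client-codegen": {
            "service": "com.example#CoffeeShop",
            "namespace": "com.example.coffeeshop"
        }
    }
}&lt;/code&gt;&lt;/pre&gt; 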
&lt;h2&gt;Key capabilities&lt;/h2&gt; 
&lt;h3&gt;Protocol flexibility&lt;/h3&gt; 
&lt;p&gt;Smithy Java generated clients are protocol-agnostic. The framework includes built-in support for HTTP transport, AWS protocols (including &lt;a href="https://smithy.io/2.0/aws/protocols/aws-json-1_0-protocol.html" target="_blank" rel="noopener"&gt;AWS JSON 1.0&lt;/a&gt;/&lt;a href="https://smithy.io/2.0/aws/protocols/aws-json-1_1-protocol.html" target="_blank" rel="noopener"&gt;1.1&lt;/a&gt;, &lt;a href="https://smithy.io/2.0/aws/protocols/aws-restjson1-protocol.html" target="_blank" rel="noopener"&gt;restJson1&lt;/a&gt;, &lt;a href="https://smithy.io/2.0/aws/protocols/aws-restxml-protocol.html" target="_blank" rel="noopener"&gt;restXml&lt;/a&gt; and &lt;a href="https://smithy.io/2.0/aws/protocols/aws-query-protocol.html" target="_blank" rel="noopener"&gt;Query&lt;/a&gt;), and &lt;a href="https://smithy.io/2.0/additional-specs/protocols/smithy-rpc-v2.html" target="_blank" rel="noopener"&gt;Smithy RPCv2 CBOR&lt;/a&gt;. You can swap protocols at runtime without rebuilding the client, enabling gradual protocol migrations and multi-protocol support with no code changes.&lt;/p&gt; 
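&lt;p&gt;In the Smithy model itself, a protocol is declared as a trait on the service shape. As a sketch based on the earlier Coffee Shop model, the same service could be bound to &lt;code&gt;restJson1&lt;/code&gt; instead of RPCv2 CBOR by swapping the protocol trait, with the rest of the model unchanged:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-smithy"&gt;use aws.protocols#restJson1

@title("Coffee Shop Service")
@restJson1
@service(sdkId: "CoffeeShop")
service CoffeeShop {
    version: "2024-08-23"
    operations: [
        GetMenu
    ]
}&lt;/code&gt;&lt;/pre&gt; 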
&lt;h3&gt;Dynamic client&lt;/h3&gt; 
&lt;p&gt;Not every use case requires code generation at build time. Smithy Java includes a dynamic client that loads Smithy models at runtime and can interact with any service API without a codegen step. This is particularly useful for building tools, service aggregators, or systems that must interact with services that are unknown at build time, all while keeping the deployment footprint small.&lt;/p&gt; 
&lt;p&gt;The following is an example of calling the Coffee Shop service using the &lt;code&gt;DynamicClient&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;var model = Model.assembler().addImport("model.smithy").assemble().unwrap();
var serviceId = ShapeId.from("com.example#CoffeeShop");
var client = DynamicClient.builder().model(model).serviceId(serviceId).build();
var result = client.call("GetMenu");&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Shape code generation independent of services&lt;/h3&gt; 
&lt;p&gt;Smithy Java can generate type-safe Java classes from Smithy shapes without any service context. This extends Smithy’s model-first approach beyond service calls into the data and logic layers of your system, enabling code reuse and consistency across projects that share common types.&lt;/p&gt; 
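&lt;p&gt;As an illustration, a shared data shape can be defined in Smithy with no service attached. The shape below is a hypothetical sketch:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-smithy"&gt;namespace com.example

structure Coffee {
    @required
    name: String

    description: String
}&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Generated Java classes for shapes like this typically expose a builder-style API (for example, &lt;code&gt;Coffee.builder().name("Latte").build()&lt;/code&gt;), although the exact shape of the generated code is determined by the generator.&lt;/p&gt; 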
&lt;h3&gt;Built on Java virtual threads&lt;/h3&gt; 
&lt;p&gt;Smithy Java is built from the ground up around &lt;a href="https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html" target="_blank" rel="noopener"&gt;Java 21’s virtual threads&lt;/a&gt;. Instead of exposing complex async APIs with callbacks or reactive streams, it provides a blocking-style interface that is straightforward to read, write, and debug, without sacrificing performance. Users can concentrate on their business logic while letting Smithy Java and the JVM handle task scheduling, synchronization, and structured error handling.&lt;/p&gt; 
&lt;p&gt;The following example demonstrates using &lt;a href="https://aws.amazon.com/transcribe/" target="_blank" rel="noopener"&gt;Amazon Transcribe&lt;/a&gt; with Smithy Java’s blocking event streams API. To send an event, Smithy clients use an &lt;code&gt;EventStreamWriter&amp;lt;T&amp;gt;&lt;/code&gt; with a &lt;code&gt;write(T event)&lt;/code&gt; method, and to receive an event the client uses an &lt;code&gt;EventStreamReader&lt;/code&gt; with a &lt;code&gt;T read()&lt;/code&gt; method. For example:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;// Create an Amazon Transcribe client
var client = TranscribeClient.builder().build();
var audioStream = EventStream.&amp;lt;AudioStream&amp;gt;newWriter();

// Create a stream transcription request
var request = StartStreamTranscriptionInput.builder().audioStream(audioStream).build();

// Start a virtual thread to send the audio that we want to transcribe
Thread.startVirtualThread(() -&amp;gt; {
    try (var audioStreamWriter = audioStream.asWriter()) {
        for (var chunk : iterableAudioChunks()) {
            var event = AudioEvent.builder().audioChunk(chunk).build();
            audioStreamWriter.write(AudioStream.builder().audioEvent(event).build());
        }
    }
});

// Send the request to Amazon Transcribe
var response = client.startStreamTranscription(request);

// Start a virtual thread to read the transcription results
Thread.startVirtualThread(() -&amp;gt; {
    // The reader
    try (var results = response.getTranscriptResultStream().asReader()) {
        // The reader implements Iterable
        for (var event : results) {
            switch (event) {
                case TranscriptResultStream.TranscriptEventMember transcript -&amp;gt; {
                    var transcriptText = getTranscript(transcript);
                    if (transcriptText != null) {
                        appendAudioTranscript(transcriptText);
                    }
                }
                default -&amp;gt; throw new IllegalStateException("Unexpected event " + event);
            }
        }
    }
});&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this post, I explained what Smithy Java client generation is and how it works. With this general availability release, Smithy Java’s public APIs are now stable and we are committed to backwards compatibility, making the framework ready for use in production systems. To get started with Smithy Java client code generation, use our &lt;a href="https://smithy.io/2.0/languages/java/quickstart.html" target="_blank" rel="noopener"&gt;quick start guide&lt;/a&gt; and &lt;a href="https://smithy.io/2.0/languages/java/index.html" target="_blank" rel="noopener"&gt;documentation&lt;/a&gt;. If you want to send us feedback, ask a question, or start a discussion, you can reach us through &lt;a href="https://github.com/smithy-lang/smithy-java/issues" target="_blank" rel="noopener"&gt;GitHub issues&lt;/a&gt; and &lt;a href="https://github.com/smithy-lang/smithy-java/discussions" target="_blank" rel="noopener"&gt;GitHub discussions&lt;/a&gt;.&lt;/p&gt; 
&lt;hr&gt; 
&lt;h2&gt;About the author&lt;/h2&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Smithy Kotlin client code generation now generally available</title>
		<link>https://aws.amazon.com/blogs/developer/smithy-kotlin-client-code-generation-now-generally-available/</link>
					
		
		<dc:creator><![CDATA[Omar Perez]]></dc:creator>
		<pubDate>Thu, 02 Apr 2026 15:24:42 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[Kotlin]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[smithy]]></category>
		<guid isPermaLink="false">ed10045367cdd2883286784b20e72a07634d9aa8</guid>

					<description>Smithy Kotlin client code generation is now generally available. With Smithy Kotlin, you can keep client libraries in sync with evolving service APIs. By using client code generation, you can reduce repetitive work and instead automatically create type-safe Kotlin clients from your service models. In this post, you will learn what Smithy Kotlin client generation is, how it works, and how you can use it.</description>
										<content:encoded>&lt;p&gt;&lt;a href="https://github.com/smithy-lang/smithy-kotlin" target="_blank" rel="noopener noreferrer"&gt;Smithy Kotlin&lt;/a&gt;&amp;nbsp;client code generation is now generally available. With Smithy Kotlin, you can keep client libraries in sync with evolving service APIs. By using client code generation, you can reduce repetitive work and instead, automatically create type-safe Kotlin clients from your service models. In this post, you will learn what Smithy Kotlin client generation is, how it works, and how you can use it.&lt;/p&gt; 
&lt;p&gt;Modern service development increasingly relies on strong contracts, automation, and consistency. &lt;a href="https://smithy.io/" target="_blank" rel="noopener noreferrer"&gt;Smithy&lt;/a&gt; provides a model-driven approach to defining services and enables code generation from those definitions, helping you to produce reliable clients from a single source of truth.&lt;/p&gt; 
&lt;h2&gt;How it works&lt;/h2&gt; 
&lt;p&gt;At a high level, Smithy Kotlin client code generation transforms Smithy service models into strongly typed Kotlin clients. This process bridges the gap between API design and implementation, producing code that handles serialization, protocol details, and request/response lifecycles automatically.&lt;/p&gt; 
&lt;h3&gt;Model-driven development&lt;/h3&gt; 
&lt;p&gt;At the core of the workflow is modeling services using Smithy. You can define services, operations, and data shapes in a declarative format that captures structure, constraints, and protocol bindings. These models specify the canonical definition of the API surface. For example:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-php"&gt;namespace com.example

use aws.api#service
use smithy.protocols#rpcv2Cbor

@title("Coffee Shop Service")
@rpcv2Cbor
@service(sdkId: "CoffeeShop")
service CoffeeShop {
    version: "2024-08-23"
    operations: [
        GetMenu
    ]
}

@http(method: "GET", uri: "/menu")
@readonly
operation GetMenu {
    output := {
        items: CoffeeItems
    }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;Smithy Kotlin consumes the models and produces Kotlin client code. The generated output includes typed operations, serializers, and deserializers, maintaining alignment between the model and client implementation.&lt;/p&gt; 
&lt;p&gt;For more information about writing Smithy models, see &lt;a href="https://smithy.io/2.0/quickstart.html" target="_blank" rel="noopener noreferrer"&gt;Smithy’s quick start documentation&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Clients&lt;/h3&gt; 
&lt;p&gt;The generated clients support a range of features typical for service communication, including request/response handling, serialization, protocols, and error mapping. You only need to define these features in the model, and Smithy Kotlin writes the code for you. Because Smithy Kotlin targets Kotlin and generated clients run on the Java Virtual Machine (JVM), they integrate naturally with existing language tools. You can incorporate them into modern build systems, use concurrency features, and combine them with established libraries and frameworks already used in Kotlin. The following is an example of a generated Kotlin client:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-php"&gt;CoffeeShopClient {
 &amp;nbsp; &amp;nbsp;endpointProvider = CoffeeShopEndpointProvider {
 &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;endpointUrl = Url.parse("http://localhost:8888")
 &amp;nbsp; &amp;nbsp;}
}.use { client -&amp;gt;
&amp;nbsp; &amp;nbsp; val menu =&amp;nbsp;client.getMenu()
}&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;For more information about how to start generating Kotlin clients from Smithy models, see the &lt;a href="https://smithy.io/2.0/languages/kotlin/client/generating-clients.html" target="_blank" rel="noopener noreferrer"&gt;client generation guide&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;What does general availability mean?&lt;/h2&gt; 
&lt;p&gt;Smithy Kotlin has been in development and available in developer preview for a few years. This milestone reflects production readiness, stability, and broader confidence in adopting the generated clients as part of standard development workflows.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this blog post, we covered what Smithy Kotlin client generation is, how it works, and how you can use it. To get started with Smithy Kotlin client code generation, see the &lt;a href="https://github.com/smithy-lang/smithy-examples/tree/main/smithy-kotlin-examples/quickstart-kotlin" target="_blank" rel="noopener noreferrer"&gt;quick start example&lt;/a&gt; and &lt;a href="https://smithy.io/2.0/languages/kotlin/index.html" target="_blank" rel="noopener noreferrer"&gt;documentation page&lt;/a&gt;. If you’d like to share feedback, ask a question, or start a discussion, you can reach us through &lt;a href="https://github.com/smithy-lang/smithy-kotlin/issues" target="_blank" rel="noopener noreferrer"&gt;GitHub issues&lt;/a&gt; and &lt;a href="https://github.com/smithy-lang/smithy-kotlin/discussions" target="_blank" rel="noopener noreferrer"&gt;GitHub discussions&lt;/a&gt;.&lt;/p&gt; 
&lt;hr&gt; 
&lt;h2&gt;About the author&lt;/h2&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Upgrading AWS CLI From v1 to v2 Using the Migration Tool</title>
		<link>https://aws.amazon.com/blogs/developer/upgrading-aws-cli-from-v1-to-v2-using-the-migration-tool/</link>
					
		
		<dc:creator><![CDATA[Ahmed Moustafa]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 22:53:53 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS Command Line Interface]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Technical How-to]]></category>
		<guid isPermaLink="false">d6b77276aae18d8b3f6d5f96afe597563841bdbb</guid>

					<description>Upgrading from AWS Command Line Interface (AWS CLI) v1 to AWS CLI v2 brings valuable improvements, but requires attention to several changes that may affect your existing workflows, such as failing commands or misconfiguration. The AWS CLI v1-to-v2 Migration Tool helps you identify and resolve issues before upgrading, making the transition easier. It analyzes bash scripts […]</description>
										<content:encoded>&lt;p&gt;Upgrading from &lt;a href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-welcome.html" target="_blank" rel="noopener"&gt;AWS Command Line Interface (AWS CLI) v1&lt;/a&gt; to &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" target="_blank" rel="noopener"&gt;AWS CLI v2&lt;/a&gt; brings valuable improvements, but requires attention to several changes that may affect your existing workflows, such as failing commands or misconfiguration.&lt;/p&gt; 
&lt;p&gt;The AWS CLI v1-to-v2 Migration Tool helps you identify and resolve issues before upgrading, making the transition easier. It analyzes bash scripts containing AWS CLI v1 commands whose behavior differs in AWS CLI v2. The tool will either suggest a change to a command or guide you to resolve a potential risk. It can also automatically create an updated version of the script with the suggested changes applied. Where applicable, the migration tool changes the commands in a way that preserves AWS CLI version 1 behavior.&lt;/p&gt; 
&lt;p&gt;The AWS CLI v1-to-v2 Migration Tool is a standalone tool compatible with &lt;i&gt;any&lt;/i&gt; version of AWS CLI v1, and does not require executing AWS CLI commands. Compared to Upgrade Debug Mode, an alternative solution built into AWS CLI version &lt;code&gt;1.44.0&lt;/code&gt; or later, the Migration Tool offers broader compatibility and works independently of your CLI installation. For a thorough comparison between Upgrade Debug Mode and the AWS CLI v1-to-v2 Migration Tool, see &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html#cliv2-migration-choosing-migration-tool" target="_blank" rel="noopener"&gt;Choosing Between Upgrade Debug Mode and AWS CLI v1-to-v2 Migration Tool&lt;/a&gt; in our &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;Migration guide for the AWS CLI version 2&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;In this post, we’ll walk you through using the AWS CLI v1-to-v2 Migration Tool to identify potential breaking changes, resolve compatibility issues, and safely transition your scripts to v2.&lt;/p&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;Before you begin, you’ll need Python 3.9 or later and pip installed on your machine. See the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-migration-tool.html#migration-tool-prerequisites" target="_blank" rel="noopener"&gt;Prerequisites&lt;/a&gt; in &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-migration-tool.html" target="_blank" rel="noopener"&gt;Using AWS CLI v1-to-v2 Migration Tool to upgrade AWS CLI version 1 to AWS CLI version 2&lt;/a&gt; for instructions to install these prerequisites.&lt;/p&gt; 
&lt;h2&gt;Getting Started&lt;/h2&gt; 
&lt;p&gt;You’ll start by installing the AWS CLI v1-to-v2 Migration Tool. Next, you’ll use the tool to analyze bash scripts for AWS CLI v1 commands that may need to be updated before upgrading to AWS CLI v2. Finally, you’ll review the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html" target="_blank" rel="noopener"&gt;AWS CLI v2 breaking changes list&lt;/a&gt; in the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;Migration guide for the AWS CLI version 2&lt;/a&gt; to manually verify whether your workflows may be broken by upgrading, and safely upgrade to AWS CLI v2.&lt;/p&gt; 
&lt;h3&gt;Step 1: Install the AWS CLI v1-to-v2 Migration Tool&lt;/h3&gt; 
&lt;p&gt;See &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-migration-tool.html#migration-tool-installation" target="_blank" rel="noopener"&gt;Installation&lt;/a&gt; in &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-migration-tool.html" target="_blank" rel="noopener"&gt;Using AWS CLI v1-to-v2 Migration Tool to upgrade AWS CLI version 1 to AWS CLI version 2&lt;/a&gt; for instructions to install the AWS CLI v1-to-v2 Migration Tool.&lt;/p&gt; 
&lt;h3&gt;Step 2: Lint a bash script using interactive mode&lt;/h3&gt; 
&lt;p&gt;Next, you’ll run the migration tool in interactive mode. Interactive mode walks you through each flagged command one at a time. For each detection, it suggests a change that gives the command the same behavior in AWS CLI v2.&lt;/p&gt; 
&lt;p&gt;For this blog post, we’ll use the following example bash script, which uses AWS CLI v1 to upload an AWS CloudFormation template to Amazon Simple Storage Service (Amazon S3), copy the template to a backup Amazon S3 bucket, and create a CloudFormation stack from the template.&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;#!/bin/bash
set -e

TEMPLATE="$1"
BUCKET="$2"
BACKUP="$3"
STACK_NAME="$4"

if [ -z "$TEMPLATE" ] || [ -z "$BUCKET" ] || [ -z "$BACKUP" ] || [ -z "$STACK_NAME" ]; then
&amp;nbsp;&amp;nbsp; &amp;nbsp;echo "Usage: $0&amp;nbsp;&amp;lt;template-file&amp;gt; &amp;lt;bucket&amp;gt; &amp;lt;backup-bucket&amp;gt; &amp;lt;stack-name&amp;gt;"
&amp;nbsp;&amp;nbsp; &amp;nbsp;exit 1
fi

TMPKEY="cloudformation/$(basename "$TEMPLATE")"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_KEY="cloudformation/$TIMESTAMP-$(basename "$TEMPLATE")"

# Upload template
aws s3 cp $TEMPLATE s3://$BUCKET/$TMPKEY

# Copy template to backup bucket
aws s3 cp s3://$BUCKET/$TMPKEY s3://$BACKUP/$BACKUP_KEY

# Create a stack from the template
aws cloudformation create-stack \
  --stack-name "$STACK_NAME" \
  --template-body "https://s3.amazonaws.com/$BUCKET/$TMPKEY"

echo "Stack creation initiated. Stack ID: $(
  aws cloudformation describe-stacks \
    --stack-name "$STACK_NAME" \
    --query 'Stacks[0].StackId' \
    --output text \
    --cli-input-json file://describe_stacks_input.json
)"&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;Use the following command to run the migration tool in interactive mode. It analyzes the bash script &lt;code&gt;upload_s3_files.sh&lt;/code&gt;, suggests fixes, and writes the modified script to the path &lt;code&gt;upload_s3_files_v2.sh&lt;/code&gt;. For the sake of demonstration, this blog post does not include every finding that gets detected in the example script:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;$ migrate-aws-cli --script upload_s3_files.sh&amp;nbsp;--output upload_s3_files_v2.sh \
&amp;nbsp;&amp;nbsp;--interactive
&amp;nbsp;&amp;nbsp;
19 19│ aws s3 cp $TEMPLATE s3://$BUCKET/$TMPKEY
20 20│ 
21 21│ # Copy template to backup bucket
22 &amp;nbsp; │-aws s3 cp s3://$BUCKET/$TMPKEY s3://$BACKUP/$BACKUP_KEY
&amp;nbsp;&amp;nbsp; 22│+aws s3 cp s3://$BUCKET/$TMPKEY s3://$BACKUP/$BACKUP_KEY --copy-props none
23 23│ 
24 24│ # Create a stack from the template
25 25│ aws cloudformation create-stack \

script.sh:22 [s3-copy] In AWS CLI v2, object properties will be copied from the 
source in multipart copies between S3 buckets. If a copy is or becomes multipart 
after upgrading to AWS CLI v2, extra API calls will be made. See 
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-s3-copy-metadata." rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-s3-copy-metadata.&lt;/a&gt;

Apply this fix? [y] yes, [n] no, [a] accept all of type, [r] reject all of type, 
[u] update all, [s] save and exit, [q] quit:&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;In the preceding finding, the associated breaking change is &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-s3-copy-metadata" target="_blank" rel="noopener"&gt;Improved Amazon S3 handling of file properties and tags for multipart copies&lt;/a&gt;. The suggested fix, shown in a form similar to a Git diff, is to add the &lt;code&gt;--copy-props none&lt;/code&gt; flag to the command. Adding the suggested flag will preserve AWS CLI v1 behavior in AWS CLI v2.&lt;/p&gt; 
&lt;p&gt;The following output snippet shows another finding:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;16 16│ BACKUP_KEY="cloudformation/$TIMESTAMP-$(basename "$TEMPLATE")"
17 17│ 
18 18│ # Upload template
19   │-aws s3 cp $TEMPLATE s3://$BUCKET/$TMPKEY
   19│+aws s3 cp $TEMPLATE s3://$BUCKET/$TMPKEY --cli-binary-format raw-in-base64-out
20 20│ 
21 21│ # Copy template to backup bucket
22 22│ aws s3 cp "s3://$BUCKET/$TMPKEY" "s3://$BACKUP/$BACKUP_KEY"

examples/upload_s3_files.sh:19 [binary-params-base64] In AWS CLI v2, an input 
parameter typed as binary large object (BLOB) expects the input to be base64-encoded. 
If using a BLOB-type input parameter, retain v1 behavior after upgrading to AWS CLI 
v2 by adding `--cli-binary-format raw-in-base64-out`. See 
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-binaryparam." rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-binaryparam.&lt;/a&gt;

Apply this fix? [y] yes, [n] no, [a] accept all of type, [r] reject all of type, 
[u] update all, [s] save and exit, [q] quit:&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;In the preceding detection, the associated breaking change is that &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-binaryparam" target="_blank" rel="noopener"&gt;Binary parameters are passed as base64-encoded strings by default&lt;/a&gt;. The suggested fix is to add the &lt;code&gt;--cli-binary-format raw-in-base64-out&lt;/code&gt; flag to the command. Adding the suggested flag will preserve AWS CLI v1 behavior in AWS CLI v2.&lt;/p&gt; 
&lt;p&gt;Note that in this particular case, we are not using a binary-type parameter in the &lt;code&gt;aws s3 cp&lt;/code&gt; command. This highlights a core behavior of the migration tool: by design, it errs on the side of caution when detecting potential issues, flagging changes that might be breaking even when uncertain, provided the suggested fix won’t alter the code’s behavior.&lt;/p&gt; 
&lt;p&gt;The following output snippet shows another finding:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;27 27│ &amp;nbsp;--template-body "https://s3.amazonaws.com/$BUCKET/$TEMPLATE_KEY" --cli-binary-format raw-in-base64-out --no-cli-pager
28 28│
29 29│echo "Stack creation initiated. Stack ID: $(
30 30│ &amp;nbsp;aws cloudformation describe-stacks \
31 31│ &amp;nbsp; &amp;nbsp;--stack-name "$STACK_NAME" \
32 32│ &amp;nbsp; &amp;nbsp;--query 'Stacks[0].StackId' \
33 33│ &amp;nbsp; &amp;nbsp;--output text \
34 34│ &amp;nbsp; &amp;nbsp;--cli-input-json file://describe_stacks_input.json --cli-binary-format raw-in-base64-out --no-cli-pager
35 35│)"

examples/upload_s3_files.sh:30 [MANUAL REVIEW REQUIRED] [cli-input-json] In AWS CLI 
v2, specifying pagination parameters via `--cli-input-json` turns off automatic 
pagination. If pagination-related parameters are present in the input JSON specified 
with `--cli-input-json`, remove the pagination parameters from the input JSON to 
retain v1 behavior. See 
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-skeleton-paging." rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-skeleton-paging.&lt;/a&gt;

[n] next, [s] save, [q] quit:&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;In the preceding output, the detected breaking change is&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-skeleton-paging" target="_blank" rel="noopener"&gt;AWS CLI version 2 is more consistent with paging parameters&lt;/a&gt;.&amp;nbsp;The migration tool cannot automatically modify the script in this case, so the detection is flagged with&amp;nbsp;&lt;code&gt;[MANUAL REVIEW REQUIRED]&lt;/code&gt;.&lt;/p&gt; 
&lt;p&gt;For detections that require manual fixes, such as this example, you’ll enter &lt;code&gt;n&lt;/code&gt; and manually address the finding after the migration tool finishes executing.&lt;/p&gt; 
&lt;p&gt;After all detections are displayed, a summary is printed, including the number of issues found and the path to the modified script:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;Found 10 issue(s). 9 fixed. 1 require(s) manual review.
Changes written to: upload_s3_files_v2.sh&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;To resolve the detections that were flagged for manual review, follow the guidance in the suggested actions.&lt;/p&gt; 
&lt;h3&gt;Step 3: Upgrade to AWS CLI v2&lt;/h3&gt; 
&lt;p&gt;Customers are responsible for safely migrating their scripts; using the migration tool does not guarantee that all commands will have the same behavior in AWS CLI v2.&amp;nbsp;To complete a manual review, reference the&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html" target="_blank" rel="noopener"&gt;breaking changes list&lt;/a&gt;&amp;nbsp;in the&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;AWS CLI v2 Migration Guide&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;After going through and applying any required changes identified in the previous steps, you are now ready to upgrade to AWS CLI v2 following the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" target="_blank" rel="noopener"&gt;installation guide&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Step 4: Uninstall migration tool if no longer needed&lt;/h3&gt; 
&lt;p&gt;After migration, you can uninstall the migration tool and remove the original scripts if they are no longer needed.&lt;/p&gt; 
&lt;h2&gt;Important Considerations&lt;/h2&gt; 
&lt;p&gt;The AWS CLI v1-to-v2 Migration Tool uses static analysis to identify most compatibility considerations in your scripts. However, some scenarios—such as parameters stored in variables or determined at runtime—fall outside the tool’s detection scope and require manual review.&lt;/p&gt; 
&lt;p&gt;For more details on the limitations of the AWS CLI v1-to-v2 Migration Tool, see&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-migration-tool.html#migration-tool-limitations" target="_blank" rel="noopener"&gt;Limitations&lt;/a&gt; in &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-migration-tool.html" target="_blank" rel="noopener"&gt;Using AWS CLI v1-to-v2 Migration Tool to upgrade AWS CLI version 1 to AWS CLI version 2&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;We strongly recommend that you review the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html" target="_blank" rel="noopener"&gt;breaking changes list&lt;/a&gt; published in the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;AWS CLI v2 Migration Guide&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this blog post, we showed you how to get started with the new AWS CLI v1-to-v2 Migration Tool to assist your upgrade from AWS CLI v1 to AWS CLI v2. To learn more, visit &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-migration-tool.html" target="_blank" rel="noopener"&gt;Using AWS CLI v1-to-v2 Migration Tool to upgrade AWS CLI version 1 to AWS CLI version 2&lt;/a&gt;. We would love your feedback!&amp;nbsp;You can also open a discussion or issue on &lt;a href="https://github.com/aws/aws-cli/issues/new?template=migration-tool.yml" target="_blank" rel="noopener"&gt;GitHub&lt;/a&gt;. Thank you for using the AWS CLI!&lt;/p&gt; 
&lt;p&gt;Have you encountered challenges migrating from AWS CLI v1 to AWS CLI v2? Share your experience in the comments below.&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Transfer Manager Directory Support for AWS SDK for Ruby</title>
		<link>https://aws.amazon.com/blogs/developer/transfer-manager-directory-support-for-aws-sdk-for-ruby/</link>
					
		
		<dc:creator><![CDATA[Juli Tera]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 14:39:01 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS SDK for Ruby]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Programming Language]]></category>
		<category><![CDATA[Ruby]]></category>
		<category><![CDATA[aws-sdk-ruby]]></category>
		<category><![CDATA[ruby]]></category>
		<category><![CDATA[S3]]></category>
		<guid isPermaLink="false">3104e1d9e1abaf185fccd247465d4f339610df22</guid>

					<description>In this post, we show you how to upload and download directories using Transfer Manager, customize transfers with filtering options, and handle results effectively.</description>
										<content:encoded>&lt;p&gt;Managing bulk file transfer to &lt;a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener noreferrer"&gt;Amazon Simple Storage Service (Amazon S3)&lt;/a&gt; can be complex when transferring directories containing multiple files and subdirectories. &lt;a href="https://aws.amazon.com/sdk-for-ruby/" target="_blank" rel="noopener noreferrer"&gt;AWS SDK for Ruby&lt;/a&gt; Transfer Manager (&lt;code&gt;aws-sdk-s3&lt;/code&gt; version 1.215) now supports directory upload and download. This feature can help streamline bulk transfers by providing multipart handling and parallelism options.&lt;/p&gt; 
&lt;p&gt;Previously, uploading directories to Amazon S3 required manual iteration and handling. You also had to manage multipart uploads for large files and implement parallelism for performance. With directory support in Transfer Manager, you can handle this with a single method call that automates the process. In this post, we show you how to upload and download directories using Transfer Manager, customize transfers with filtering options, and handle results effectively.&lt;/p&gt; 
&lt;h2&gt;Getting started&lt;/h2&gt; 
&lt;p&gt;This support requires &lt;code&gt;aws-sdk-s3&lt;/code&gt; version&amp;nbsp;1.215 or higher. Add &lt;code&gt;aws-sdk-s3&lt;/code&gt;&amp;nbsp;to your Gemfile:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;gem 'aws-sdk-s3', '&amp;gt;= 1.215'&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Initialize the Transfer Manager&lt;/h3&gt; 
&lt;p&gt;To initialize a Transfer Manager with a default S3 client:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;require 'aws-sdk-s3' 
tm = Aws::S3::TransferManager.new&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Alternatively, you can create a custom S3 client to pass to the Transfer Manager:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;client = Aws::S3::Client.new(region: 'us-east-1')
tm = Aws::S3::TransferManager.new(client: client)&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Upload a directory&lt;/h3&gt; 
&lt;p&gt;Upload a local directory to an S3 bucket by providing a source path and bucket name:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;tm.upload_directory('/path/to/directory', bucket: 'my-bucket')&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;By default, only files in the specified directory are uploaded. To include subdirectories, set&amp;nbsp;&lt;code&gt;recursive: true&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;tm.upload_directory('/path/to/directory', bucket: 'my-bucket', recursive: true)&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Download a directory&lt;/h3&gt; 
&lt;p&gt;Download objects from an S3 bucket to a local directory by providing a destination path and bucket name:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;tm.download_directory('/local/path', bucket: 'my-bucket')&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;To download only objects with a specific prefix, set &lt;code&gt;s3_prefix&lt;/code&gt;. The full object key is preserved in the local path. For example, given &lt;code&gt;s3_prefix: 'photos/'&lt;/code&gt;:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Object key: &lt;code&gt;photos/vacation/beach.jpg&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;Resolved local path: &lt;code&gt;/local/path/photos/vacation/beach.jpg&lt;/code&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;tm.download_directory('/local/path', bucket: 'my-bucket', s3_prefix: 'photos/2026/')&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Filtering contents&lt;/h3&gt; 
&lt;p&gt;You can also filter transfers by using &lt;code&gt;filter_callback&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;# Upload only .txt files 
filter = proc { |_path, name| name.end_with?('.txt') } 
tm.upload_directory('/path/to/directory', bucket: 'my-bucket', filter_callback: filter) 

# Download only .jpg files 
filter = proc { |obj| obj.key.end_with?('.jpg') } 
tm.download_directory('/local/path', bucket: 'my-bucket', filter_callback: filter)&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Handling results&lt;/h3&gt; 
&lt;p&gt;On success, both operations return a hash containing completed and failed transfer details:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;result = tm.upload_directory('/path/to/directory', bucket: 'my-bucket') 
# =&amp;gt; { completed_uploads: 7, failed_uploads: 0 }&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;By default, an error raises an exception, which stops the transfer but does not clean up completed transfers. You can set &lt;code&gt;ignore_failure: true&lt;/code&gt;&amp;nbsp;to continue transferring remaining files and see what errors occurred in the results hash.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-ruby"&gt;result = tm.upload_directory(
  '/path/to/directory', 
  bucket: 'my-bucket', 
  ignore_failure: true
)
# =&amp;gt; { completed_uploads: 5, failed_uploads: 2, errors: [...] }&lt;/code&gt;&lt;/pre&gt; 
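&lt;p&gt;Once the transfer finishes, you can inspect the returned hash to report what happened. The following plain-Ruby sketch summarizes such a result; &lt;code&gt;summarize_transfer&lt;/code&gt; is a hypothetical helper, and the string entries in &lt;code&gt;:errors&lt;/code&gt; are illustrative stand-ins for the SDK’s actual error objects:&lt;/p&gt; 

```ruby
# Hypothetical helper: turn a Transfer Manager result hash into report lines.
# The :errors entries here are plain strings for illustration; consult the
# TransferManager API documentation for the real error object shape.
def summarize_transfer(result)
  total = result[:completed_uploads] + result[:failed_uploads]
  lines = ["#{result[:completed_uploads]} of #{total} files transferred"]
  (result[:errors] || []).each do |error|
    lines.push("failed: #{error}")
  end
  lines
end

puts summarize_transfer(
  completed_uploads: 5,
  failed_uploads: 2,
  errors: ['photos/a.jpg: Access Denied', 'photos/b.jpg: timeout']
)
```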
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;Directory upload and download support in the AWS SDK for Ruby Transfer Manager can help streamline bulk S3 transfers with built-in parallelism and multipart handling.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Key takeaways:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Use &lt;a href="https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/TransferManager.html#upload_directory-instance_method" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;upload_directory&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/TransferManager.html#download_directory-instance_method" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;download_directory&lt;/code&gt;&lt;/a&gt; for bulk transfers with a single method call&lt;/li&gt; 
 &lt;li&gt;Customize behavior with options like &lt;code&gt;recursive&lt;/code&gt;, &lt;code&gt;s3_prefix&lt;/code&gt;, and &lt;code&gt;filter_callback&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;Handle errors gracefully with &lt;code&gt;ignore_failure&lt;/code&gt; and inspect results for details&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;These are only a few of the available options. See the &lt;a href="https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/TransferManager.html" target="_blank" rel="noopener noreferrer"&gt;API documentation&lt;/a&gt; for a full list.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Next steps:&lt;/strong&gt;&amp;nbsp;Try implementing directory transfers in your applications and explore other Transfer Manager features like &lt;a href="https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/TransferManager.html#upload_file-instance_method" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;upload_file&lt;/code&gt;&lt;/a&gt; and&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/TransferManager.html#download_file-instance_method" target="_blank" rel="noopener noreferrer"&gt;&lt;code&gt;download_file&lt;/code&gt;&lt;/a&gt; for single-object transfers.&lt;/p&gt; 
&lt;p&gt;Share your questions, comments, and issues with us on &lt;a href="https://github.com/aws/aws-sdk-ruby" target="_blank" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt; 
&lt;hr&gt; 
&lt;h2&gt;About the author&lt;/h2&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Upgrade AWS CLI from v1 to v2 using upgrade debug mode</title>
		<link>https://aws.amazon.com/blogs/developer/upgrade-aws-cli-from-v1-to-v2-using-upgrade-debug-mode/</link>
					
		
		<dc:creator><![CDATA[Ahmed Moustafa]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 15:00:04 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<guid isPermaLink="false">7cf8b669e184d3d596423c097dd22d90e68427ef</guid>

					<description>Upgrading from 
&lt;a href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-welcome.html"&gt;AWS Command Line Interface (AWS CLI) v1&lt;/a&gt; to 
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html"&gt;AWS CLI v2&lt;/a&gt; can be challenging and time-consuming due to changes introduced in AWS CLI v2 that can potentially break your existing workflows. If you don’t properly address breaking changes in your scripts or workflows, then executing these workflows after upgrading to AWS CLI v2 may result in unintended consequences, such as failing commands or misconfiguring resources in your AWS account.</description>
										<content:encoded>&lt;p&gt;Upgrading from &lt;a href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-welcome.html" target="_blank" rel="noopener"&gt;AWS Command Line Interface (AWS CLI) v1&lt;/a&gt; to &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" target="_blank" rel="noopener"&gt;AWS CLI v2&lt;/a&gt; can be challenging and time-consuming due to changes introduced in AWS CLI v2 that can potentially break your existing workflows. If you don’t properly address breaking changes in your scripts or workflows, then executing these workflows after upgrading to AWS CLI v2 may result in unintended consequences, such as failing commands or misconfiguring resources in your AWS account.&lt;/p&gt; 
&lt;p&gt;AWS CLI v1’s upgrade debug mode helps you identify and resolve these issues before upgrading, for a safer, smoother transition. This mode detects usage of features in AWS CLI v1 that have been updated with breaking changes in AWS CLI v2, and outputs a warning for each detection.&lt;/p&gt; 
&lt;p&gt;In this post, we’ll walk you through using AWS CLI v1’s upgrade debug mode to identify potential breaking changes, resolve compatibility issues, and safely transition your workflows to v2.&lt;/p&gt; 
&lt;h2&gt;Getting Started&lt;/h2&gt; 
&lt;p&gt;You’ll start by verifying you have the correct version of AWS CLI v1 to use upgrade debug mode, then use this mode to test commands in AWS CLI v1 for usage of features that were updated with breaking changes in AWS CLI v2. Next, you’ll review the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html" target="_blank" rel="noopener"&gt;AWS CLI v2 breaking changes list&lt;/a&gt; in the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;Migration guide for the AWS CLI version 2&lt;/a&gt; to manually verify whether your workflows may be broken by upgrading. Finally, you’ll follow guidance to mitigate breaking your workflows and safely upgrade to AWS CLI v2.&lt;/p&gt; 
&lt;h3&gt;AWS CLI v1&lt;/h3&gt; 
&lt;p&gt;The following steps walk you through using upgrade debug mode to identify potential breaking changes in your existing AWS CLI v1 usage, resolve compatibility issues, and safely transition to AWS CLI v2.&lt;/p&gt; 
&lt;h4&gt;Step 1: Verify you are using AWS CLI v1 version 1.44.0 or higher&lt;/h4&gt; 
&lt;p&gt;We released the upgrade debug mode feature to the AWS CLI in version 1.44.0.&lt;/p&gt; 
&lt;p&gt;Using AWS CLI v1, run &lt;code&gt;aws --version&lt;/code&gt;, and verify that the AWS CLI version is 1.44.0 or higher.&lt;/p&gt; 
&lt;p&gt;If the version is older than 1.44.0, see our &lt;a href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html" target="_blank" rel="noopener"&gt;Developer Guide&lt;/a&gt; for instructions to update to a later version.&lt;/p&gt; 
&lt;h4&gt;Step 2: Test your AWS CLI v1 usage with AWS CLI upgrade debug mode&lt;/h4&gt; 
&lt;p&gt;Set the &lt;code&gt;AWS_CLI_UPGRADE_DEBUG_MODE&lt;/code&gt; environment variable to &lt;code&gt;true&lt;/code&gt; to detect usage of features broken in AWS CLI v2. Alternatively, you can enable this functionality at the command-level using the &lt;code&gt;--v2-debug&lt;/code&gt; command line option. If you are upgrading the AWS CLI in existing scripts or workflows to use v2, we recommend testing each AWS CLI command used with this functionality enabled before upgrading them to use AWS CLI v2.&lt;/p&gt; 
&lt;p&gt;We recommend performing this step in the same environment that you will upgrade to use AWS CLI v2, since the execution environment determines whether commands will experience breaking changes.&lt;/p&gt; 
&lt;p&gt;For example, suppose you have a script that executes the AWS CLI command below:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;aws secretsmanager update-secret --secret-id SECRET-NAME \
  --secret-binary file://BINARY-SECRET.json
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Execute the command with the &lt;code&gt;AWS_CLI_UPGRADE_DEBUG_MODE&lt;/code&gt; environment variable set to &lt;code&gt;true&lt;/code&gt;—or with the &lt;code&gt;--v2-debug&lt;/code&gt; flag—and check the output for the text “AWS CLI v2 UPGRADE WARNING”. Example output with the environment variable configured is shown below:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;$ aws secretsmanager update-secret --secret-id SECRET-NAME \
  --secret-binary file://BINARY-SECRET.json

AWS CLI v2 UPGRADE WARNING: When specifying a blob-type parameter, AWS CLI v2 will 
assume the parameter value is base64-encoded. This is different from v1 behavior, 
where the AWS CLI will automatically encode the value to base64. To retain v1 
behavior in AWS CLI v2, set the `cli_binary_format` configuration variable to 
`raw-in-base64-out`. See 
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-binaryparam." rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-binaryparam.&lt;/a&gt;

{
    "ARN": "ARN",
    "Name": "SECRET-NAME",
    "VersionId": "VERSION-ID"
}
&lt;/code&gt;&lt;/pre&gt; 
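&lt;p&gt;If a script contains many commands, you can wrap each invocation so that any upgrade warning is surfaced as a failing exit status. The following is a sketch; &lt;code&gt;run_with_v2_check&lt;/code&gt; is a hypothetical helper and not part of the AWS CLI:&lt;/p&gt; 

```shell
# Hypothetical wrapper: run a command with upgrade debug mode enabled and
# return a non-zero status if an upgrade warning appears in its output.
# Note: warnings may be printed to stderr; merge stderr into stdout in
# your own scripts if your commands emit them there.
run_with_v2_check() {
  output=$(AWS_CLI_UPGRADE_DEBUG_MODE=true "$@")
  printf '%s\n' "$output"
  if printf '%s' "$output" | grep -q 'AWS CLI v2 UPGRADE WARNING'; then
    return 1
  fi
}
```

&lt;p&gt;For example, &lt;code&gt;run_with_v2_check aws secretsmanager update-secret --secret-id SECRET-NAME --secret-binary file://BINARY-SECRET.json&lt;/code&gt; exits non-zero when a warning is detected, so a test harness can fail fast on commands that still need attention before the upgrade.&lt;/p&gt; 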
&lt;h4&gt;Step 3: Use the warnings to prepare for AWS CLI v2&lt;/h4&gt; 
&lt;p&gt;If breaking changes were detected in step 2, the warnings provide guidance for preparing for the AWS CLI v2 upgrade. Some breaking changes can be mitigated prior to upgrading to AWS CLI v2 by modifying the command or execution environment; the warnings identified in step 2 include links to our &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html" target="_blank" rel="noopener"&gt;AWS CLI v2 breaking changes list&lt;/a&gt; that details options to mitigate the breakage.&lt;/p&gt; 
&lt;p&gt;In the previous example, the warning explains that AWS CLI v2 will assume that the contents of &lt;code&gt;BINARY-SECRET.json&lt;/code&gt; are base64-encoded.&lt;/p&gt; 
&lt;p&gt;Following the instructions in the warning, you’ll set the &lt;code&gt;cli_binary_format&lt;/code&gt; variable to &lt;code&gt;raw-in-base64-out&lt;/code&gt; in the &lt;a href="https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html" target="_blank" rel="noopener"&gt;configuration file&lt;/a&gt;. Even though &lt;code&gt;cli_binary_format&lt;/code&gt; is not a valid configuration setting in AWS CLI v1, setting it now prepares your environment by configuring AWS CLI v2 to retain the same behavior as AWS CLI v1 after you upgrade.&lt;/p&gt; 
&lt;p&gt;You’ll configure &lt;code&gt;cli_binary_format&lt;/code&gt; according to the instructions using the following command:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;aws configure set cli_binary_format raw-in-base64-out
&lt;/code&gt;&lt;/pre&gt; 
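&lt;p&gt;If you use the default profile, this command writes the setting to your shared configuration file (typically &lt;code&gt;~/.aws/config&lt;/code&gt;), producing an entry like the following:&lt;/p&gt; 

```ini
[default]
cli_binary_format = raw-in-base64-out
```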
&lt;h4&gt;Step 4: Verify resolution of warnings&lt;/h4&gt; 
&lt;p&gt;For breaking changes mitigated in step 3, you’ll re-run the command to verify the warning is no longer printed.&lt;/p&gt; 
&lt;p&gt;Proceeding with the example, you configured the &lt;code&gt;cli_binary_format&lt;/code&gt; variable to &lt;code&gt;raw-in-base64-out&lt;/code&gt; in step 3. You’ll now re-run the command to verify the warning is resolved:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;aws secretsmanager update-secret --secret-id SECRET-NAME \
    --secret-binary file://BINARY-SECRET.json 
{
    "ARN": "ARN",
    "Name": "SECRET-NAME",
    "VersionId": "VERSION-ID"
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The warning is no longer printed, signaling that this command is now compatible with AWS CLI v2.&lt;/p&gt; 
&lt;p&gt;If you used the &lt;code&gt;--v2-debug&lt;/code&gt; argument instead of the &lt;code&gt;AWS_CLI_UPGRADE_DEBUG_MODE&lt;/code&gt; environment variable in step 2, remember to remove the flag from the command before upgrading to version 2.&lt;/p&gt; 
&lt;h4&gt;Step 5: Manually review for breaking changes&lt;/h4&gt; 
&lt;p&gt;After using upgrade debug mode to automatically detect usage of features that were updated with breaking changes, you will now manually review your AWS CLI usage by reviewing our &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html" target="_blank" rel="noopener"&gt;breaking changes list&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;AWS CLI v2 Migration Guide&lt;/a&gt;.&lt;/p&gt; 
&lt;h4&gt;Step 6: Upgrade to AWS CLI v2&lt;/h4&gt; 
&lt;p&gt;After preparing for the breaking changes identified in the previous steps, you will now upgrade to AWS CLI v2 following the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" target="_blank" rel="noopener"&gt;installation guide&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Limitations&lt;/h2&gt; 
&lt;p&gt;The upgrade debug mode feature does not currently support every breaking change introduced with AWS CLI v2, and it can produce false positives, issuing a warning even when no breaking change is actually present.&lt;/p&gt; 
&lt;p&gt;Additionally, some detections depend on API responses and on the execution environment running the AWS CLI. For this reason, we recommend running this feature against an AWS account and execution environment that reflect your production workflows as closely as possible.&lt;/p&gt; 
&lt;p&gt;For more details on the limitations of upgrade debug mode, see &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-upgrade-debug-mode.html" target="_blank" rel="noopener"&gt;Using upgrade debug mode to upgrade AWS CLI version 1 to AWS CLI version 2&lt;/a&gt; in &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;Migration guide for the AWS CLI version 2&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;We strongly recommend that you review the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html" target="_blank" rel="noopener"&gt;breaking changes list&lt;/a&gt; published in the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;AWS CLI v2 Migration Guide&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;The only breaking change not supported by the upgrade debug mode is that &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html#cliv2-migration-return-codes" target="_blank" rel="noopener"&gt;AWS CLI version 2 provides more consistent return codes across commands&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this blog post, we showed you how to get started with the new upgrade debug mode. If you’re interested in using this feature to assist your upgrade from AWS CLI v1 to AWS CLI v2, try out upgrade debug mode. To learn more, visit &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-upgrade-debug-mode.html" target="_blank" rel="noopener"&gt;Using upgrade debug mode to upgrade AWS CLI version 1 to AWS CLI version 2&lt;/a&gt; in our &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener"&gt;AWS CLI v2 Migration Guide&lt;/a&gt;. We would love your feedback! You can reach out to us by creating a &lt;a href="https://github.com/aws/aws-cli/issues/new/choose" target="_blank" rel="noopener"&gt;GitHub Issue&lt;/a&gt;.&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>AWS SDK for .NET V3 Maintenance Mode Announcement</title>
		<link>https://aws.amazon.com/blogs/developer/aws-sdk-for-net-v3-maintenance-mode-announcement/</link>
					
		
		<dc:creator><![CDATA[Muhammad Othman]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 19:35:48 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS SDK for .NET]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[.NET]]></category>
		<guid isPermaLink="false">5c2a8526dc1e5d73c7463cbd874d882ce5baf4e8</guid>

					<description>In alignment with our 
&lt;a href="https://aws.amazon.com/blogs/developer/general-availability-of-aws-sdk-for-net-v4-0/" target="_blank" rel="noopener noreferrer"&gt;V4.0 GA announcement&lt;/a&gt; and 
&lt;a href="https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html" target="_blank" rel="noopener noreferrer"&gt;SDKs and Tools Maintenance Policy&lt;/a&gt;, version 3 of the 
&lt;a href="https://aws.amazon.com/sdk-for-net/" target="_blank" rel="noopener noreferrer"&gt;AWS SDK for .NET&lt;/a&gt; will enter maintenance mode on March 1, 2026, and reach end-of-support on June 1, 2026. Starting March 1, 2026 we will stop adding regular updates to V3 and will only provide security updates until end-of-support begins.</description>
										<content:encoded>&lt;p&gt;In alignment with our &lt;a href="https://aws.amazon.com/blogs/developer/general-availability-of-aws-sdk-for-net-v4-0/" target="_blank" rel="noopener noreferrer"&gt;V4.0 GA announcement&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html" target="_blank" rel="noopener noreferrer"&gt;SDKs and Tools Maintenance Policy&lt;/a&gt;, version 3 of the &lt;a href="https://aws.amazon.com/sdk-for-net/" target="_blank" rel="noopener noreferrer"&gt;AWS SDK for .NET&lt;/a&gt; will enter maintenance mode on March 1, 2026, and reach end-of-support on June 1, 2026. Starting March 1, 2026 we will stop adding regular updates to V3 and will only provide security updates until end-of-support begins.&lt;/p&gt; 
&lt;h2&gt;Support Timeline&lt;/h2&gt; 
&lt;p&gt;When we announced the general availability of AWS SDK for .NET V4 on April 28, 2025, we committed to a support timeline tied to the &lt;a href="https://aws.amazon.com/powershell/" target="_blank" rel="noopener noreferrer"&gt;AWS Tools for PowerShell&lt;/a&gt;, which depends on the SDK. With AWS Tools for PowerShell V5 reaching &lt;a href="https://aws.amazon.com/blogs/developer/aws-tools-for-powershell-v5-now-generally-available/" target="_blank" rel="noopener noreferrer"&gt;general availability in August 2025&lt;/a&gt;, the 6-month support window for V3 began. For more details on the original support commitment, see the&amp;nbsp;&lt;a href="https://aws.amazon.com/blogs/developer/general-availability-of-aws-sdk-for-net-v4-0/" target="_blank" rel="noopener noreferrer"&gt;V4.0 GA announcement&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;The following table outlines the level of support for each phase of the SDK lifecycle.&lt;/p&gt; 
&lt;table class="styled-table" border="1px" cellpadding="10px"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;SDK Lifecycle Phase&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;Start Date&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;End Date&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;Support Level&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;General Availability&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;July&amp;nbsp;28,&amp;nbsp;2015&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;February 28, 2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;During this phase, the SDK is fully supported. AWS will provide regular SDK releases that include support for new services, API updates for existing services, as well as bug and security fixes.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;Maintenance Mode&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;March 1, 2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;May 31, 2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;During the maintenance mode, AWS will limit SDK releases to address critical bug fixes and security issues only. AWS SDK for .NET v3.x will not receive API updates for new or existing services or be updated to support new regions.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;End-of-Support&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;June 1, 2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;N/A&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;AWS SDK for .NET v3.x will no longer receive updates or releases. Previously published releases will continue to be available via public package managers and the code will remain on GitHub.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;Next Steps&lt;/h2&gt; 
&lt;p&gt;We encourage the AWS SDK for .NET community to begin planning your migration to V4 as soon as possible:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Review the &lt;a href="https://docs.aws.amazon.com/sdk-for-net/v4/developer-guide/net-dg-v4.html" target="_blank" rel="noopener noreferrer"&gt;Migration Guide&lt;/a&gt; to understand the breaking changes.&lt;/li&gt; 
 &lt;li&gt;Test your applications with V4 in a development environment.&lt;/li&gt; 
 &lt;li&gt;Update your code to accommodate the changes.&lt;/li&gt; 
 &lt;li&gt;Provide feedback through our &lt;a href="https://github.com/aws/aws-sdk-net" target="_blank" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;With the maintenance mode transition now in effect and end of support on June 1, 2026, we recommend prioritizing your migration planning to ensure a smooth transition. &lt;a href="https://docs.aws.amazon.com/sdk-for-net/v4/developer-guide/net-dg-v4.html" target="_blank" rel="noopener noreferrer"&gt;Migration documentation&lt;/a&gt; is available to guide you through the update process.&lt;/p&gt; 
&lt;p&gt;For questions or issues that arise while updating to the V4 SDK, please use the GitHub repository’s &lt;a href="https://github.com/aws/aws-sdk-net/discussions" target="_blank" rel="noopener noreferrer"&gt;discussion forums&lt;/a&gt; or open GitHub &lt;a href="https://github.com/aws/aws-sdk-net/issues" target="_blank" rel="noopener noreferrer"&gt;issues&lt;/a&gt; to reach out to us. If you find dependencies that are preventing you from updating to V4, please let us know so that we can help.&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>GA Release of the AWS SDK Java 2.x HTTP Client built on Apache HttpClient 5.6</title>
		<link>https://aws.amazon.com/blogs/developer/ga-release-of-the-aws-sdk-java-2-x-http-client-built-on-apache-httpclient-5-6/</link>
					
		
		<dc:creator><![CDATA[Dongie Agnir]]></dc:creator>
		<pubDate>Mon, 02 Mar 2026 19:24:38 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS SDK for Java]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Foundational (100)]]></category>
		<guid isPermaLink="false">cd16503d04b534a948f559b24d262a026282ef04</guid>

					<description>In this post, you'll learn how to add the Apache 5 HTTP client to your project, configure it for your needs, and migrate from the 4.5.x version.</description>
										<content:encoded>&lt;p&gt;If you’re using the Apache HttpClient 4.5.x with the&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/home.html" target="_blank" rel="noopener noreferrer"&gt;AWS SDK for Java 2.x&lt;/a&gt;, you may have encountered dependency alerts for Jakarta Commons Logging (JCL) dependencies, concerns about long-term maintenance support, or compatibility issues with Java 21’s virtual threads. The new Apache 5 HTTP client solves these problems.&lt;/p&gt; 
&lt;p&gt;In this post, you’ll learn how to add the Apache 5 HTTP client to your project, configure it for your needs, and migrate from the 4.5.x version.&lt;/p&gt; 
&lt;h3&gt;What’s new&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Modern logging&lt;/strong&gt;: Replaces the outdated JCL dependency with Simple Logging Facade for Java (SLF4J), giving you better compatibility with modern logging frameworks like Logback and Log4j2&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Virtual thread support&lt;/strong&gt;: Full compatibility with Java 21’s virtual threads for improved concurrency&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Active maintenance&lt;/strong&gt;: Apache HttpClient 5.x receives regular security updates and bug fixes&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;This client is available alongside the existing SDK HTTP clients: Apache HttpClient 4.5.x, Netty, URL Connection, and AWS CRT HTTP client. The new &lt;code&gt;apache5-client&lt;/code&gt; Maven artifact lets you use both Apache versions in the same project without conflicts.&lt;/p&gt; 
&lt;h2&gt;Getting started&lt;/h2&gt; 
&lt;p&gt;Using the Apache 5 HTTP client in your SDK can require as little as a single step. If you’re coming from the Apache 4 client and want to set specific configurations, the new Apache 5 client offers identical options.&lt;/p&gt; 
&lt;h3&gt;Add the Apache 5&amp;nbsp;client dependency&lt;/h3&gt; 
&lt;p&gt;To begin using the&amp;nbsp;Apache 5 HTTP client implementation, add the following dependency to your project:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-html"&gt;&amp;lt;dependency&amp;gt;
&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;lt;groupId&amp;gt;software.amazon.awssdk&amp;lt;/groupId&amp;gt;
&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;lt;artifactId&amp;gt;apache5-client&amp;lt;/artifactId&amp;gt;
&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;lt;version&amp;gt;2.41.26&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;If you want to use the default configuration, you do not need to configure anything else; your sync clients will use the Apache 5 client under the hood.&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;S3Client s3Client = S3Client.create();&lt;/code&gt;&lt;/p&gt; 
&lt;h3&gt;Advanced configuration example&lt;/h3&gt; 
&lt;p&gt;If you want to configure certain options on the client, then just like with all of our clients, you can use &lt;code&gt;Apache5HttpClient.builder()&lt;/code&gt; to obtain a builder and set the options you need:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-java"&gt;Apache5HttpClient httpClient = Apache5HttpClient.builder()
&amp;nbsp;&amp;nbsp; &amp;nbsp;.connectionTimeout(Duration.ofSeconds(30))
&amp;nbsp;&amp;nbsp; &amp;nbsp;.maxConnections(100)
&amp;nbsp;&amp;nbsp; &amp;nbsp;.build();

DynamoDbClient dynamoDbClient = DynamoDbClient.builder()
&amp;nbsp;&amp;nbsp; &amp;nbsp;.httpClient(httpClient)
&amp;nbsp;&amp;nbsp; &amp;nbsp;.build();&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;h3&gt;Migrate from Apache 4.5.x&lt;/h3&gt; 
&lt;p&gt;If you’re using the default Apache HTTP client, migration is straightforward:&lt;/p&gt; 
&lt;ol&gt; 
  &lt;li&gt;Add the &lt;code&gt;apache5-client&lt;/code&gt; dependency shown above&lt;/li&gt; 
  &lt;li&gt;Update any explicit &lt;code&gt;ApacheHttpClient&lt;/code&gt; references to &lt;code&gt;Apache5HttpClient&lt;/code&gt;&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;The API remains consistent with other HTTP client implementations in the SDK. Note that like the 4.5.x client, this implementation supports synchronous service clients only.&lt;/p&gt; 
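&lt;p&gt;As a starting point for step 2, a simple textual rename is often enough. The following is a minimal sketch, assuming GNU &lt;code&gt;sed&lt;/code&gt; and demonstrated on a scratch file rather than a real source tree; review the resulting diff before committing, since import statements for the Apache 4 package also need to move to the Apache 5 package.&lt;/p&gt;

```shell
# Illustrative rename of ApacheHttpClient references to Apache5HttpClient.
# Demonstrated on a scratch directory; point the grep at your own src/ tree.
workdir=$(mktemp -d)
printf 'SdkHttpClient client = ApacheHttpClient.builder().build();\n' > "$workdir/Example.java"

# Find files mentioning the old class name and rewrite them in place.
grep -rl 'ApacheHttpClient' "$workdir" | while read -r f; do
  sed -i 's/ApacheHttpClient/Apache5HttpClient/g' "$f"
done

cat "$workdir/Example.java"
# prints: SdkHttpClient client = Apache5HttpClient.builder().build();
```

&lt;p&gt;Because &lt;code&gt;Apache5HttpClient&lt;/code&gt; does not itself contain the substring &lt;code&gt;ApacheHttpClient&lt;/code&gt;, re-running the substitution is harmless.&lt;/p&gt;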
&lt;h3&gt;Cleaning up&lt;/h3&gt; 
&lt;p&gt;When you’re done with a service client, close it to release resources:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-java"&gt;
s3Client.close();
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;If you created a shared &lt;code&gt;Apache5HttpClient&lt;/code&gt; instance, close it separately after closing the service clients that use it.&lt;/p&gt; 
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;In this blog post, we showed you how to get started with the new Apache 5 HTTP client in the AWS SDK for Java 2.x, which uses Apache HttpClient 5.6.x. Please share your experience and any feature requests by opening a &lt;a href="https://github.com/aws/aws-sdk-java-v2/issues" target="_blank" rel="noopener noreferrer"&gt;GitHub issue&lt;/a&gt;.&lt;/p&gt; 
&lt;hr style="width: 80%"&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Announcing new output formats in AWS CLI v2</title>
		<link>https://aws.amazon.com/blogs/developer/announcing-new-output-formats-in-aws-cli-v2/</link>
					
		
		<dc:creator><![CDATA[Andrew Asseily]]></dc:creator>
		<pubDate>Sat, 28 Feb 2026 02:22:30 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<guid isPermaLink="false">10f6905f9a3db0573751a626e198c250357ed74f</guid>

					<description>Amazon Web Services (AWS) is announcing two new features for the AWS Command Line Interface (AWS CLI) v2: structured error output and the “off” output format.</description>
										<content:encoded>&lt;p&gt;Amazon Web Services (AWS) is announcing two new features for the &lt;a href="https://aws.amazon.com/cli/" target="_blank" rel="noopener"&gt;AWS Command Line Interface (AWS CLI) v2&lt;/a&gt;: structured error output and the “off” output format.&lt;/p&gt; 
&lt;h2&gt;Structured error output&lt;/h2&gt; 
&lt;p&gt;Errors returned from AWS service APIs often include useful details beyond the code and message—bucket names, validation reasons, resource IDs—that were previously hidden unless you used &lt;code&gt;--debug&lt;/code&gt;. Now, you can see this error information directly in your error output.&lt;/p&gt; 
&lt;p&gt;Starting with AWS CLI v2 version 2.34.0, any additional error details returned from service APIs will now be shown in the stderr output. Additionally, you can configure the AWS CLI to output your errors in alternative structured formats. Control how errors are displayed using the new &lt;code&gt;--cli-error-format&lt;/code&gt; &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-configuring" target="_blank" rel="noopener"&gt;CLI flag&lt;/a&gt;, the &lt;code&gt;cli_error_format&lt;/code&gt; &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-configuring" target="_blank" rel="noopener"&gt;configuration setting&lt;/a&gt;, or the &lt;code&gt;AWS_CLI_ERROR_FORMAT&lt;/code&gt; &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-configuring" target="_blank" rel="noopener"&gt;environment variable&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;Supported formats for the error format parameter:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-enhanced" target="_blank" rel="noopener"&gt;enhanced&lt;/a&gt; (default) – Error message with additional details displayed inline&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-json" target="_blank" rel="noopener"&gt;json&lt;/a&gt; – JSON structure with all error fields&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-yaml" target="_blank" rel="noopener"&gt;yaml&lt;/a&gt; – YAML structure with all error fields&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-text" target="_blank" rel="noopener"&gt;text&lt;/a&gt; – Tab-delimited error fields&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-table" target="_blank" rel="noopener"&gt;table&lt;/a&gt; – ASCII table format&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html#cli-error-format-legacy" target="_blank" rel="noopener"&gt;legacy&lt;/a&gt; – Original error format&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Accessibility enhancements&lt;/h3&gt; 
&lt;p&gt;Since September 2025, AWS CLI errors have included the &lt;code&gt;aws: [ERROR]:&lt;/code&gt; prefix for some exceptions. This prefix signals that an error has occurred and supports accessibility best practices and automation use cases. This release ensures the prefix is consistently included for all errors in the enhanced and legacy formats.&lt;/p&gt; 
&lt;h3&gt;Example: Using enhanced output error format&lt;/h3&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;$ aws s3api get-object \
    --bucket amzn-s3-demo-bucket \
    --key file.txt \
    out.txt \
    --cli-error-format enhanced

aws: [ERROR]: An error occurred (NoSuchBucket) when calling the GetObject operation: The specified bucket does not exist

Additional error details:

BucketName: amzn-s3-demo-bucket&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;h3&gt;Example: Using json output error format&lt;/h3&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;$ aws s3api get-object \
    --bucket amzn-s3-demo-bucket \
    --key file.txt \
    out.txt \
    --cli-error-format json
{
  "Code": "NoSuchBucket",
  "Message": "The specified bucket does not exist",
  "BucketName": "amzn-s3-demo-bucket"
}&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
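&lt;p&gt;For automation, the &lt;code&gt;json&lt;/code&gt; format makes it straightforward to act on specific error fields. The following is a minimal sketch, assuming the error JSON has been captured from stderr (for example with &lt;code&gt;2&amp;gt;&amp;amp;1&lt;/code&gt; redirection and stdout discarded) and using only &lt;code&gt;sed&lt;/code&gt; so there is no dependency on &lt;code&gt;jq&lt;/code&gt;:&lt;/p&gt;

```shell
# Extract the error code from a structured CLI error.
# The sample below mirrors the JSON error shown above; the sed expression is
# deliberately simple and assumes a single "Code" field per error object.
err='{
  "Code": "NoSuchBucket",
  "Message": "The specified bucket does not exist",
  "BucketName": "amzn-s3-demo-bucket"
}'

code=$(printf '%s\n' "$err" | sed -n 's/.*"Code": "\([^"]*\)".*/\1/p')
echo "$code"
# prints: NoSuchBucket
```

&lt;p&gt;In a script you could then branch on &lt;code&gt;$code&lt;/code&gt;, for example creating the bucket when the code is &lt;code&gt;NoSuchBucket&lt;/code&gt;.&lt;/p&gt;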
&lt;h3&gt;Example: Using legacy output error format&lt;/h3&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;$ aws s3api get-object \
    --bucket amzn-s3-demo-bucket \
    --key file.txt \
    out.txt \
    --cli-error-format legacy

aws: [ERROR]: An error occurred (NoSuchBucket) when calling the GetObject operation: The specified bucket does not exist&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;h2&gt;Turning off CLI output&lt;/h2&gt; 
&lt;p&gt;Sometimes, you might want to hide the AWS CLI command output, such as when using a command that may output sensitive information. The &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output-format.html#off-output" target="_blank" rel="noopener"&gt;off format&lt;/a&gt; suppresses stdout while still preserving errors on stderr.&lt;/p&gt; 
&lt;p&gt;For example, you can &lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html" target="_blank" rel="noopener"&gt;create an AWS Secrets Manager secret&lt;/a&gt; without writing the secret ARN or version information to logs:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-bash"&gt;$ aws secretsmanager create-secret \
    --name my-secret \
    --secret-string "password123" \
    --output off
$ echo $?
0&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;You can set this using the &lt;code&gt;--output off&lt;/code&gt; &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output-format.html#cli-usage-output-format-how" target="_blank" rel="noopener"&gt;CLI flag&lt;/a&gt;, the &lt;code&gt;output = off&lt;/code&gt; setting in your &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output-format.html#cli-usage-output-format-how" target="_blank" rel="noopener"&gt;configuration file&lt;/a&gt;, or the &lt;code&gt;AWS_DEFAULT_OUTPUT=off&lt;/code&gt; &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output-format.html#cli-usage-output-format-how" target="_blank" rel="noopener"&gt;environment variable&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Next Steps&lt;/h3&gt; 
&lt;p&gt;To take advantage of these new output features, upgrade your AWS CLI to version 2.34.0 or later. For more information, see the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-error-format.html" target="_blank" rel="noopener"&gt;Structured error output&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output-format.html" target="_blank" rel="noopener"&gt;Output format&lt;/a&gt; guides. Please share your questions, comments, and issues with us on &lt;a href="https://github.com/aws/aws-cli/tree/v2" target="_blank" rel="noopener"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>AWS Tools for PowerShell v4 Maintenance Mode Announcement</title>
		<link>https://aws.amazon.com/blogs/developer/aws-tools-for-powershell-v4-maintenance-mode-announcement/</link>
					
		
		<dc:creator><![CDATA[Sanket Tangade]]></dc:creator>
		<pubDate>Fri, 27 Feb 2026 18:25:37 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS Tools for PowerShell]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[PowerShell]]></category>
		<guid isPermaLink="false">923b452465494c037a84d5d0d1de80732d6b706c</guid>

					<description>In alignment with our previous announcement in August 2025 and SDKs and Tools Maintenance Policy, version 4 of the AWS Tools for PowerShell (AWS Tools for PowerShell v4.x) will enter maintenance mode on March 1, 2026 and reach end-of-support on June 1, 2026.</description>
										<content:encoded>&lt;p&gt;In alignment with our previous &lt;a href="https://aws.amazon.com/blogs/devops/announcing-the-end-of-support-for-aws-tools-for-powershell-v4/" target="_blank" rel="noopener noreferrer"&gt;announcement in August 2025&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html" target="_blank" rel="noopener noreferrer"&gt;SDKs and Tools Maintenance Policy&lt;/a&gt;, version 4 of the &lt;a href="https://aws.amazon.com/powershell/" target="_blank" rel="noopener noreferrer"&gt;AWS Tools for PowerShell&lt;/a&gt; (AWS Tools for PowerShell v4.x) will enter maintenance mode on March 1, 2026 and reach end-of-support on June 1, 2026.&lt;/p&gt; 
&lt;p&gt;Beginning March 1, 2026, AWS Tools for PowerShell v4.x will enter maintenance mode and will only receive critical bug fixes and security updates. We will not update it to support new AWS services, new service features, or changes to existing services. Existing applications that use AWS Tools for PowerShell v4.x will continue to function as intended unless there is a fundamental change to how an AWS service works. This is uncommon, and we will announce it broadly if it happens. After June 1, 2026, when AWS Tools for PowerShell v4.x reaches end-of-support, it will no longer receive any updates or releases.&lt;/p&gt; 
&lt;h2&gt;End of Support Timeline for Version 4&lt;/h2&gt; 
&lt;p&gt;The following table outlines the level of support for each phase of the SDK lifecycle.&lt;/p&gt; 
&lt;table class="styled-table" border="1px" cellpadding="6px"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;SDK Lifecycle Phase&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;Start Date&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;End Date&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;Support Level&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;General Availability&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;July 28, 2015&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;February 28, 2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;During this phase, the SDK is fully supported. AWS will provide regular SDK releases that include support for new services, API updates for existing services, as well as bug and security fixes.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;Maintenance Mode&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;March 1, 2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;May 31, 2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;During the maintenance mode, AWS will limit releases to address critical bug fixes and security issues only. AWS Tools for PowerShell v4.x will not receive API updates for new or existing services or be updated to support new regions.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;End-of-Support&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;June 1, 2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;N/A&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;AWS Tools for PowerShell v4.x will no longer receive updates or releases. Previously published releases will continue to be available via public package managers and the code will remain on GitHub.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;We recommend upgrading to the latest major version, AWS Tools for PowerShell v5.x, by using the &lt;a href="https://docs.aws.amazon.com/powershell/v5/userguide/migrating-v5.html" target="_blank" rel="noopener noreferrer"&gt;migration guide&lt;/a&gt;. This major version delivers performance enhancements, bug fixes, modern .NET libraries and frameworks, and the latest AWS service updates, among other improvements. Upgrading enables you to take advantage of the latest services and innovations from AWS.&lt;/p&gt; 
&lt;p&gt;To learn more, refer to the following resources:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;The AWS Tools for PowerShell &lt;a href="https://aws.amazon.com/powershell/" target="_blank" rel="noopener noreferrer"&gt;landing page&lt;/a&gt; contains links to the getting started guide, key features, examples, and additional resources.&lt;/li&gt; 
 &lt;li&gt;The &lt;a href="https://docs.aws.amazon.com/powershell/v5/userguide/migrating-v5.html" target="_blank" rel="noopener noreferrer"&gt;Migrating to version 5 of the AWS Tools for PowerShell guide&lt;/a&gt; provides instructions for migrating and explains the changes between the two versions.&lt;/li&gt; 
 &lt;li&gt;The &lt;a href="https://aws.amazon.com/blogs/developer/aws-tools-for-powershell-v5-now-generally-available/" target="_blank" rel="noopener noreferrer"&gt;AWS Tools for PowerShell v5.x GA blog post&lt;/a&gt; outlines the motivation for launching AWS Tools for PowerShell v5.x and includes the benefits over AWS Tools for PowerShell v4.x.&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/powershell/v5/userguide/powershell_code_examples.html" target="_blank" rel="noopener noreferrer"&gt;AWS Tools for PowerShell Code Examples&lt;/a&gt; provide code examples to help you use v5.x.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Feedback&lt;/h2&gt; 
&lt;p&gt;If you need assistance or have feedback, reach out to your usual AWS support contacts. You can also open a &lt;a href="https://github.com/aws/aws-tools-for-powershell/discussions" target="_blank" rel="noopener noreferrer"&gt;discussion&lt;/a&gt; or &lt;a href="https://github.com/aws/aws-tools-for-powershell/issues" target="_blank" rel="noopener noreferrer"&gt;issue&lt;/a&gt; on GitHub. Thank you for using AWS Tools for PowerShell.&lt;/p&gt; 
&lt;hr&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Automate Custom CI/CD Pipelines for Landing Zone Accelerator on AWS</title>
		<link>https://aws.amazon.com/blogs/developer/automate-custom-ci-cd-pipelines-for-landing-zone-accelerator-on-aws/</link>
					
		
		<dc:creator><![CDATA[Anwaar Hussain]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 21:29:00 +0000</pubDate>
				<category><![CDATA[Advanced (300)]]></category>
		<category><![CDATA[Amazon Simple Storage Service (S3)]]></category>
		<category><![CDATA[AWS CloudFormation]]></category>
		<category><![CDATA[AWS CodeBuild]]></category>
		<category><![CDATA[AWS CodePipeline]]></category>
		<category><![CDATA[AWS CodeStar]]></category>
		<category><![CDATA[AWS Identity and Access Management (IAM)]]></category>
		<category><![CDATA[AWS Key Management Service]]></category>
		<category><![CDATA[Learning Levels]]></category>
		<category><![CDATA[Technical How-to]]></category>
		<category><![CDATA[CloudFormation]]></category>
		<category><![CDATA[DynamoDB]]></category>
		<category><![CDATA[S3]]></category>
		<guid isPermaLink="false">e02e23aa461d03a95a4bd2c137c4e2dd0e83078a</guid>

					<description>This blog post shows you how to extend LZA with continuous integration and continuous deployment (CI/CD) pipelines that maintain your governance controls and accelerate workload deployments, offering rapid deployment of both Terraform and AWS CloudFormation across multiple accounts. You'll build automated infrastructure deployment workflows that run in parallel with LZA's baseline orchestration to help maintain your enterprise governance and compliance control requirements. You will implement built-in validation, security scanning, and cross-account deployment capabilities to help address Public Sector use cases that demand strict compliance and security requirements.</description>
										<content:encoded>&lt;p&gt;Managing infrastructure deployments across multiple AWS accounts and maintaining governance controls present a significant challenge for organizations. Manual deployment processes create bottlenecks that slow delivery, introduce human error, and make it difficult to maintain consistent security and compliance standards across environments. &lt;a href="https://aws.amazon.com/solutions/implementations/landing-zone-accelerator-on-aws/" target="_blank" rel="noopener noreferrer"&gt;Landing Zone Accelerator on AWS (LZA)&lt;/a&gt; provides foundational governance and baseline infrastructure across your AWS environment, but organizations often need to deploy workload-specific resources—such as &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html" target="_blank" rel="noopener noreferrer"&gt;Amazon Virtual Private Cloud (Amazon VPC)&lt;/a&gt; resources, &lt;a href="https://aws.amazon.com/pm/lambda/" target="_blank" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; functions, or &lt;a href="https://aws.amazon.com/glue/" target="_blank" rel="noopener noreferrer"&gt;AWS Glue&lt;/a&gt; jobs—that fall outside LZA’s core scope. When you’re using LZA for provisioning workload infrastructure, deployments can typically take 45–90 minutes.&lt;/p&gt; 
&lt;p&gt;This blog post shows you how to extend LZA with continuous integration and continuous deployment (CI/CD) pipelines that maintain your governance controls and accelerate workload deployments, offering rapid deployment of both Terraform and &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html" target="_blank" rel="noopener noreferrer"&gt;AWS CloudFormation&lt;/a&gt; across multiple accounts. You’ll build automated infrastructure deployment workflows that run in parallel with LZA’s baseline orchestration to help maintain your enterprise governance and compliance control requirements. You will implement built-in validation, security scanning, and cross-account deployment capabilities to help address Public Sector use cases that demand strict compliance and security requirements.&lt;/p&gt; 
&lt;p&gt;In this post, you’ll learn how to configure cross-account deployment workflows that integrate seamlessly with your existing LZA environment through customizations, implement security controls including automated validation and scanning, and maintain centralized governance as teams deploy workload-specific infrastructure independently across multiple accounts.&lt;/p&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;Before implementing this automated deployment solution, verify that your environment meets requirements across three key areas:&lt;/p&gt; 
&lt;h3&gt;LZA Requirements&lt;/h3&gt; 
&lt;p&gt;You need an LZA environment version &lt;a href="https://awslabs.github.io/landing-zone-accelerator-on-aws/v1.14.2/faq/general/" target="_blank" rel="noopener noreferrer"&gt;1.14.2&lt;/a&gt; or later with &lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html" target="_blank" rel="noopener noreferrer"&gt;AWS Organizations&lt;/a&gt; enabled and Organizational Units (OUs) such as Root, Security, and Infrastructure configured.&lt;/p&gt; 
&lt;h3&gt;AWS Account Requirements&lt;/h3&gt; 
&lt;p&gt;You’ll need appropriate &lt;a href="https://aws.amazon.com/iam/" target="_blank" rel="noopener noreferrer"&gt;AWS Identity and Access Management (IAM)&lt;/a&gt; permissions to create and manage &lt;a href="https://aws.amazon.com/codepipeline/" target="_blank" rel="noopener noreferrer"&gt;AWS CodePipeline&lt;/a&gt;, &lt;a href="https://aws.amazon.com/codebuild/" target="_blank" rel="noopener noreferrer"&gt;AWS CodeBuild&lt;/a&gt;, &lt;a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener noreferrer"&gt;Amazon Simple Storage Service (Amazon S3)&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/overview.html" target="_blank" rel="noopener noreferrer"&gt;AWS Key Management Service (KMS)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/dynamodb/" target="_blank" rel="noopener noreferrer"&gt;Amazon DynamoDB&lt;/a&gt;, IAM roles, and &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html" target="_blank" rel="noopener noreferrer"&gt;AWS Systems Manager (SSM) Parameter Store&lt;/a&gt; resources across multiple accounts.&lt;/p&gt; 
&lt;h3&gt;Source Code and Repository Access&lt;/h3&gt; 
&lt;p&gt;You’ll also need access to a GitHub repository with your infrastructure code and authorization to create &lt;a href="https://docs.aws.amazon.com/codeconnections/latest/APIReference/Welcome.html" target="_blank" rel="noopener noreferrer"&gt;AWS CodeConnections&lt;/a&gt; for secure source integration.&lt;/p&gt; 
&lt;p&gt;You can access the complete implementation code, including CloudFormation templates and LZA configurations, in the &lt;a href="https://github.com/aws-samples/sample-aws-lza-cicd-customizations" target="_blank" rel="noopener noreferrer"&gt;aws-samples/sample-aws-lza-cicd-customizations&lt;/a&gt; repository on GitHub.&lt;/p&gt; 
&lt;p&gt;With these prerequisites in place, let’s examine how the solution components work together to deliver automated CI/CD capabilities.&lt;/p&gt; 
&lt;h2&gt;Solution overview&lt;/h2&gt; 
&lt;p&gt;You’ll extend LZA with automated deployment workflows by combining several AWS services. CodePipeline orchestrates your CI/CD workflow with automated triggers from GitHub. CodeBuild executes validation, security scanning, and deployment tasks. CodeConnections integrates your workflows with GitHub repositories, and cross-account IAM roles provide secure multi-account deployments. S3 and KMS provide encrypted artifact storage, and DynamoDB manages Terraform state locking.&lt;/p&gt; 
&lt;p&gt;Your architecture supports both CloudFormation and Terraform deployments, with automated validation using cfn-lint, cfn-nag, tflint, and tfsec. Manual approval gates help control production changes, and centralized audit logging provides visibility into deployment activities.&lt;/p&gt; 
&lt;div class="note"&gt; 
 &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Third-party open-source tools (cfn-lint, cfn-nag, tflint, and tfsec) are subject to their respective licenses. AWS does not endorse or support these third-party tools. Verify license compliance and evaluate tool capabilities for your specific use case.&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;Now that you understand the solution, let’s walk through the architecture that enables this automation.&lt;/p&gt; 
&lt;h3&gt;Architecture&lt;/h3&gt; 
&lt;p&gt;The solution spans three AWS accounts across three Organizational Units: Management (Root OU) manages the LZA control plane, SharedServices (Infrastructure OU) hosts the CI/CD pipelines, and Sandbox (Sandbox OU) serves as the deployment target for workloads. As shown in Figure 1, your SharedServices account acts as the CI/CD hub, while the Sandbox account serves as the deployment target with appropriate IAM roles for cross-account access. This hub-and-spoke approach offers three key benefits: centralized governance with consistent security controls, simplified maintenance that reduces operational overhead, and clear separation of duties that maintains security boundaries between accounts.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2026/02/23/DEVTOOLS-16-1.jpg"&gt;&lt;img class="size-medium wp-image-12153 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2026/02/23/DEVTOOLS-16-1-300x129.jpg" alt="" width="300" height="129"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 1: hub-and-spoke CI/CD architecture with cross-account deployment&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;With the hub-and-spoke architecture established, let’s understand how infrastructure changes flow through the deployment pipelines.&lt;/p&gt; 
&lt;h3&gt;CI/CD Pipeline Flow&lt;/h3&gt; 
&lt;p&gt;The CloudFormation pipeline begins with the Source stage, pulling from GitHub when changes occur to a CloudFormation template. The Validate stage runs CodeBuild to execute cfn-lint for syntax checking and cfn-nag for security analysis. Next, the CreateChangeSet stage creates a ChangeSet in the Sandbox account. The Manual Approval stage pauses execution for your review. Finally, the ExecuteChangeSet stage deploys the infrastructure.&lt;/p&gt; 
&lt;p&gt;For Terraform deployments, the pipeline follows a similar pattern with different validation tools. The Source stage is configured to pull from GitHub when changes are detected in the Terraform directory. The Plan stage runs CodeBuild to execute Terraform fmt, validate, tflint, and tfsec, then generates a Terraform plan. The Manual Approval stage pauses for your review. The Apply stage executes Terraform apply to deploy infrastructure to the Sandbox account.&lt;/p&gt; 
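&lt;p&gt;As an illustrative sketch only (not the buildspec shipped in the sample repository), the Plan stage’s validation commands might resemble the following CodeBuild buildspec fragment, assuming the scanning tools are preinstalled in the build image:&lt;/p&gt; 

```yaml
# Hypothetical buildspec fragment for the Terraform Plan stage.
# Tool invocations and flags are illustrative, not taken from the sample repo.
version: 0.2
phases:
  build:
    commands:
      - terraform fmt -check -recursive   # fail on unformatted code
      - terraform init                    # configure providers and remote state
      - terraform validate                # syntax and internal consistency
      - tflint                            # best-practice linting
      - tfsec .                           # static security scanning
      - terraform plan -out=tfplan        # plan artifact reviewed at the approval gate
```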
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2026/02/23/DEVTOOLS-16-2.jpg"&gt;&lt;img loading="lazy" class="size-medium wp-image-12154 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2026/02/23/DEVTOOLS-16-2-300x63.jpg" alt="" width="300" height="63"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 2: CloudFormation and Terraform pipeline deployment flows&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Both workflows shown in Figure 2 provide automated validation and manual approval for infrastructure changes before reaching target accounts. These workflows maintain governance and compliance controls throughout the deployment lifecycle.&lt;/p&gt; 
&lt;h2&gt;Key Security Features&lt;/h2&gt; 
&lt;p&gt;Security remains critical when automating infrastructure deployments across multiple AWS accounts. Your CI/CD extension to LZA implements a security framework that protects your infrastructure through multiple defensive layers. This approach helps ensure that automated deployments maintain the same security standards as manual processes while reducing the risk of human error. The security model centers on three core principles: least-privilege access through carefully scoped IAM roles, defense-in-depth controls that protect data and operations at multiple layers, and automated validation that catches security issues before they reach production environments.&lt;/p&gt; 
&lt;h3&gt;Cross-Account IAM Architecture&lt;/h3&gt; 
&lt;p&gt;Your secure cross-account deployment capability starts with a carefully designed IAM architecture. The solution implements a three-tier IAM security model where each component has precisely the permissions it needs and no more. You’ll deploy specialized roles for Terraform operations that manage infrastructure state and execute plans, CloudFormation deployment roles that handle stack operations and resource management, and cross-account orchestration roles that coordinate activities between your SharedServices and target accounts.&lt;/p&gt; 
&lt;p&gt;Each role receives permissions scoped only to the specific AWS services and operations required for its deployment tasks. For example, your Terraform role can access S3 for state management and DynamoDB for locking, but cannot create IAM users or modify AWS Organizations settings. This granular approach reduces your attack surface while preserving operational functionality.&amp;nbsp;The complete IAM configuration details are available in&amp;nbsp;LZA’s &lt;a href="https://github.com/aws-samples/sample-aws-lza-cicd-customizations/blob/main/iam-config.yaml" target="_blank" rel="noopener noreferrer"&gt;iam-config.yaml&lt;/a&gt; for your review and customization.&lt;/p&gt; 
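&lt;p&gt;To make the scoping concrete, a hypothetical policy fragment for the Terraform role could grant only state-bucket and lock-table access. The resource names and account ID below are placeholders, not values from the sample configuration:&lt;/p&gt; 

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TerraformStateAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-terraform-state-bucket",
        "arn:aws:s3:::example-terraform-state-bucket/*"
      ]
    },
    {
      "Sid": "TerraformStateLocking",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
      "Resource": "arn:aws:dynamodb:*:111111111111:table/example-terraform-locks"
    }
  ]
}
```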
&lt;h3&gt;Security Controls&lt;/h3&gt; 
&lt;p&gt;Your deployment workflow implements defense-in-depth security across multiple layers to protect infrastructure deployments and maintain compliance standards. Rather than relying on a single security control, this approach creates overlapping protections that guard against various threat vectors.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Data Protection and Encryption&amp;nbsp;&lt;/strong&gt;The solution encrypts data at rest and in transit. S3 buckets use KMS customer-managed keys that you control. Terraform state files are encrypted to protect sensitive infrastructure details. Transport Layer Security (TLS) 1.2 or higher protects data transmission. Cross-account KMS key policies confirm that only authorized roles in specific accounts can decrypt build artifacts and deployment packages.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Compliance and Audit Controls&lt;/strong&gt; Automated governance and audit trails provide continuous oversight throughout the deployment lifecycle. Resource tagging adds custom markers for identifying and managing data processing activities. AWS CloudTrail tracks all deployment actions, creating an immutable record of operations.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Automated Security Validation&lt;/strong&gt; Your pipelines include multiple automated security checks that identify misconfigurations before they reach production environments. CloudFormation templates undergo validation for syntax and best practices compliance, plus security analysis that identifies potential security issues in your resource configurations. Terraform configurations receive similar treatment through syntax verification, best practices enforcement, and security scanning that identifies common misconfigurations.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Governance and Oversight&lt;/strong&gt; Manual approval gates provide human oversight for critical deployment decisions while maintaining automated efficiency for routine operations. These approval points integrate with CloudTrail logging to create accountability records showing who approved which deployments. Your team can review Terraform plans and CloudFormation ChangeSets before execution, ensuring that automated systems implement exactly the changes you expect.&lt;/p&gt; 
&lt;h2&gt;Solution Walkthrough&lt;/h2&gt; 
&lt;p&gt;Now that you understand the architecture components, let’s walk through how you’ll implement this automated CI/CD solution.&amp;nbsp;The deployment follows a foundation-first approach: you’ll start with Stage 1 to establish your shared infrastructure, then deploy Stages 2 and 3 to create your specific deployment pipelines. After completing the foundation stage, you can deploy the CloudFormation and Terraform pipelines in either order since they operate independently.&lt;/p&gt; 
&lt;h3&gt;Stage 1: Foundation Resources&lt;/h3&gt; 
&lt;p&gt;You’ll begin by deploying the CICD-Pipeline-Foundation stack to your SharedServices account. This stage creates the shared infrastructure that both CloudFormation and Terraform pipelines depend on for secure, cross-account operations.&lt;/p&gt; 
&lt;p&gt;The foundation stage establishes your artifact storage with S3 buckets for build artifacts and Terraform state storage. You’ll also get a KMS customer-managed key for encryption that includes cross-account access policies, supporting secure operations across your account boundaries. For source code integration, the stack creates CodeConnections that provide secure, token-based access to your repositories. Additionally, you’ll deploy a DynamoDB table for Terraform state locking to prevent concurrent modifications, plus SSM Parameter Store values that enable cross-stack references between your pipeline components.&lt;/p&gt; 
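&lt;p&gt;A workload repository would then point its Terraform backend at these foundation resources. The fragment below is a sketch with placeholder names; the S3 backend arguments (&lt;code&gt;encrypt&lt;/code&gt;, &lt;code&gt;kms_key_id&lt;/code&gt;, &lt;code&gt;dynamodb_table&lt;/code&gt;) are standard Terraform backend settings:&lt;/p&gt; 

```hcl
# Hypothetical backend configuration pointing at the foundation resources.
# Bucket, key alias, and table names are placeholders.
terraform {
  backend "s3" {
    bucket         = "example-shared-services-tf-state"
    key            = "workloads/sandbox/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    kms_key_id     = "alias/example-cicd-artifacts"
    dynamodb_table = "example-terraform-locks"
  }
}
```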
&lt;p&gt;&lt;strong&gt;Deployment Time: &lt;/strong&gt;Estimated 10-15 minutes&lt;/p&gt; 
&lt;h3&gt;Stage 2: CloudFormation Pipeline&lt;/h3&gt; 
&lt;p&gt;Once your foundation is ready, you’ll deploy the CloudFormation-Workload-Deployment-Pipeline stack to your SharedServices account. This creates a complete CI/CD pipeline specifically designed for CloudFormation-based infrastructure deployments across multiple accounts.&lt;/p&gt; 
&lt;p&gt;Your CloudFormation pipeline includes a CodePipeline that integrates with GitHub for source code management, performs automated validation to catch issues early, includes manual approval gates for governance compliance, and handles cross-account deployment stages. The pipeline uses a CodeBuild project configured with cfn-lint for syntax validation and cfn-nag for security analysis, ensuring your templates meet both technical and security standards before deployment. The cross-account deployment configuration targets your Sandbox account with appropriate IAM role assumptions, providing secure access without permanent credentials.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Deployment Time: &lt;/strong&gt;Estimated 5-10 minutes&lt;/p&gt; 
&lt;h3&gt;Stage 3: Terraform Pipeline&lt;/h3&gt; 
&lt;p&gt;Your final deployment creates the Terraform-Pipeline stack, also in the SharedServices account. This pipeline facilitates Terraform-based infrastructure deployments with integrated planning, approval, and deployment workflows. The Terraform workflow includes a CodePipeline with GitHub source integration for version control, Terraform planning stages that show you exactly what changes will be made, manual approval workflows that maintain governance oversight, and automated apply stages for consistent deployments. You’ll get CodeBuild projects specifically optimized for Terraform plan generation and infrastructure deployment, plus integrated validation, formatting and security scanning tools for Terraform. Like the CloudFormation pipeline, this includes cross-account role assumption capabilities for secure deployments to your Sandbox account.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Deployment Time: &lt;/strong&gt;Estimated 5-10 minutes&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2026/02/23/DEVTOOLS-16-3.jpg"&gt;&lt;img loading="lazy" class="size-medium wp-image-12155 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2026/02/23/DEVTOOLS-16-3-300x54.jpg" alt="" width="300" height="54"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;strong&gt;Figure 3: CodePipeline dashboard showing deployed CI/CD pipelines&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;Cost considerations&lt;/h2&gt; 
&lt;p&gt;Operating costs for this CI/CD solution depend on deployment frequency, build duration, and artifact storage requirements.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Small applications (20 deployments/month):&lt;/strong&gt; $5–7/month&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;CodePipeline: $2.00 (two active pipelines)&lt;/li&gt; 
 &lt;li&gt;CodeBuild: $1.50&lt;/li&gt; 
 &lt;li&gt;S3 storage: $0.23&lt;/li&gt; 
 &lt;li&gt;DynamoDB: $0.37&lt;/li&gt; 
 &lt;li&gt;KMS: $1.30&lt;/li&gt; 
 &lt;li&gt;CloudWatch Logs: $0.50&lt;/li&gt; 
&lt;/ul&gt; 
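&lt;p&gt;Summing the line items above reproduces the small-application estimate:&lt;/p&gt; 

```python
# Approximate monthly cost for the small-application scenario,
# using the line items listed above.
line_items = {
    "CodePipeline (2 active pipelines)": 2.00,
    "CodeBuild": 1.50,
    "S3 storage": 0.23,
    "DynamoDB": 0.37,
    "KMS": 1.30,
    "CloudWatch Logs": 0.50,
}

total = sum(line_items.values())
print(f"Estimated monthly total: ${total:.2f}")  # $5.90, within the $5-7 range
```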
&lt;p&gt;&lt;strong&gt;Production environments (100+ deployments/month):&lt;/strong&gt; $30–40/month&lt;/p&gt; 
&lt;p&gt;The increase is driven primarily by additional build minutes and artifact storage requirements.&lt;/p&gt; 
&lt;div class="note"&gt; 
 &lt;p&gt;&lt;strong&gt;Cost disclaimer:&lt;/strong&gt; These estimates are examples based on January 19, 2025 AWS pricing in US East (N. Virginia). Actual costs vary by usage patterns and region.&lt;/p&gt; 
&lt;/div&gt; 
&lt;h2&gt;Best Practices&lt;/h2&gt; 
&lt;p&gt;Following these recommendations will help you optimize your CI/CD pipeline operations, reduce costs, and maintain security compliance.&lt;/p&gt; 
&lt;h3&gt;Environment Management and Security&lt;/h3&gt; 
&lt;p&gt;Implement proper environment separation using distinct AWS accounts for development, staging, and production deployments. This approach maintains isolation and prevents accidental changes to production infrastructure. Complement this with regular IAM policy reviews, restricting permissions following least-privilege principles to reduce security risks and maintain compliance requirements.&lt;/p&gt; 
&lt;h3&gt;Monitoring and Troubleshooting&lt;/h3&gt; 
&lt;p&gt;Configure monitoring by enabling CloudWatch logging for CodeBuild across regions and setting up alarms for pipeline failures. This visibility into deployment issues supports rapid troubleshooting when problems occur. Additionally, run validation tools locally before pushing to GitHub—this practice catches issues early, reduces feedback time compared to pipeline-based validation, and conserves CodeBuild minutes.&lt;/p&gt; 
&lt;h3&gt;State and Artifact Management&lt;/h3&gt; 
&lt;p&gt;Use DynamoDB for Terraform state locking to prevent concurrent modifications. Without proper locking, simultaneous Terraform apply operations can corrupt state files, causing infrastructure drift and deployment failures.&lt;/p&gt; 
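&lt;p&gt;The lock works through DynamoDB’s conditional writes: a lock item is created only if no item with the same key already exists, so a second writer fails atomically instead of clobbering the state. The following is a simplified in-memory simulation of that semantics, not the actual DynamoDB call Terraform makes:&lt;/p&gt; 

```python
# Simplified simulation of DynamoDB's attribute_not_exists conditional write,
# which Terraform's S3 backend relies on for state locking.
class ConditionalCheckFailed(Exception):
    pass

class LockTable:
    def __init__(self):
        self._items = {}  # stands in for the DynamoDB lock table

    def acquire(self, lock_id: str, owner: str) -> None:
        # Equivalent to PutItem with ConditionExpression="attribute_not_exists(LockID)"
        if lock_id in self._items:
            raise ConditionalCheckFailed(f"state locked by {self._items[lock_id]}")
        self._items[lock_id] = owner

    def release(self, lock_id: str) -> None:
        self._items.pop(lock_id, None)

table = LockTable()
table.acquire("workloads/sandbox/terraform.tfstate", "pipeline-run-1")
try:
    # A concurrent apply attempting the same lock fails instead of proceeding.
    table.acquire("workloads/sandbox/terraform.tfstate", "pipeline-run-2")
except ConditionalCheckFailed as err:
    print("second apply blocked:", err)
table.release("workloads/sandbox/terraform.tfstate")
```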
&lt;h3&gt;Security Validation&lt;/h3&gt; 
&lt;p&gt;Make security scanning a priority by reviewing security findings before approval. Automated scanning catches misconfigurations early, reducing security vulnerabilities in production environments and enhancing your overall security posture.&lt;/p&gt; 
&lt;h2&gt;Clean Up&lt;/h2&gt; 
&lt;p&gt;To avoid ongoing charges, remove the deployed resources when you no longer need them. Follow these steps in order to prevent deletion conflicts:&lt;/p&gt; 
&lt;h3&gt;Remove Deployed Infrastructure&lt;/h3&gt; 
&lt;p&gt;Begin by removing workload infrastructure deployed through the pipelines in the Sandbox account. Use the AWS Management Console or AWS CLI to delete these resources, ensuring no dependencies remain before proceeding to the next step.&lt;/p&gt; 
&lt;h3&gt;Delete CI/CD Pipeline Stacks&lt;/h3&gt; 
&lt;p&gt;Remove the CI/CD pipeline stacks from the SharedServices account in reverse order of their creation. Start with the Terraform-Pipeline stack, followed by the CloudFormation-Workload-Deployment-Pipeline stack, and finally the CICD-Pipeline-Foundation stack. This sequence prevents dependency conflicts during deletion.&lt;/p&gt; 
&lt;h3&gt;Clear S3 Storage&lt;/h3&gt; 
&lt;p&gt;Before deleting the foundation stack, manually empty the S3 buckets containing artifacts and Terraform state files. CloudFormation cannot delete non-empty buckets, so this step prevents stack deletion failures.&lt;/p&gt; 
&lt;h3&gt;Confirm Resource Removal&lt;/h3&gt; 
&lt;p&gt;Check that resources have been successfully removed by reviewing CloudFormation stacks, S3 buckets, DynamoDB tables, and CodePipeline pipelines in the AWS Management Console. This verification step helps identify remaining resources that could generate ongoing charges.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this post, we demonstrated how to extend AWS Landing Zone Accelerator with automated CI/CD pipelines supporting both Terraform and CloudFormation deployments across multiple accounts. By using LZA’s customizations feature alongside AWS native services like CodePipeline and CodeBuild, you can achieve consistent infrastructure deployment while preserving the security, governance, and compliance controls your organization requires.&lt;/p&gt; 
&lt;p&gt;The hub-and-spoke architecture centralizes CI/CD operations in the SharedServices account, which standardizes deployment workflows, provides audit trails, and maintains security boundaries between target accounts. With automated validation, security scanning capabilities, and manual approval gates, this solution provides controls suitable for production environments managing complex multi-account AWS infrastructures.&lt;/p&gt; 
&lt;h2&gt;Getting Started&lt;/h2&gt; 
&lt;p&gt;Deploy the solution using the provided LZA configuration files:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Clone the Repository&lt;/strong&gt;:&amp;nbsp;Access the complete implementation from the &lt;a href="https://github.com/aws-samples/sample-aws-lza-cicd-customizations" target="_blank" rel="noopener noreferrer"&gt;aws-samples/sample-aws-lza-cicd-customizations&lt;/a&gt; GitHub repository&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Configure LZA&lt;/strong&gt;: Update the&amp;nbsp;&lt;a href="https://github.com/aws-samples/sample-aws-lza-cicd-customizations/blob/main/customizations-config.yaml" target="_blank" rel="noopener noreferrer"&gt;customizations-config.yaml&lt;/a&gt; file with your GitHub repository details&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Deploy configuration&lt;/strong&gt;: Commit and push changes to your LZA configuration repository&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Activate GitHub connection&lt;/strong&gt;: Manually authorize the CodeConnections integration in the AWS Management Console&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Verify deployment&lt;/strong&gt;: Check AWS CloudFormation stacks, CodePipeline creation, and SSM Parameter Store values&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;You can find complete deployment instructions, configuration examples, and troubleshooting guides in the repository&amp;nbsp;&lt;a href="https://github.com/aws-samples/sample-aws-lza-cicd-customizations/blob/main/README.md" target="_blank" rel="noopener noreferrer"&gt;README&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Additional Resources&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;AWS Documentation:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/solutions/latest/landing-zone-accelerator-on-aws/developer-guide.html" target="_blank" rel="noopener noreferrer"&gt;Landing Zone Accelerator on AWS Developer Guide&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/codepipeline/" target="_blank" rel="noopener noreferrer"&gt;AWS CodePipeline Documentation&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/codebuild/" target="_blank" rel="noopener noreferrer"&gt;AWS CodeBuild Documentation&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-cicd-litmus/cicd-best-practices.html" target="_blank" rel="noopener noreferrer"&gt;AWS CI/CD best practices&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Need help?&lt;/strong&gt; Contact your AWS Solutions Architect or visit &lt;a href="https://aws.amazon.com/connect/?trk=65e9520a-73a2-4447-9d32-52a2519df821&amp;amp;sc_channel=ps&amp;amp;trk=65e9520a-73a2-4447-9d32-52a2519df821&amp;amp;sc_channel=ps&amp;amp;ef_id=Cj0KCQiA4eHLBhCzARIsAJ2NZoL1zVKM_H33k1Oh7YGy7cSbf_RP_7NOOK0cl1LDEw1GRmm3jSm0ohQaAkMtEALw_wcB:G:s&amp;amp;s_kwcid=AL!4422!3!527661220940!e!!g!!aws%20cloud%20contact%20center!13513519801!123036835149&amp;amp;gad_campaignid=13513519801&amp;amp;gbraid=0AAAAADjHtp8x-qTSNBMEOZc8XQiGA4Clj&amp;amp;gclid=Cj0KCQiA4eHLBhCzARIsAJ2NZoL1zVKM_H33k1Oh7YGy7cSbf_RP_7NOOK0cl1LDEw1GRmm3jSm0ohQaAkMtEALw_wcB" target="_blank" rel="noopener noreferrer"&gt;AWS Support&lt;/a&gt; to discuss implementation strategies for your specific multi-account requirements.&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;Ready to automate your multi-account infrastructure deployments? Access the complete solution code and extend your Landing Zone Accelerator on AWS with custom CI/CD pipelines today!&lt;/em&gt;&lt;/p&gt; 
&lt;hr&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Introducing Agent Plugins for AWS</title>
		<link>https://aws.amazon.com/blogs/developer/introducing-agent-plugins-for-aws/</link>
					
		
		<dc:creator><![CDATA[Anita Lewis]]></dc:creator>
		<pubDate>Tue, 17 Feb 2026 19:13:25 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Technical How-to]]></category>
		<guid isPermaLink="false">341fc63b0b33f3b32461d9238310e646dd999490</guid>

					<description>Deploying applications to AWS typically involves researching service options, estimating costs, and writing infrastructure-as-code tasks that can slow down development workflows. Agent plugins extend coding agents with specialized skills, enabling them to handle these AWS-specific tasks directly within your development environment. Today, we’re announcing Agent Plugins for AWS (Agent Plugins), an open source repository of […]</description>
										<content:encoded>&lt;p&gt;Deploying applications to AWS typically involves researching service options, estimating costs, and writing infrastructure-as-code tasks that can slow down development workflows. Agent plugins extend coding agents with specialized skills, enabling them to handle these AWS-specific tasks directly within your development environment.&lt;/p&gt; 
&lt;p&gt;Today, we’re announcing &lt;a href="https://github.com/awslabs/agent-plugins" target="_blank" rel="noopener noreferrer"&gt;Agent Plugins for AWS (Agent Plugins),&lt;/a&gt; an open source repository of agent plugins that provide coding agents with the agent skills to architect, deploy, and operate on AWS.&lt;/p&gt; 
&lt;p&gt;Today’s launch includes an initial &lt;em&gt;deploy-on-aws&lt;/em&gt; agent plugin, which lets developers enter &lt;code&gt;deploy to AWS&lt;/code&gt; and have their coding agent generate AWS architecture recommendations, AWS service cost estimates, and AWS infrastructure-as-code to deploy the application to AWS. We will add additional agent skills and agent plugins in the coming weeks.&lt;/p&gt; 
&lt;p&gt;Agent plugins are currently supported in Claude Code and Cursor (&lt;a href="https://cursor.com/blog/marketplace" target="_blank" rel="noopener noreferrer"&gt;announced February 17&lt;/a&gt;). In this post, we’ll show you how to get started with Agent Plugins for AWS, explore the &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin in detail, and demonstrate how it transforms the deployment experience from hours of configuration to a simple conversation.&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;Why agent plugins &lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;AI coding agents are increasingly used in software development, helping developers write, review, and deploy code more efficiently. Agent skills and the broader agent plugin packaging model are emerging as best practices for steering coding agents toward reliable outcomes without bloating model context. Instead of repeatedly pasting long AWS guidance into prompts, developers can now encode that guidance as reusable, versioned capabilities that agents invoke when relevant. This improves determinism, reduces context overhead, and makes agent behavior easier to standardize across teams. Agent plugins act as containers that package different types of expertise artifacts together. A single agent plugin can include:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Agent skills&lt;/strong&gt; – Structured workflows and best-practice playbooks that guide AI through complex tasks like deployment, code review, or architecture planning. Agent skills encode domain expertise as step-by-step processes.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;MCP servers&lt;/strong&gt; – Connections to external services, data sources, and APIs. MCP servers give your assistant access to live documentation, pricing data, and other resources at runtime. Learn more about &lt;a href="https://github.com/awslabs/mcp" target="_blank" rel="noopener noreferrer"&gt;AWS MCP servers&lt;/a&gt;.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Hooks&lt;/strong&gt; – Automation and guardrails that run on developer actions. Hooks can validate changes, enforce standards, or trigger workflows automatically.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;References&lt;/strong&gt; – Documentation, configuration defaults, and knowledge that the agent skill can consult. References make agent skills smarter without bloating the prompt.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;As new types of expertise artifacts emerge in this space, they can be packaged into agent plugins, making the evolution transparent to developers.&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;The deploy-on-aws plugin&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;The initial release includes the &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin, which gives coding agents the knowledge to deploy applications to AWS with architecture recommendations, cost estimates, and infrastructure-as-code generation.&lt;/p&gt; 
&lt;p&gt;The agent plugin provides AI coding agents with a structured workflow:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Analyze – &lt;/strong&gt;Scan your codebase for framework, database, and dependencies.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Recommend –&lt;/strong&gt;&amp;nbsp;Select optimal AWS services with concise rationale.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Estimate –&lt;/strong&gt;&amp;nbsp;Show projected monthly cost before committing.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Generate – &lt;/strong&gt;Write CDK or CloudFormation infrastructure code.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Deploy –&lt;/strong&gt;&amp;nbsp;Execute the deployment after your confirmation.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;The initial plugin uses three &lt;a href="https://github.com/awslabs/mcp" target="_blank" rel="noopener noreferrer"&gt;MCP servers for AWS&lt;/a&gt; to provide comprehensive guidance:&lt;/p&gt; 
&lt;table class="styled-table" border="1px" cellpadding="10px"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;MCP server&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;a href="https://github.com/awslabs/mcp/tree/main/src/aws-knowledge-mcp-server" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Knowledge&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;Documentation, architecture guidance, and best practices&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;a href="https://github.com/awslabs/mcp/tree/main/src/aws-pricing-mcp-server" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Pricing&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;Real-time service pricing for cost estimates&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;a href="https://github.com/awslabs/mcp/tree/main/src/aws-iac-mcp-server" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS IaC&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;Best practices for&amp;nbsp;AWS Cloud Formation Kit (AWS&amp;nbsp;CDK) and&amp;nbsp;AWS&amp;nbsp;CloudFormation&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;&lt;strong&gt;Agent plugins for AWS in action&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;Here’s how &lt;a href="https://github.com/awslabs/agent-plugins" target="_blank" rel="noopener noreferrer"&gt;Agent plugins for AWS&lt;/a&gt; transform the software development experience:&lt;/p&gt; 
&lt;p&gt;A full-stack developer has built an Express.js REST API on their local machine. The application connects to a PostgreSQL database and serves a React frontend. The developer is ready to deploy to AWS but isn’t sure about the best architecture.&lt;/p&gt; 
&lt;p&gt;Using Cursor or Claude Code with the &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin installed, the developer enters:&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;Deploy this Express app to AWS&lt;/code&gt;&lt;/p&gt; 
&lt;p&gt;The plugin immediately springs into action:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 1: Analyze&lt;/strong&gt; – The agent scans the codebase and identifies:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Express.js framework (Node.js 20.x)&lt;/li&gt; 
 &lt;li&gt;PostgreSQL database dependency&lt;/li&gt; 
 &lt;li&gt;Static React build in /public&lt;/li&gt; 
 &lt;li&gt;Environment variables for database connection&lt;/li&gt; 
 &lt;li&gt;Current traffic: ~1000 requests/day expected&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Step 2: Recommend&lt;/strong&gt; – Based on the analysis, the agent recommends:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/apprunner/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS App Runner&lt;/strong&gt;&lt;/a&gt; for the Express.js backend (auto-scaling, managed container service)&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/rds/postgresql/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon RDS PostgreSQL&lt;/strong&gt;&lt;/a&gt; for the database (managed, automated backups)&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/cloudfront/getting-started/S3/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon CloudFront + S3&lt;/strong&gt;&lt;/a&gt; for the React frontend (global CDN, cost-effective static hosting)&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/secrets-manager/" target="_blank" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Secrets Manager&lt;/strong&gt;&lt;/a&gt; for database credentials&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Step 3: Estimate&lt;/strong&gt; – The agent provides a cost estimate using real-time pricing data from the &lt;a href="https://github.com/awslabs/mcp/tree/main/src/aws-pricing-mcp-server" target="_blank" rel="noopener noreferrer"&gt;AWS Pricing MCP server&lt;/a&gt;, giving you visibility into projected monthly costs before you commit to any infrastructure.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 4: Generate&lt;/strong&gt; The developer reviews the estimate and confirms. The agent generates:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;AWS CDK infrastructure code in TypeScript&lt;/li&gt; 
 &lt;li&gt;Dockerfile for the Express app&lt;/li&gt; 
 &lt;li&gt;Database migration scripts&lt;/li&gt; 
 &lt;li&gt;Environment configuration&lt;/li&gt; 
 &lt;li&gt;GitHub Actions workflow for CI/CD&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Step 5: Deploy&lt;/strong&gt; The developer reviews the generated code, makes minor adjustments to database schema, and confirms deployment. The agent:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Provisions all AWS resources via CDK&lt;/li&gt; 
 &lt;li&gt;Builds and deploys the container to App Runner&lt;/li&gt; 
 &lt;li&gt;Creates the Amazon RDS database and runs migrations&lt;/li&gt; 
 &lt;li&gt;Uploads the React build to S3 and configures CloudFront&lt;/li&gt; 
 &lt;li&gt;Stores credentials in Secrets Manager&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Within minutes, the developer’s application is live at a custom App Runner URL, with the React frontend served globally via CloudFront. The agent provides:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Application URLs (backend and frontend)&lt;/li&gt; 
 &lt;li&gt;Database connection details&lt;/li&gt; 
 &lt;li&gt;CloudWatch dashboard links for monitoring&lt;/li&gt; 
 &lt;li&gt;Cost tracking setup&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;What would have taken hours of reading documentation, comparing services, and writing infrastructure code took less than 10 minutes with the &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin. Developers can now focus on building features instead of wrestling with cloud deployment complexity.&lt;/p&gt; 
&lt;h2&gt;&lt;strong&gt;Getting started with Agent Plugins for AWS&lt;/strong&gt;&lt;/h2&gt; 
&lt;h3&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;To get started, you need:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://github.com/awslabs/agent-plugins?tab=readme-ov-file#requirements" target="_blank" rel="noopener noreferrer"&gt;An agent plugin compatible AI coding tool&lt;/a&gt; (Claude Code, Cursor, or other compatible tools)&lt;/li&gt; 
 &lt;li&gt;AWS CLI configured with appropriate credentials&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;&lt;strong&gt;Installation&lt;/strong&gt;&lt;/h3&gt; 
&lt;h4&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;Add the Agent Plugins for AWS marketplace to Claude Code:&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;/plugin marketplace add awslabs/agent-plugins&lt;/code&gt;&lt;/p&gt; 
&lt;p&gt;Install the &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin:&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;/plugin install deploy-on-aws@awslabs-agent-plugins&lt;/code&gt;&lt;/p&gt; 
&lt;h4&gt;&lt;strong&gt;Cursor&lt;/strong&gt;&lt;/h4&gt; 
&lt;p&gt;Cursor announced support for agent plugins on February 17. You can install the &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin directly from the &lt;a href="https://cursor.com/marketplace"&gt;Cursor Marketplace&lt;/a&gt;, or manually in Cursor by:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Open &lt;strong&gt;Cursor Settings&lt;/strong&gt;&lt;/li&gt; 
 &lt;li&gt;Navigate to &lt;strong&gt;Plugins&lt;/strong&gt; and type &lt;code&gt;aws&lt;/code&gt; in the search bar&lt;/li&gt; 
 &lt;li&gt;Select the plugin you want to install, click &lt;strong&gt;add to cursor&lt;/strong&gt;, and then select the scope&lt;/li&gt; 
 &lt;li&gt;The plugin should now appear under &lt;strong&gt;Plugins&lt;/strong&gt; as &lt;strong&gt;installed&lt;/strong&gt;&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;Learn more in the &lt;a href="https://cursor.com/blog/marketplace"&gt;Cursor Marketplace announcement&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;&lt;strong&gt;Skill triggers&lt;/strong&gt;&lt;/h3&gt; 
&lt;p&gt;The &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin responds to natural language requests like:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;“Deploy to AWS”&lt;/li&gt; 
 &lt;li&gt;“Host on AWS”&lt;/li&gt; 
 &lt;li&gt;“Run this on AWS”&lt;/li&gt; 
 &lt;li&gt;“AWS architecture for this app”&lt;/li&gt; 
 &lt;li&gt;“Estimate AWS cost”&lt;/li&gt; 
 &lt;li&gt;“Generate infrastructure”&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;&lt;strong&gt;Best practices for plugin-assisted development&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;To maximize the benefits of plugin-assisted development while maintaining security and code quality, follow these essential guidelines:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Always review generated code&lt;/strong&gt; before deployment (for example, against your constraints for security, cost, and resilience).&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Use plugins as accelerators&lt;/strong&gt;, not replacements for developer judgment and expertise.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Keep plugins updated&lt;/strong&gt; to benefit from the latest AWS best practices.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Follow the principle of least privilege&lt;/strong&gt; when configuring AWS credentials.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Run security scanning tools&lt;/strong&gt; on generated infrastructure code.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt; 
&lt;p&gt;In this post, we showed how Agent Plugins for AWS extend coding agents with skills for deploying applications to AWS. Using the &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin, you can generate architecture recommendations, cost estimates, and infrastructure-as-code directly from your coding agent.&lt;/p&gt; 
&lt;p&gt;Beyond deployments, agent plugins can help with other AWS workflows; more agent plugins for AWS are launching soon. You can also use &lt;a href="https://github.com/awslabs/mcp" target="_blank" rel="noopener noreferrer"&gt;AWS MCP servers&lt;/a&gt; to give your coding agent access to specialized tools to build on AWS.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Visit the &lt;a href="https://github.com/awslabs/agent-plugins" target="_blank" rel="noopener noreferrer"&gt;Agent Plugins for AWS repository&lt;/a&gt; to install and configure your agent plugins&lt;/li&gt; 
 &lt;li&gt;Install the &lt;em&gt;deploy-on-aws&lt;/em&gt; plugin from the &lt;a href="https://cursor.com/marketplace"&gt;Cursor Marketplace&lt;/a&gt; and start deploying from your editor&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;About the authors&lt;/h3&gt; 
&lt;p style="clear: both"&gt;&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>AWS Tools Installer V2 Preview</title>
		<link>https://aws.amazon.com/blogs/developer/aws-tools-installer-v2-preview/</link>
					
		
		<dc:creator><![CDATA[Jonathan Nunn]]></dc:creator>
		<pubDate>Tue, 17 Feb 2026 00:28:42 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS Tools for PowerShell]]></category>
		<category><![CDATA[PowerShell]]></category>
		<guid isPermaLink="false">5553217e5e9ddff5d4c1c44529083f8821178aa0</guid>

					<description>We are excited to offer a preview of AWS Tools Installer V2 which addresses customer feedback for faster and more reliable bulk installation of AWS Tools for PowerShell modules.</description>
										<content:encoded>&lt;p&gt;If you’ve been managing multiple &lt;a href="https://aws.amazon.com/powershell/"&gt;AWS Tools for PowerShell&lt;/a&gt; modules individually, you know how time-consuming installations and updates can be. Today, we’re excited to announce AWS Tools Installer V2, which provides performance improvements and additional guidance.&lt;/p&gt; 
&lt;h2&gt;Improvements in V2&lt;/h2&gt; 
&lt;p&gt;Standard PowerShell installation commands like &lt;code&gt;Install-Module&lt;/code&gt; and &lt;code&gt;Install-PSResource&lt;/code&gt; download modules that are packaged individually. To maximize performance when installing hundreds of modules, AWS Tools Installer V2 installs modules that are packaged together from &lt;a href="https://aws.amazon.com/cloudfront/"&gt;Amazon CloudFront&lt;/a&gt;. We recommend installing all AWS Tools for PowerShell modules together as a complete set so that commands are available for immediate use and to avoid module import conflicts that arise when all modules have not been updated to the same version.&lt;/p&gt; 
&lt;p&gt;The following example shows how to install all AWS Tools for PowerShell modules at once by running &lt;code&gt;Install-AWSToolsModule&lt;/code&gt;. Use the &lt;code&gt;-Version&lt;/code&gt; parameter to limit updates to minor versions of V5 of AWS Tools for PowerShell to reduce the risk of breaking changes.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Install-AWSToolsModule -Version 5.*&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;AWS Tools Installer V2 also introduces the self-update commands &lt;code&gt;Install-AWSToolsInstaller&lt;/code&gt; and &lt;code&gt;Uninstall-AWSToolsInstaller&lt;/code&gt; so that the installer module itself is easier to update.&lt;/p&gt; 
&lt;p&gt;The following example shows how to install the latest minor version updates for AWS Tools Installer V2 to reduce the risk of breaking changes.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Install-AWSToolsInstaller -Version 2.*&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Bugs Fixed&lt;/h2&gt; 
&lt;p&gt;V2 resolves two key issues:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Previously, the latest modules could not be installed on systems where discontinued modules were present. V2 now successfully updates all modules that have not been discontinued.&lt;/li&gt; 
 &lt;li&gt;During the daily release window, installations sometimes failed because modules were still being published. V2 avoids this by only installing versions of modules that have completed the publishing process.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h2&gt;New Features&lt;/h2&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Support for Offline Installation:&lt;/strong&gt; A new parameter called &lt;code&gt;-SourceZipPath&lt;/code&gt; enables installation from locally-staged files in offline or air-gapped environments.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Support for Prerelease Installation:&lt;/strong&gt; A new parameter called &lt;code&gt;-Prerelease&lt;/code&gt; enables installation of preview builds of AWS Tools for PowerShell and AWS Tools Installer.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Installer Update Notifications:&lt;/strong&gt; When importing AWS Tools Installer, it writes a message to the host to notify users if a new version is available.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Support for Standard Removal of PowerShell Module:&lt;/strong&gt;&amp;nbsp;AWS Tools Installer now adds metadata to enable module removal through standard PowerShell commands (&lt;code&gt;Uninstall-Module&lt;/code&gt; and &lt;code&gt;Uninstall-PSResource&lt;/code&gt;).&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Removal of Legacy Modules:&lt;/strong&gt; AWS Tools Installer V2 supports uninstalling the legacy AWSPowerShell and AWSPowerShell.NetCore modules from your PSModulePaths. Unlike these legacy modules, which load everything at once, AWS Tools for PowerShell modules import quickly and automatically as needed.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;The following examples show how to remove the legacy modules for the current user, either on their own or as part of an install:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Uninstall-AWSToolsModule -CleanUpLegacyScope 'CurrentUser'
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Install-AWSToolsModule -CleanUpLegacyScope 'CurrentUser'&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Breaking Changes&lt;/h2&gt; 
&lt;p&gt;You must plan for the following potential issues when upgrading:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;AWS Tools Installer V2 uses the endpoint &lt;a href="https://sdk-for-net.amazonwebservices.com/ps/"&gt;https://sdk-for-net.amazonwebservices.com/ps/&lt;/a&gt; to download zip files. You might need to update your firewall rules to allow access to this endpoint.&lt;/li&gt; 
 &lt;li&gt;The parameters &lt;code&gt;-Proxy&lt;/code&gt; and &lt;code&gt;-ProxyCredential&lt;/code&gt; have been removed. Please &lt;a href="https://learn.microsoft.com/en-us/powershell/azure/az-powershell-proxy?view=azps-15.1.0"&gt;configure proxy settings for your environment if necessary&lt;/a&gt;.&lt;/li&gt; 
 &lt;li&gt;The &lt;code&gt;-Force&lt;/code&gt; parameter has been removed from &lt;code&gt;Uninstall-AWSToolsModule&lt;/code&gt;. When you use &lt;code&gt;Install-AWSToolsModule -Force&lt;/code&gt;, any existing module is overwritten even if it has the same name and version as the downloaded module. To skip interactive confirmation prompts, use the following syntax: &lt;code&gt;-Confirm:$false&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;The parameters &lt;code&gt;-SkipUpdate&lt;/code&gt;, &lt;code&gt;-SkipPublisherCheck&lt;/code&gt;, and &lt;code&gt;-AllowClobber&lt;/code&gt; are now ignored and have no effect. While these parameters are still accepted for backward compatibility, they no longer perform any function. Scripts relying on &lt;code&gt;-SkipUpdate&lt;/code&gt; to prevent updates of other installed modules will now have all installed modules updated.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;Note: The original behavior of cmdlets from V1 is preserved in the additional cmdlets: &lt;code&gt;Install-AWSToolsModuleV1&lt;/code&gt;, &lt;code&gt;Uninstall-AWSToolsModuleV1&lt;/code&gt;, and &lt;code&gt;Update-AWSToolsModuleV1&lt;/code&gt;. An alias can be defined for each cmdlet so that all legacy scripts running on a system map back to the original functionality after upgrading to V2, for example:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Set-Alias -Name Install-AWSToolsModule -Value Install-AWSToolsModuleV1&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Getting Started&lt;/h2&gt; 
&lt;p&gt;To obtain the preview build of AWS Tools Installer V2 you can install from PSGallery using &lt;code&gt;Install-Module&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Install-Module -Name AWS.Tools.Installer -MinimumVersion 2.0.0 -AllowPrerelease&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Please note that this is a preview build and is not intended for use in production environments.&lt;/p&gt; 
&lt;h2&gt;Review Help Documentation&lt;/h2&gt; 
&lt;p&gt;Once installed, run the following PowerShell command to list AWS Tools Installer commands.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Get-Command -Module AWS.Tools.Installer&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Use the &lt;code&gt;Get-Help&lt;/code&gt; command to view the documentation for a particular AWS Tools Installer command. For example:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Get-Help Install-AWSToolsModule -Full&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Provide Feedback&lt;/h2&gt; 
&lt;p&gt;We’re looking forward to hearing what you think about the new version of AWS Tools Installer. Please share any feedback you have on the &lt;a href="https://github.com/aws/aws-tools-for-powershell/issues/411"&gt;AWS Tools Installer V2 Preview Tracker&lt;/a&gt; at the &lt;a href="https://github.com/aws/aws-tools-for-powershell"&gt;AWS Tools for PowerShell GitHub repository&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;GA Release Following Preview&lt;/h2&gt; 
&lt;p&gt;We are planning to release V2 for general availability once we’ve had an opportunity to collect feedback. To avoid breaking changes, please make sure that your production workloads are configured to NOT install major version updates:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-powershell"&gt;Install-Module -Name AWS.Tools.Installer -MaximumVersion '1.9.999'&lt;/code&gt;&lt;/pre&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Introducing Multipart Download Support for AWS SDK for .NET Transfer Manager</title>
		<link>https://aws.amazon.com/blogs/developer/introducing-multipart-download-support-for-aws-sdk-for-net-transfer-manager/</link>
					
		
		<dc:creator><![CDATA[Garrett Beatty]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 16:27:06 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS .NET Development]]></category>
		<category><![CDATA[AWS SDK for .NET]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[aws-sdk-for-net]]></category>
		<category><![CDATA[S3]]></category>
		<guid isPermaLink="false">0fa7c56e5c65c4812e45dd77717ac482012fa16c</guid>

					<description>The new multipart download support in AWS SDK for .NET Transfer Manager improves the performance of downloading large objects from Amazon Simple Storage Service (Amazon S3). Customers are looking for better performance and parallelization of their downloads, especially when working with large files or datasets. The AWS SDK for .NET Transfer Manager (version 4 only) […]</description>
										<content:encoded>&lt;p&gt;The new multipart download support in &lt;a href="https://aws.amazon.com/sdk-for-net/"&gt;AWS SDK for .NET&lt;/a&gt; Transfer Manager improves the performance of downloading large objects from &lt;a href="https://docs.aws.amazon.com/s3/"&gt;Amazon Simple Storage Service (Amazon S3)&lt;/a&gt;. Customers are looking for better performance and parallelization of their downloads, especially when working with large files or datasets. The AWS SDK for .NET Transfer Manager (&lt;a href="https://aws.amazon.com/blogs/developer/general-availability-of-aws-sdk-for-net-v4-0/"&gt;version 4 only&lt;/a&gt;) now delivers faster download speeds through automatic multipart coordination, eliminating the need for complex code to manage concurrent connections, handle retries, and coordinate multiple download streams.&lt;/p&gt; 
&lt;p&gt;In this post, we’ll show you how to configure and use these new multipart download capabilities, including downloading objects to files and streams, managing memory usage for large transfers, and migrating from existing download methods.&lt;/p&gt; 
&lt;h2&gt;Parallel download using part numbers and byte-ranges&lt;/h2&gt; 
&lt;p&gt;For download operations, the Transfer Manager now supports both &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance-guidelines.html"&gt;part number and byte-range fetches&lt;/a&gt;. Part number fetches download the object in parts, using the part number assigned to each object part during upload. Byte-range fetches download the object with byte ranges and work on all objects, regardless of whether they were originally uploaded using &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html"&gt;multipart upload&lt;/a&gt; or not. The transfer manager splits your &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html"&gt;GetObject&lt;/a&gt; request into multiple smaller requests, each of which retrieves a specific portion of the object. The transfer manager executes your requests through concurrent connections to Amazon S3.&lt;/p&gt; 
&lt;h2&gt;Choosing between part numbers and byte-range strategies&lt;/h2&gt; 
&lt;p&gt;Choose between part number and byte-range downloads based on your object’s structure. Part number downloads (the default) work best for objects uploaded with standard multipart upload part sizes. If the object is a non-multipart object, choose byte-range downloads. Range downloads enable greater parallelization when objects have large parts (for example, splitting a 5GB part into multiple 50MB range requests for concurrent transfer) and work with any S3 object regardless of how it was uploaded.&lt;/p&gt; 
&lt;p&gt;Keep in mind that smaller range sizes result in more S3 requests. Each API call incurs a cost beyond the data transfer itself, so balance parallelism benefits against the number of requests for your use case.&lt;/p&gt; 
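&lt;p&gt;As a rough, back-of-the-envelope sketch (illustrative Python, not part of the SDK), the number of ranged GET requests is just the object size divided by the range size, rounded up:&lt;/p&gt;

```python
import math

MIB = 1024 ** 2
GIB = 1024 ** 3

def range_request_count(object_size_bytes, range_size_bytes):
    """Number of ranged GET requests needed to download the whole object."""
    return math.ceil(object_size_bytes / range_size_bytes)

# A 5 GiB object split into 8 MiB ranges vs. 64 MiB ranges:
print(range_request_count(5 * GIB, 8 * MIB))   # 640 requests
print(range_request_count(5 * GIB, 64 * MIB))  # 80 requests
```

&lt;p&gt;Eight-times-larger ranges mean one-eighth the requests (and request charges), at the cost of less parallelism per object.&lt;/p&gt;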
&lt;p&gt;Now that you understand the download strategies, let’s set up your development environment.&lt;/p&gt; 
&lt;h2&gt;Getting started&lt;/h2&gt; 
&lt;p&gt;To get started with multipart downloads in the AWS SDK for .NET Transfer Manager, follow these steps:&lt;/p&gt; 
&lt;h3&gt;Add the dependency to your .NET project&lt;/h3&gt; 
&lt;p&gt;Update your project to use the latest AWS SDK for .NET:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;dotnet add package AWSSDK.S3 -v 4.0.17&amp;nbsp;&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Or add the PackageReference to your .csproj file:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;&amp;lt;PackageReference Include="AWSSDK.S3" Version="4.0.17" /&amp;gt;; &lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Initialize the Transfer Manager&lt;/h3&gt; 
&lt;p&gt;You can initialize a Transfer Manager with default settings for typical use cases:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;var s3Client = new AmazonS3Client(); 
var transferUtility = new TransferUtility(s3Client); &lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;You can customize the following options:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;// Create custom Transfer Manager configuration 
var config = new TransferUtilityConfig 
{ 
    ConcurrentServiceRequests = 20,  // Maximum number of concurrent HTTP requests 
    BufferSize = 8192  // Buffer size in bytes for file I/O and HTTP responses 
}; 
 
// Create Transfer Manager with custom configuration 
var transferUtility = new TransferUtility(s3Client, config); &lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Experiment with these values to find the optimal configuration for your use case. Factors like object size, available network bandwidth, and your application’s memory constraints will influence which settings work best. For more information about configuration options, please refer to the documentation on &lt;code&gt;TransferUtilityConfig&lt;/code&gt;.&lt;/p&gt; 
&lt;h3&gt;Download an object to file&lt;/h3&gt; 
&lt;p&gt;To download an object from an Amazon S3 bucket to a local file, use the &lt;code&gt;DownloadWithResponseAsync&lt;/code&gt; method. You must provide the source bucket, the S3 object key, and the destination file path.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;// Download large file with multipart support (Part GET strategy) 
var downloadResponse = await transferUtility.DownloadWithResponseAsync( 
    new TransferUtilityDownloadRequest 
    { 
        BucketName = "amzn-s3-demo-bucket", 
        Key = "large-dataset.zip", 
        FilePath = @"C:\downloads\large-dataset.zip", 
        MultipartDownloadType = MultipartDownloadType.PART  // Default - uses S3 part numbers 
    }); &lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;// Download using Range GET strategy (works with any S3 object) 
var downloadResponse = await transferUtility.DownloadWithResponseAsync( 
    new TransferUtilityDownloadRequest 
    { 
        BucketName = "amzn-s3-demo-bucket", 
        Key = "any-object.dat", 
        FilePath = @"C:\downloads\any-object.dat", 
        MultipartDownloadType = MultipartDownloadType.RANGE,  // Uses HTTP byte ranges 
        PartSize = 16 * 1024 * 1024  // 16MB parts (default is 8MB) 
    }); &lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Download an object to stream&lt;/h3&gt; 
&lt;p&gt;To download an object from Amazon S3 directly to a stream, use the &lt;code&gt;OpenStreamWithResponseAsync&lt;/code&gt; method. This is useful when you want to process data as it downloads without saving it to disk first. You must provide the source bucket and the S3 object key. The &lt;code&gt;OpenStreamWithResponseAsync&lt;/code&gt; method performs parallel downloads by buffering parts in memory until they are read from the stream. See the configuration options below for how to control memory consumption during buffering.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;// Stream large file with multipart coordination and memory control 
var streamResponse = await transferUtility.OpenStreamWithResponseAsync( 
    new TransferUtilityOpenStreamRequest 
    { 
        BucketName = "amzn-s3-demo-bucket", 
        Key = "large-video.mp4", 
        MaxInMemoryParts = 512,  // Maximum number of parts buffered in memory  (default is 1024)
                                  // Total memory = MaxInMemoryParts × PartSize 
        MultipartDownloadType = MultipartDownloadType.PART,  // Uses S3 part numbers 
        ChunkBufferSize = 64 * 1024  // Size of individual memory chunks (64KB) 
                                      // allocated from ArrayPool for buffering. (default is 64KB) 
    }); 
 
using var stream = streamResponse.ResponseStream; 
// Process stream data as it downloads concurrently 
var buffer = new byte[8192]; 
int bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length); &lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Memory management for streaming downloads&lt;/strong&gt;: The &lt;code&gt;MaxInMemoryParts&lt;/code&gt; parameter controls how many parts can be buffered simultaneously, and &lt;code&gt;ChunkBufferSize&lt;/code&gt; determines the size of individual memory chunks allocated for buffering. You can experiment with different values for both parameters to find the optimal configuration for your specific use case.&lt;/p&gt; 
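&lt;p&gt;As a rough illustration (Python pseudocode, not SDK code), the worst-case buffer footprint follows directly from the formula quoted in the snippet above (Total memory = MaxInMemoryParts × PartSize):&lt;/p&gt;

```python
MIB = 1024 ** 2

def max_buffered_bytes(max_in_memory_parts, part_size_bytes):
    """Upper bound on memory the streaming download may buffer at once."""
    return max_in_memory_parts * part_size_bytes

# Defaults quoted in this post: 1024 in-memory parts of 8 MiB each
print(max_buffered_bytes(1024, 8 * MIB) // MIB)  # 8192 MiB upper bound
# A tighter cap for memory-constrained hosts: 64 parts of 8 MiB
print(max_buffered_bytes(64, 8 * MIB) // MIB)    # 512 MiB upper bound
```

&lt;p&gt;If the consumer reads the stream quickly, actual usage stays well below this bound; the cap only matters when downloads outpace reads.&lt;/p&gt;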
&lt;h3&gt;Download a directory&lt;/h3&gt; 
&lt;p&gt;To download multiple objects from an S3 bucket prefix to a local directory, use the &lt;code&gt;DownloadDirectoryWithResponseAsync&lt;/code&gt; method. This method automatically applies multipart download to each individual object in the directory.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;// Download entire directory with multipart support for large files 
await transferUtility.DownloadDirectoryWithResponseAsync( 
    new TransferUtilityDownloadDirectoryRequest 
    { 
        BucketName = "amzn-s3-demo-bucket", 
        S3Directory = "datasets/", 
        LocalDirectory = @"C:\data\" 
    }); &lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Migration path&lt;/h2&gt; 
&lt;p&gt;The new &lt;code&gt;WithResponse&lt;/code&gt; methods provide both multipart performance and access to S3 response metadata. Here’s how to migrate your existing code:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;For file downloads:&amp;nbsp;&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;// Existing code (still works, but returns void) 
await transferUtility.DownloadAsync(downloadRequest); 
 
// Enhanced version (new capabilities + metadata access) 
var response = await transferUtility.DownloadWithResponseAsync(downloadRequest); 
Console.WriteLine($"Downloaded {response.ContentLength} bytes"); 
Console.WriteLine($"ETag: {response.ETag}"); &lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;For streaming downloads:&amp;nbsp;&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;// Before: direct Stream return 
using var stream = await transferUtility.OpenStreamAsync(streamRequest); 
 
// After: access ResponseStream from response object 
var response = await transferUtility.OpenStreamWithResponseAsync(streamRequest); 
using var stream = response.ResponseStream; 
Console.WriteLine($"Content-Type: {response.ContentType}"); 
Console.WriteLine($"Last Modified: {response.LastModified}"); &lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;The multipart download support in the AWS SDK for .NET Transfer Manager provides performance improvements for downloading large objects from Amazon S3. By using parallel byte-range or part-number fetches, you can reduce transfer times.&lt;/p&gt; 
&lt;p&gt;Key takeaways from this post:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Use &lt;code&gt;DownloadWithResponseAsync&lt;/code&gt; and &lt;code&gt;OpenStreamWithResponseAsync&lt;/code&gt; for downloads with automatic multipart coordination&lt;/li&gt; 
 &lt;li&gt;Choose between PART and RANGE download strategies based on your object’s structure&lt;/li&gt; 
 &lt;li&gt;Customize configuration settings based on your specific environment (memory, network bandwidth, etc.)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Next steps&lt;/strong&gt;: Try implementing multipart downloads in your applications and measure the performance improvements for your specific use cases.&lt;/p&gt; 
&lt;p&gt;To learn more about the AWS SDK for .NET Transfer Manager, visit the &lt;a href="https://docs.aws.amazon.com/sdk-for-net/"&gt;AWS SDK for .NET documentation&lt;/a&gt;. For questions or feedback about this feature, visit the &lt;a href="https://github.com/aws/aws-sdk-net/issues"&gt;GitHub issues&lt;/a&gt; page.&lt;/p&gt; 
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>CLI v1 Maintenance Mode Announcement</title>
		<link>https://aws.amazon.com/blogs/developer/cli-v1-maintenance-mode-announcement/</link>
					
		
		<dc:creator><![CDATA[Anna-Karin Salander]]></dc:creator>
		<pubDate>Thu, 15 Jan 2026 17:38:02 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<guid isPermaLink="false">e25635c44e24cfffe8b3a0d0f75fd7d02ac972fa</guid>

					<description>In alignment with our SDKs and Tools Maintenance Policy, version 1 of the AWS Command Line Interface (AWS CLI v1)&amp;nbsp;will enter maintenance mode on July 15, 2026 and reach end-of-support on July 15, 2027.</description>
										<content:encoded>&lt;p&gt;In alignment with our &lt;a href="https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html" target="_blank" rel="noopener noreferrer"&gt;SDKs and Tools Maintenance Policy&lt;/a&gt;, version 1 of the AWS Command Line Interface (&lt;a href="https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-welcome.html" target="_blank" rel="noopener noreferrer"&gt;AWS CLI v1&lt;/a&gt;)&amp;nbsp;will enter maintenance mode on July 15, 2026 and reach end-of-support on July 15, 2027.&lt;/p&gt; 
&lt;p&gt;Scripts and workflows written for AWS CLI v1 will continue working during this period, unless an AWS service makes fundamental changes. Such large changes are uncommon, and we will communicate them broadly. Between July 15, 2026 and end-of-support on July 15, 2027, AWS CLI v1 will receive only critical bug fixes and security updates. We will not add new features to AWS CLI v1, including changes to AWS CLI v1 itself, support for new AWS services, API updates for existing services, or expansions of regions and endpoints.&lt;/p&gt; 
&lt;p&gt;The following table outlines the level of support for each phase of the SDK lifecycle.&lt;/p&gt; 
&lt;table class="styled-table" border="1px" cellpadding="10px"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;SDK Lifecycle Phase&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;Start Date&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;End Date&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;&lt;strong&gt;Support Level&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;General Availability&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;11/19/2015&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;7/14/2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;During this phase, AWS CLI v1 is fully supported. AWS will provide regular releases that include support for new services, API updates for existing services, as well as bug and security fixes.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;Maintenance mode&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;7/15/2026&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;7/14/2027&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;AWS will limit releases to address critical bug fixes and security issues only. The&amp;nbsp;AWS CLI v1 will not receive API updates for new or existing services, or be updated to support new regions.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;End-of-support&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;7/15/2027&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;N/A&lt;/td&gt; 
   &lt;td style="padding: 10px;border: 1px solid #dddddd"&gt;AWS CLI v1&amp;nbsp;will no longer receive updates or releases. Previously published releases will continue to be available via public package managers and the code will remain on GitHub.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;We recommend that customers of AWS CLI v1 migrate to&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html" target="_blank" rel="noopener noreferrer"&gt;AWS CLI&amp;nbsp;v2&lt;/a&gt;. The AWS CLI v2 provides improved features, enhanced performance, and continued support from AWS. By adopting the latest version of the AWS CLI, developers ensure the security, compatibility, and stability of their solutions on AWS. Updating enables you to leverage the latest services and innovations from AWS. To learn more, refer to the following resources:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;The&amp;nbsp;&lt;a href="https://aws.amazon.com/cli/" target="_blank" rel="noopener noreferrer"&gt;AWS CLI&amp;nbsp;landing page &lt;/a&gt;contains links to the getting started guide, key features, examples, and links to additional resources.&lt;/li&gt; 
 &lt;li&gt;The&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html" target="_blank" rel="noopener noreferrer"&gt;Migration guide for the AWS CLI version 2&lt;/a&gt;&amp;nbsp;in the &lt;em&gt;AWS CLI User Guide&lt;/em&gt; explains the changes between the two versions and instructions for migrating.&lt;/li&gt; 
 &lt;li&gt;You can use CLI v1’s &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-upgrade-debug-mode.html" target="_blank" rel="noopener noreferrer"&gt;upgrade debug mode&lt;/a&gt;&amp;nbsp;introduced in 1.44.0 to highlight features that have changed from v1 to v2 when executing your commands or scripts, as well as the&amp;nbsp;AWS CLI v1-to-v2 Migration Tool to lint and upgrade scripts.&lt;/li&gt; 
 &lt;li&gt;The&amp;nbsp;&lt;a href="https://aws.amazon.com/blogs/developer/aws-cli-v2-is-now-generally-available/" target="_blank" rel="noopener noreferrer"&gt;AWS CLI Version 2 – General Availability&lt;/a&gt;&amp;nbsp;announcement post in the &lt;em&gt;AWS Developer Tools Blog&lt;/em&gt;&amp;nbsp;outlines the motivation for launching AWS CLI v2 and includes the benefits over AWS CLI v1.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Additional features of AWS CLI v2&lt;/h2&gt; 
&lt;p&gt;The AWS CLI v2 includes an embedded version of Python and no longer requires, or interacts with, your system-wide version of Python. A major theme for version 2 is its interactive features, such as&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-completion.html" target="_blank" rel="noopener noreferrer"&gt;server-side&amp;nbsp;command completion&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-prompting.html" target="_blank" rel="noopener noreferrer"&gt;auto-prompt&lt;/a&gt;, and &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-wizard.html" target="_blank" rel="noopener noreferrer"&gt;wizards&lt;/a&gt;&amp;nbsp;that guide you through constructing and running AWS CLI commands. V2 also supports &lt;a href="https://aws.amazon.com/blogs/security/simplified-developer-access-to-aws-with-aws-login/" target="_blank" rel="noopener noreferrer"&gt;simplified developer access with aws login&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html" target="_blank" rel="noopener noreferrer"&gt;AWS IAM Identity Center authentication&lt;/a&gt;, and new commands under&amp;nbsp;&lt;code&gt;aws configure&lt;/code&gt; for managing profiles and credentials.&lt;/p&gt; 
&lt;p&gt;While we have strived to keep the two versions of the AWS CLI compatible, some features introduced in version 2 have not been backported to version 1. The most notable are&amp;nbsp;&lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ddb/index.html" target="_blank" rel="noopener noreferrer"&gt;high-level DynamoDB commands&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-connect-methods.html#connect-linux-inst-eic-cli-ssh" target="_blank" rel="noopener noreferrer"&gt;EC2 Instance Connect with an SSH client&lt;/a&gt;, and the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs_LiveTail.html#CloudWatchLogs_LiveTail_session_CLI" target="_blank" rel="noopener noreferrer"&gt;interactive mode for CloudWatch Logs Live Tail&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Feedback&lt;/h2&gt; 
&lt;p&gt;If you need migration assistance or have feedback, reach out to your usual AWS support contacts. You can also open a &lt;a href="https://github.com/aws/aws-cli/discussions" target="_blank" rel="noopener noreferrer"&gt;discussion&lt;/a&gt; or &lt;a href="https://github.com/aws/aws-cli/issues" target="_blank" rel="noopener noreferrer"&gt;issue&lt;/a&gt; on GitHub. Thank you for using the AWS CLI!&lt;/p&gt; 
&lt;hr&gt; 
&lt;h3&gt;About the authors&lt;/h3&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>AWS SDK for JavaScript aligns with Node.js release schedule</title>
		<link>https://aws.amazon.com/blogs/developer/aws-sdk-for-javascript-aligns-with-node-js-release-schedule/</link>
					
		
		<dc:creator><![CDATA[Trivikram Kamat]]></dc:creator>
		<pubDate>Mon, 08 Dec 2025 17:32:10 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS SDK for JavaScript in Node.js]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[JavaScript]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[aws-sdk]]></category>
		<category><![CDATA[aws-sdk-js-v3]]></category>
		<category><![CDATA[Node.js]]></category>
		<category><![CDATA[SDK]]></category>
		<category><![CDATA[typescript]]></category>
		<guid isPermaLink="false">fab9c55059b3cd57a6ec0680b3d9920581d21409</guid>

					<description>This post is about AWS SDK for JavaScript v3 announcing end of support for Node.js versions based on Node.js release schedule, and it is not about AWS Lambda. For the latter, refer to the Lambda runtime deprecation policy. In the second week of January 2026, the AWS SDK for JavaScript v3 (JS SDK) will start […]</description>
										<content:encoded>&lt;blockquote&gt;
 &lt;p&gt;&lt;em&gt;This post is about the AWS SDK for JavaScript v3 ending support for Node.js versions based on the Node.js release schedule; it is not about AWS Lambda. For the latter, refer to the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html#runtime-support-policy"&gt;Lambda runtime deprecation policy&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;In the second week of January 2026, the &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/"&gt;AWS SDK for JavaScript v3&lt;/a&gt; (JS SDK) will start following the &lt;a href="https://github.com/nodejs/Release"&gt;Node.js release schedule&lt;/a&gt; for ending support for Node.js and ECMAScript versions. The JS SDK versions will be tested on all Long-Term Support (LTS) versions of Node.js, with an additional 8 months of support for the most recent end-of-life (EOL) version. When we drop support for a specific Node.js EOL version, we will also drop support for the equivalent ECMAScript version in browsers.&lt;/p&gt; 
&lt;p&gt;The Node.js release schedule states that LTS versions reach EOL in April, and there are three LTS versions supported at any point in time. For example, Node.js 18.x was supported until April 2025 (until Node.js 24.x was published), and Node.js 20.x will be supported until April 2026 (when the next LTS version of Node.js is published). In accordance with the &lt;a href="https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html#dep-life-cycle"&gt;AWS SDKs and Tools maintenance policy&lt;/a&gt; for language runtimes, the JS SDK will support each Node.js major version for an additional 8 months past the Node.js EOL date. (AWS reserves the right to drop support for unsupported Node.js versions earlier to address critical security issues.)&lt;/p&gt; 
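&lt;p&gt;As a worked example of this cadence (a JavaScript sketch for illustration only; &lt;code&gt;addMonths&lt;/code&gt; is a hypothetical helper, not part of the SDK): Node.js 20.x reaches EOL in April 2026, and adding 8 months lands in December 2026, so the first SDK releases after that window, in the second week of January 2027, are the ones that drop support.&lt;/p&gt;

```javascript
// Sketch of the end-of-support date math described above. The authoritative
// dates are in the table below and in the SDK release notes.
function addMonths(yearMonth, months) {
  const [year, month] = yearMonth.split("-").map(Number);
  const total = year * 12 + (month - 1) + months;
  return `${Math.floor(total / 12)}-${String((total % 12) + 1).padStart(2, "0")}`;
}

// Node.js 20.x EOL is April 2026; 8 months later is December 2026, so
// support ends with the first SDK releases of January 2027.
console.log(addMonths("2026-04", 8)); // "2026-12"
```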
&lt;p&gt;Along with dropping support for the Node.js EOL version, the JS SDK will also drop support for the equivalent ECMAScript version in browsers. The following table summarizes the upcoming dates.&lt;/p&gt; 
&lt;table style="border-collapse: collapse" border="1px solid black"&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th style="padding: 10px 5px"&gt;Node.js Version&lt;/th&gt; 
   &lt;th style="padding: 10px 5px"&gt;Release Date&lt;/th&gt; 
   &lt;th style="padding: 10px 5px"&gt;Node.js End of Life&lt;/th&gt; 
   &lt;th style="padding: 10px 5px"&gt;JS SDK End of Support&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px 5px"&gt;18.x&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;April 2022&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;April 2025&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;January 2026: Migrate to 20.x+ and ES2023+&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px 5px"&gt;20.x&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;April 2023&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;April 2026*&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;January 2027*: Migrate to 22.x+ and ES2024+&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px 5px"&gt;22.x&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;April 2024&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;April 2027*&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;January 2028*: Migrate to 24.x+ and ES2025+&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style="padding: 10px 5px"&gt;24.x&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;April 2025&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;April 2028*&lt;/td&gt; 
   &lt;td style="padding: 10px 5px"&gt;January 2029*: Migrate to 26.x+ and ES2026+&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;* &lt;em&gt;These dates are projected from the Node.js release schedule.&lt;/em&gt;&lt;/p&gt; 
&lt;h2&gt;Benefits of upgrading Node.js versions&lt;/h2&gt; 
&lt;p&gt;Your applications’ security depends on staying current with the JavaScript ecosystem. When Node.js versions reach EOL, they no longer receive security patches or bug fixes, exposing your applications to vulnerabilities.&lt;/p&gt; 
&lt;p&gt;By aligning with the Node.js release schedule, you can make sure you have access to the following:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;A secure, up-to-date runtime that protects your applications from known vulnerabilities.&lt;/li&gt; 
 &lt;li&gt;Ongoing performance improvements that keep your applications running optimally.&lt;/li&gt; 
 &lt;li&gt;Predictable support timelines to plan future upgrades without disruption.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;We strongly recommend upgrading to a supported Node.js version—preferably the latest LTS.&lt;/p&gt; 
&lt;h2&gt;What to expect&lt;/h2&gt; 
&lt;p&gt;If you’re using the latest JS SDK version with a Node.js LTS version that has reached EOL, the following message is shown when you create an instance of any client:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="language-js"&gt;// test.mjs or test.js with type:module
import { DynamoDB } from "@aws-sdk/client-dynamodb";

const client = new DynamoDB({});&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class="language-console"&gt;$ node test.mjs
...
NodeDeprecationWarning: The AWS SDK for JavaScript (v3) will
no longer support Node.js v18.20.8 in January 2026.

To continue receiving updates to AWS services, bug fixes, and security
updates please upgrade to supported version of Node.js.

More information can be found at: https://a.co/c895JFp
...
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;In the second week of January, the GitHub and npm releases for the JS SDK will contain release notes stating the end of support for the Node.js EOL version. The exact SDK version will be included in the same release notes.&lt;/p&gt; 
&lt;p&gt;If you are using a Node.js EOL version, installing later versions of the SDK will cause an engine deprecation warning to appear. If you have set &lt;code&gt;engine-strict=true&lt;/code&gt;, an npm installation error with code &lt;code&gt;ENOTSUP&lt;/code&gt; will occur as follows:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="language-console"&gt;$ node --version
v18.20.8

$ npm install @aws-sdk/client-s3
...
npm ERR! code ENOTSUP
npm ERR! notsup Unsupported engine for @aws-sdk/client-s3@&amp;lt;version&amp;gt;: wanted: {"node":"&amp;gt;=20.0.0"} (current: {"node":"18.20.8","npm":"10.8.2"})
...
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The JS SDK versions released in the second week of January may continue to work on the Node.js EOL version. This does not imply a continuation of support. You can continue to use older versions of the JS SDK released before the second week of January with the Node.js EOL version.&lt;/p&gt; 
&lt;p&gt;Along with dropping support for the Node.js EOL version, we will also drop support for the equivalent ECMAScript version in browsers. This doesn’t impact most applications because new versions of browsers are released at a much faster pace (usually every 4–6 weeks), and they are automatically updated. Also, most browser applications use bundlers, where the ECMAScript version is specified in the application bundler configuration and the bundler will transpile all dependencies to that target. For applications that need to support very old browsers, you must provide polyfills.&lt;/p&gt; 
&lt;h2&gt;Maintenance policies&lt;/h2&gt; 
&lt;p&gt;We followed the Node.js release schedule and the AWS SDKs and Tools maintenance policy to arrive at the end-of-support cadence for Node.js and the equivalent ECMAScript versions in browsers.&lt;/p&gt; 
&lt;h3&gt;Node.js Release Schedule&lt;/h3&gt; 
&lt;p&gt;Refer to the &lt;a href="https://github.com/nodejs/Release#release-schedule"&gt;Node.js release schedule&lt;/a&gt; for a complete list of Node.js versions and their maintenance status.&lt;/p&gt; 
&lt;p&gt;&lt;img loading="lazy" class="aligncenter wp-image-12028 size-large" src="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2025/12/02/nodejs-release-schedule-jul-2025-1-1024x535.png" alt="Node.js release schedule July 2025" width="1024" height="535"&gt;&lt;/p&gt; 
&lt;p&gt;New even-numbered versions (e.g., v20.x, v22.x, v24.x, and so on) are released in April, whereas odd-numbered versions (e.g., v21.x, v23.x) are released in October. When a new odd-numbered release is available, the previous even-numbered version transitions to LTS.&lt;/p&gt; 
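&lt;p&gt;The parity rule above can be expressed as a small helper (a JavaScript sketch for illustration; &lt;code&gt;releaseInfo&lt;/code&gt; is a hypothetical name, not part of Node.js or the SDK):&lt;/p&gt;

```javascript
// Illustrative sketch of the Node.js release parity rule described above:
// even-numbered majors (20, 22, 24, ...) are released in April and later
// transition to LTS; odd-numbered majors (21, 23, ...) are released in
// October and never become LTS.
function releaseInfo(major) {
  const even = major % 2 === 0;
  return {
    releasedIn: even ? "April" : "October",
    becomesLTS: even,
  };
}

console.log(releaseInfo(24)); // { releasedIn: 'April', becomesLTS: true }
console.log(releaseInfo(23)); // { releasedIn: 'October', becomesLTS: false }
```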
&lt;h3&gt;AWS SDKs and Tools&lt;/h3&gt; 
&lt;p&gt;For more information regarding maintenance and deprecation for AWS SDKs, see the &lt;a href="https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html"&gt;AWS SDKs and Tools maintenance policy&lt;/a&gt;. Our policy is to continue supporting SDK dependencies for at least 6 months after the community or vendor ends support for the dependency.&lt;/p&gt; 
&lt;h2&gt;Feedback&lt;/h2&gt; 
&lt;p&gt;Your feedback is greatly appreciated. You can engage with the AWS SDK for JavaScript team directly by opening a discussion or issue on our &lt;a href="https://github.com/aws/aws-sdk-js-v3/"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Introducing Amazon S3 Transfer Manager for Swift (Developer Preview)</title>
		<link>https://aws.amazon.com/blogs/developer/introducing-amazon-s3-transfer-manager-for-swift-developer-preview/</link>
					
		
		<dc:creator><![CDATA[Chan Yoo]]></dc:creator>
		<pubDate>Fri, 21 Nov 2025 21:02:48 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS SDK for Swift]]></category>
		<guid isPermaLink="false">cacea36791ff8b0308a7a115637924a6be1d78f2</guid>

					<description>We are pleased to announce the Developer Preview release of the Amazon S3 Transfer Manager&amp;nbsp;for Swift, a high-level file and directory transfer utility for 
&lt;a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener noreferrer"&gt;Amazon Simple Storage Service&lt;/a&gt; (Amazon S3) built with the 
&lt;a href="https://aws.amazon.com/sdk-for-swift" target="_blank" rel="noopener noreferrer"&gt;AWS SDK for Swift&lt;/a&gt;.</description>
										<content:encoded>&lt;p&gt;We are pleased to announce the Developer Preview release of the Amazon S3 Transfer Manager&amp;nbsp;for Swift —a high-level file and directory transfer utility for &lt;a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener noreferrer"&gt;Amazon Simple Storage Service&lt;/a&gt; (Amazon S3) built with the &lt;a href="https://aws.amazon.com/sdk-for-swift" target="_blank" rel="noopener noreferrer"&gt;AWS SDK for Swift&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;By using the S3 Transfer Manager API, you can now perform accelerated uploads of local files and directories to Amazon S3 and accelerated downloads of objects and buckets from Amazon S3. Transferring a single object as a set of smaller parts concurrently provides enhanced throughput and reliability. The S3 Transfer Manager is built on top of the AWS SDK for Swift and uses Amazon S3 multipart upload and byte-range or part-number fetches for parallel transfers. You can also track the progress of transfers in real time.&lt;/p&gt; 
&lt;p&gt;In this post we’ll explain how to use Amazon S3 Transfer Manager for Swift.&lt;/p&gt; 
&lt;h2&gt;Parallel upload using multipart upload&lt;/h2&gt; 
&lt;p&gt;For the upload operation, the Transfer Manager uses the Amazon S3 &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html" target="_blank" rel="noopener noreferrer"&gt;multipart upload API&lt;/a&gt;; it sends multiple &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html" target="_blank" rel="noopener noreferrer"&gt;UploadPart&lt;/a&gt; requests concurrently behind the scenes to achieve high performance.&lt;/p&gt; 
&lt;h2&gt;Parallel download using byte-ranges or part numbers&lt;/h2&gt; 
&lt;p&gt;For the download operation, the Transfer Manager utilizes &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/range-get-olap.html" target="_blank" rel="noopener noreferrer"&gt;byte-range fetches&lt;/a&gt; or part-number fetches. Byte-range fetches download the object by byte ranges and work on all objects, regardless of whether they were originally uploaded using &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html" target="_blank" rel="noopener noreferrer"&gt;multipart upload&lt;/a&gt;. Part-number fetches download the object in parts, using the part number assigned to each object part during upload. The Transfer Manager splits one &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html" target="_blank" rel="noopener noreferrer"&gt;GetObject&lt;/a&gt; request into multiple smaller requests, each of which retrieves a specific portion of the object. Those requests are also executed through concurrent connections to Amazon S3.&lt;/p&gt; 
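&lt;p&gt;To illustrate the splitting (a sketch in JavaScript rather than Swift, for illustration only; &lt;code&gt;byteRanges&lt;/code&gt; is a hypothetical helper, not part of the Transfer Manager API): a 20 MB object with an 8 MB part size yields three range requests.&lt;/p&gt;

```javascript
// Illustrative sketch of byte-range splitting: divide an object of
// `totalSize` bytes into HTTP Range header values of at most `partSize`
// bytes each. Byte ranges are inclusive on both ends.
function byteRanges(totalSize, partSize) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += partSize) {
    const end = Math.min(start + partSize, totalSize) - 1;
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}

const MB = 1024 * 1024;
console.log(byteRanges(20 * MB, 8 * MB));
// [ 'bytes=0-8388607', 'bytes=8388608-16777215', 'bytes=16777216-20971519' ]
```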
&lt;h2&gt;Getting started&lt;/h2&gt; 
&lt;p&gt;To get started with Amazon S3 Transfer Manager for Swift, complete the following steps:&lt;/p&gt; 
&lt;h3&gt;Add the dependency to your Xcode project&lt;/h3&gt; 
&lt;ol&gt; 
 &lt;li&gt;Open your project in Xcode and choose your &lt;code&gt;.xcodeproj&lt;/code&gt; file, located at the top of the file navigator on the left pane.&lt;/li&gt; 
 &lt;li&gt;Choose the project name that appears on the left pane of the &lt;code&gt;.xcodeproj&lt;/code&gt; file window.&lt;/li&gt; 
 &lt;li&gt;Choose the &lt;strong&gt;Package Dependencies&lt;/strong&gt; tab, and choose the &lt;strong&gt;+&lt;/strong&gt; button.&lt;/li&gt; 
 &lt;li&gt;In the &lt;strong&gt;Search or Enter Package URL&lt;/strong&gt; search bar, enter &lt;code&gt;git@github.com:aws/aws-sdk-swift-s3-transfer-manager.git&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;Wait for package to load, and once it’s loaded, choose the target you want to add the &lt;code&gt;S3TransferManager&lt;/code&gt; module to.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h3&gt;Add the dependency to your Swift package&lt;/h3&gt; 
&lt;ol&gt; 
 &lt;li&gt;Add the following to your package definition:&lt;/li&gt; 
&lt;/ol&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-code"&gt;dependencies: [
&amp;nbsp; &amp;nbsp; .package(
&amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;url: "https://github.com/aws/aws-sdk-swift-s3-transfer-manager.git",
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;from: "0.1.0"
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;)
],&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;ol start="2"&gt; 
 &lt;li&gt;Add an&amp;nbsp;&lt;code&gt;S3TransferManager&lt;/code&gt; module dependency to the target that needs it. For example:&lt;/li&gt; 
&lt;/ol&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-code"&gt;targets: [
&amp;nbsp; &amp;nbsp; .target(
&amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;name: "YourTargetThatUsesS3TM",
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;dependencies: [
&amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;.product(
&amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;name: "S3TransferManager",
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;package: "aws-sdk-swift-s3-transfer-manager"
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;)
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;]
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;)
]&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;h3&gt;Initialize the S3 Transfer Manager&lt;/h3&gt; 
&lt;p&gt;You can initialize an S3 Transfer Manager instance with all-default settings with the following:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-javascript"&gt;// Creates and uses default S3TM config &amp;amp; S3 client.
let s3tm = try await S3TransferManager()&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;Or you can pass a configuration object to the initializer to customize the S3 Transfer Manager instance, for example:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-javascript"&gt;// Create the custom S3 client config that you want S3TM to use.
let customS3ClientConfig = try S3Client.S3ClientConfiguration(
&amp;nbsp; &amp;nbsp; region: "us-west-2",
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;. . . custom S3 client configurations . . .
)

// Create the custom S3TM config with the S3 client config initialized above.
let s3tmConfig = try await S3TransferManagerConfig(
&amp;nbsp; &amp;nbsp; s3ClientConfig: customS3ClientConfig,
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;targetPartSizeBytes: 10 * 1024 * 1024, // 10MB part size.
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;multipartUploadThresholdBytes: 100 * 1024 * 1024, // 100MB threshold.
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;multipartDownloadType: .part
)

// Finally, create the S3TM using the custom S3TM config.
let s3tm = S3TransferManager(config: s3tmConfig)&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;For more information about what each configuration does, please refer to &lt;a href="https://github.com/aws/aws-sdk-swift-s3-transfer-manager/blob/main/Sources/S3TransferManager/S3TransferManagerConfig.swift" target="_blank" rel="noopener noreferrer"&gt;the documentation comments on S3TransferManagerConfig&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Upload an object&lt;/h3&gt; 
&lt;p&gt;To upload a file to an Amazon S3 bucket, you need to provide the input struct &lt;code&gt;UploadObjectInput&lt;/code&gt;, which contains a subset of &lt;code&gt;PutObjectInput&lt;/code&gt; struct properties and an array of transfer listeners. You must provide the destination bucket, the S3 object key to use, and the object body.&lt;/p&gt; 
&lt;p&gt;When the object being uploaded is bigger than the threshold configured by &lt;code&gt;multipartUploadThresholdBytes&lt;/code&gt; (16MB default), the S3 Transfer Manager breaks it down into parts, each with the part size configured by &lt;code&gt;targetPartSizeBytes&lt;/code&gt; (8MB default), and uploads them concurrently using the &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpu-process" target="_blank" rel="noopener noreferrer"&gt;multipart upload feature&lt;/a&gt; from Amazon S3.&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-javascript"&gt;// Construct UploadObjectInput.
let uploadObjectInput = UploadObjectInput(
&amp;nbsp;&amp;nbsp; &amp;nbsp;body: ByteStream.stream(
&amp;nbsp;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;FileStream(fileHandle: try FileHandle(forReadingFrom: URL(string: "file-to-upload.txt")!))
&amp;nbsp;&amp;nbsp; &amp;nbsp;),
&amp;nbsp;&amp;nbsp; &amp;nbsp;bucket: "destination-bucket",
&amp;nbsp;&amp;nbsp; &amp;nbsp;key: "some-key"
)

// Call .uploadObject and save the returned task.
let uploadObjectTask = try s3tm.uploadObject(input: uploadObjectInput)
let uploadObjectOutput = try await uploadObjectTask.value&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
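&lt;p&gt;The threshold logic described above can be sketched as follows (in JavaScript rather than Swift, for illustration only; &lt;code&gt;uploadPlan&lt;/code&gt; is a hypothetical helper using the documented defaults, not part of the Transfer Manager API):&lt;/p&gt;

```javascript
// Illustrative sketch of the multipart-upload decision: objects at or below
// the threshold (16 MB default) go up in a single request; larger objects
// are split into parts of the target part size (8 MB default).
const MB = 1024 * 1024;

function uploadPlan(objectSize, threshold = 16 * MB, partSize = 8 * MB) {
  if (objectSize <= threshold) {
    return { multipart: false, parts: 1 };
  }
  return { multipart: true, parts: Math.ceil(objectSize / partSize) };
}

console.log(uploadPlan(10 * MB));  // { multipart: false, parts: 1 }
console.log(uploadPlan(100 * MB)); // { multipart: true, parts: 13 }
```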
&lt;h3&gt;Download an object&lt;/h3&gt; 
&lt;p&gt;To download an object from an Amazon S3 bucket, you need to provide the input struct &lt;code&gt;DownloadObjectInput&lt;/code&gt;, which contains the download destination, a subset of &lt;code&gt;GetObjectInput&lt;/code&gt; struct properties, and an array of transfer listeners. The download destination is an instance of &lt;a href="https://developer.apple.com/documentation/foundation/outputstream" target="_blank" rel="noopener noreferrer"&gt;Swift’s Foundation.OutputStream&lt;/a&gt;. You must provide the download destination, the source bucket, and the S3 object key of the object to download.&lt;/p&gt; 
&lt;p&gt;When the object being downloaded is bigger than the size of a single part configured by &lt;code&gt;targetPartSizeBytes&lt;/code&gt; (8MB default), the S3 Transfer Manager downloads the object in parts concurrently using either part numbers or byte ranges as configured by &lt;code&gt;multipartDownloadType&lt;/code&gt; (&lt;code&gt;.part&lt;/code&gt; default).&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-javascript"&gt;// Construct DownloadObjectInput.
let downloadObjectInput = DownloadObjectInput(
&amp;nbsp;&amp;nbsp; &amp;nbsp;outputStream: OutputStream(toFileAtPath: "destination-file.txt", append: true)!,
&amp;nbsp;&amp;nbsp; &amp;nbsp;bucket: "source-bucket",
&amp;nbsp;&amp;nbsp; &amp;nbsp;key: "s3-object.txt"
)

// Call .downloadObject and save the returned task.
let downloadObjectTask = try s3tm.downloadObject(input: downloadObjectInput)
let downloadObjectOutput = try await downloadObjectTask.value&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;To learn more about how to use the Amazon S3 Transfer Manager for Swift including how to upload a directory, download a bucket, and track transfer progress, visit our &lt;a href="https://github.com/aws/aws-sdk-swift-s3-transfer-manager" target="_blank" rel="noopener noreferrer"&gt;README.md&lt;/a&gt;&amp;nbsp;on GitHub.&lt;/p&gt; 
&lt;p&gt;Try out the new S3 Transfer Manager today and let us know what you think via the &lt;a href="https://github.com/aws/aws-sdk-swift-s3-transfer-manager/issues" target="_blank" rel="noopener noreferrer"&gt;GitHub issues page&lt;/a&gt;!&lt;/p&gt; 
&lt;hr&gt; 
&lt;h3&gt;About the authors&lt;/h3&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>What’s New in the AWS Deploy Tool for .NET</title>
		<link>https://aws.amazon.com/blogs/developer/whats-new-in-the-aws-deploy-tool-for-net/</link>
					
		
		<dc:creator><![CDATA[Philippe El Asmar]]></dc:creator>
		<pubDate>Tue, 14 Oct 2025 13:25:42 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS .NET Development]]></category>
		<category><![CDATA[AWS SDK for .NET]]></category>
		<category><![CDATA[AWS Toolkit for Visual Studio]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[Visual Studio]]></category>
		<category><![CDATA[deploy]]></category>
		<category><![CDATA[deployment]]></category>
		<category><![CDATA[dotnet]]></category>
		<guid isPermaLink="false">07b4c30f7a263e83983e6268074d3ee12431e225</guid>

					<description>Version 2.0 of the AWS Deploy Tool for .NET is now available. This new major version introduces several foundational upgrades to improve the deployment experience for .NET applications on AWS. The tool comes with new minimum runtime requirements. We have upgraded it to require .NET 8 because the predecessor, .NET 6, is now out of […]</description>
										<content:encoded>&lt;p&gt;Version 2.0 of the &lt;a href="https://aws.github.io/aws-dotnet-deploy/"&gt;AWS Deploy Tool for .NET&lt;/a&gt; is now available. This new major version introduces several foundational upgrades to improve the deployment experience for .NET applications on AWS.&lt;/p&gt; 
&lt;p&gt;The tool comes with new minimum runtime requirements. We have upgraded it to require &lt;strong&gt;.NET 8&lt;/strong&gt; because the predecessor, .NET 6, is now out of official support from Microsoft. The tool also requires &lt;strong&gt;Node.js 18.x or later&lt;/strong&gt; because that is the new minimum version supported by the &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS Cloud Development Kit (CDK)&lt;/a&gt;, which the tool depends on.&lt;/p&gt; 
&lt;p&gt;Outside of these prerequisites, there are no other breaking changes to the tool’s commands or your existing deployment configurations. We expect a smooth upgrade for most users. Let’s get into the details.&lt;/p&gt; 
&lt;h2&gt;Breaking Changes&lt;/h2&gt; 
&lt;p&gt;This section details the mandatory changes required to use version 2.0.&lt;/p&gt; 
&lt;h3&gt;.NET 8 Runtime Requirement&lt;/h3&gt; 
&lt;p&gt;The AWS Deploy Tool for .NET is now built on .NET 8, replacing the previous .NET 6 runtime. As noted in the introduction, we made this change because .NET 6 is now out of official support from Microsoft.&lt;/p&gt; 
&lt;p&gt;To use this new version, you must have .NET 8 installed on your development machine. This mandatory upgrade ensures that the deploy tool itself remains on a secure, stable, and supported foundation for the future.&lt;/p&gt; 
&lt;h3&gt;Node.js 18 Prerequisite&lt;/h3&gt; 
&lt;p&gt;We also updated the minimum required Node.js version for the deploy tool to 18.x (from 14.x). This is necessary because Node.js 18 is the new minimum version for the CDK, which is one of the underlying dependencies for the deploy tool. Please ensure that you have Node.js 18 or later installed on your development machine.&lt;/p&gt; 
&lt;h2&gt;New Features and Key Updates&lt;/h2&gt; 
&lt;h3&gt;Container engine flexibility with support for Podman&lt;/h3&gt; 
&lt;p&gt;In addition to Docker, the deploy tool now includes support for &lt;a href="https://docs.podman.io/"&gt;Podman&lt;/a&gt; as a container engine. The deploy tool now automatically detects both Docker and Podman on your machine. To ensure a consistent experience for existing users, the tool defaults to Docker if it is running. If Docker is not running, the tool then checks for an available Podman installation and uses that as the container engine. This gives you more flexibility in your container workflow while maintaining predictable behavior.&lt;/p&gt; 
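&lt;p&gt;The detection order can be summarized with a small sketch (JavaScript for illustration only; the deploy tool itself is a .NET application, and &lt;code&gt;chooseEngine&lt;/code&gt; is a hypothetical name, not part of the tool):&lt;/p&gt;

```javascript
// Illustrative sketch of the container-engine selection order described
// above: prefer Docker when it is running, otherwise fall back to an
// available Podman installation.
function chooseEngine({ dockerRunning, podmanAvailable }) {
  if (dockerRunning) return "docker";
  if (podmanAvailable) return "podman";
  return null; // no usable container engine detected
}

console.log(chooseEngine({ dockerRunning: false, podmanAvailable: true })); // "podman"
```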
&lt;h3&gt;.NET 10 deployment support&lt;/h3&gt; 
&lt;p&gt;To ensure adoption of the latest .NET versions as they become available, this release adds support for deploying .NET 10 applications.&lt;/p&gt; 
&lt;p&gt;For deployment targets such as &lt;a href="https://aws.amazon.com/elasticbeanstalk/"&gt;AWS Elastic Beanstalk&lt;/a&gt; that might not have a native .NET 10 managed runtime at the time of its release, the deploy tool automatically publishes your project as a self-contained deployment bundle. This bundle includes the .NET 10 runtime and all necessary dependencies alongside your application code. This approach allows your .NET 10 application to run on the target environment without requiring a pre-installed runtime, providing a smooth path forward as you upgrade your projects.&lt;/p&gt; 
&lt;h3&gt;Other Notable Updates&lt;/h3&gt; 
&lt;p&gt;This release also includes other important foundational and dependency updates:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Optimized Dockerfile Generation:&lt;/strong&gt; When deploying to a container-based service such as &lt;a href="https://aws.amazon.com/ecs/"&gt;Amazon Elastic Container Service (Amazon ECS)&lt;/a&gt;, the deploy tool generates a Dockerfile if one doesn’t already exist. Previously, to run Single Page Applications (SPAs), the generated Dockerfile included steps to install Node.js in the container’s build stage. This is no longer the default behavior. By removing the Node.js installation from the build image, you will see improved container build times and a reduced number of dependencies to manage during the build process. If your application requires Node.js for its build (for example, an Angular or React frontend), you must now add the required installation steps to the generated Dockerfile.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Upgraded CLI Foundation:&lt;/strong&gt; The command-line handling library has been switched to &lt;a href="https://github.com/spectresystems/spectre.cli"&gt;Spectre.CLI&lt;/a&gt;. This provides the foundation for future improvements like interactive guided deployments and enhanced output formatting.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;AWS CDK:&lt;/strong&gt; We’ve updated the AWS Cloud Development Kit (CDK) library to version 2.194.0 and the CDK CLI to 2.1013.0.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;AWS SDK for .NET V4:&lt;/strong&gt; The tool now leverages version 4 of the &lt;a href="https://aws.amazon.com/blogs/developer/general-availability-of-aws-sdk-for-net-v4-0/"&gt;AWS SDK for .NET&lt;/a&gt;, bringing in the latest features in performance-optimized packages.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Microsoft Templating Engine:&lt;/strong&gt; We also updated the engine that powers our project recipes from .NET 5 to .NET 8, improving the reliability of the templating experience.&lt;/li&gt; 
&lt;/ul&gt; 
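&lt;p&gt;Returning to the Dockerfile change above: if your SPA build does require Node.js, you can reinstall it in the build stage of the generated Dockerfile. The following is a minimal sketch; the base image tag, the Node.js major version, and the stage name are illustrative, so adjust them to match the Dockerfile the deploy tool generates for your project:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-dockerfile"&gt;# Build stage (image tag and stage name shown here are examples)
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build

# Reinstall Node.js, which the generated Dockerfile no longer includes by default
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
 &amp;&amp; apt-get install -y nodejs&lt;/code&gt;&lt;/pre&gt; 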
&lt;h2&gt;How to Get the New Version&lt;/h2&gt; 
&lt;p&gt;Ready to get started? The new version is available for both .NET CLI and Visual Studio.&lt;/p&gt; 
&lt;h3&gt;For the .NET CLI:&lt;/h3&gt; 
&lt;p&gt;To update to the latest version, simply run the following command:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;dotnet tool update -g AWS.Deploy.Tools&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;If you’re a new user, use this command to install the tool:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;dotnet tool install -g AWS.Deploy.Tools&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;For Visual Studio:&lt;/h3&gt; 
&lt;p&gt;These deployment features are integrated into the &lt;a href="https://docs.aws.amazon.com/aws-toolkit-visual-studio"&gt;AWS Toolkit for Visual Studio&lt;/a&gt;. To get the latest updates:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Open Visual Studio.&lt;/li&gt; 
 &lt;li&gt;Go to &lt;strong&gt;Extensions&lt;/strong&gt; &amp;gt; &lt;strong&gt;Manage Extensions&lt;/strong&gt;.&lt;/li&gt; 
 &lt;li&gt;In the &lt;strong&gt;Updates&lt;/strong&gt; tab on the left pane, find the &lt;strong&gt;AWS Toolkit for Visual Studio&lt;/strong&gt; and choose &lt;strong&gt;Update&lt;/strong&gt;.&lt;/li&gt; 
 &lt;li&gt;You will need to close Visual Studio for the update to be installed.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;If you don’t already have the AWS Toolkit installed, see the installation &lt;a href="https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/setup.html"&gt;instructions&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;What’s Next?&lt;/h2&gt; 
&lt;p&gt;We will continue to expand the feature scope to make sure that deploying .NET applications to AWS is as easy as possible. Please install or upgrade to the latest version of this deployment tool (CLI or toolkit), try a few deployments, and let us know what you think by opening a GitHub issue.&lt;/p&gt; 
&lt;p&gt;To learn more, check out our &lt;a href="https://aws.github.io/aws-dotnet-deploy/"&gt;Developer guide&lt;/a&gt;. The .NET CLI tooling is open source and our &lt;a href="https://github.com/aws/aws-dotnet-deploy"&gt;GitHub repo&lt;/a&gt; is a great place to provide feedback. Bug reports and feature requests are welcome!&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>General Availability Release of the Migration Tool for the AWS SDK for Java 2.x</title>
		<link>https://aws.amazon.com/blogs/developer/general-availability-release-of-the-migration-tool-for-the-aws-sdk-for-java-2-x/</link>
					
		
		<dc:creator><![CDATA[David Ho]]></dc:creator>
		<pubDate>Fri, 26 Sep 2025 16:47:36 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS Java Development]]></category>
		<category><![CDATA[AWS SDK for Java]]></category>
		<category><![CDATA[Java]]></category>
		<category><![CDATA[aws-sdk-java-v2]]></category>
		<category><![CDATA[SDK]]></category>
		<guid isPermaLink="false">5b932d8f9f5f711680461a9a3015a7a28ea7b39a</guid>

					<description>The AWS SDK for Java 1.x (v1) entered maintenance mode on July 31, 2024, and will reach end-of-support on December 31, 2025. We recommend that you migrate to the AWS SDK for Java 2.x (v2) to access new features, enhanced performance, and continued support from AWS. To help you migrate efficiently, we’ve created a migration […]</description>
										<content:encoded>&lt;p&gt;The AWS SDK for Java 1.x (v1) &lt;a href="https://aws.amazon.com/blogs/developer/the-aws-sdk-for-java-1-x-is-in-maintenance-mode-effective-july-31-2024/"&gt;entered maintenance mode&lt;/a&gt; on July 31, 2024, and will reach end-of-support on December 31, 2025. We recommend that you migrate to the &lt;a href="https://github.com/aws/aws-sdk-java-v2"&gt;AWS SDK for Java 2.x&lt;/a&gt; (v2) to access new features, enhanced performance, and continued support from AWS. To help you migrate efficiently, we’ve created a migration tool that automates much of the transition process. This tool uses &lt;a href="https://docs.openrewrite.org/"&gt;OpenRewrite&lt;/a&gt;, an open source automated code refactoring tool, to upgrade supported 1.x code to 2.x code.&lt;/p&gt; 
&lt;p&gt;You can now transform code for all service SDK clients as well as the Amazon Simple Storage Service (S3) &lt;code&gt;&lt;a href="https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html"&gt;TransferManager&lt;/a&gt;&lt;/code&gt; high-level library. The migration tool doesn’t support transforms for other high-level APIs such as &lt;code&gt;&lt;a href="https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/datamodeling/DynamoDBMapper.html"&gt;DynamoDBMapper&lt;/a&gt;&lt;/code&gt;. These unsupported transforms require manual migration. For assistance with migration for those features, check out our &lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration.html"&gt;migration guide&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;In this blog post, we demonstrate the convenience of using the migration tool to begin migrating your application to 2.x, and call out limitations you may run into.&lt;/p&gt; 
&lt;h2&gt;Getting started&lt;/h2&gt; 
&lt;h3&gt;Maven project&lt;/h3&gt; 
&lt;p&gt;For a Maven project, we will use the &lt;a href="https://docs.openrewrite.org/reference/rewrite-maven-plugin"&gt;OpenRewrite Maven plugin&lt;/a&gt;.&lt;/p&gt; 
&lt;h4&gt;Step 1: navigate to your project directory&lt;/h4&gt; 
&lt;p&gt;Open a terminal (command line) window and go to the root directory of your application.&lt;/p&gt; 
&lt;h4&gt;Step 2: run the &lt;code&gt;rewrite&lt;/code&gt; command&lt;/h4&gt; 
&lt;p&gt;There are two modes you can choose: &lt;code&gt;dryRun&lt;/code&gt; and &lt;code&gt;run&lt;/code&gt;.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;dryRun&lt;/code&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;In this mode, the plugin generates diff logs in the console and a patch file &lt;code&gt;rewrite.patch&lt;/code&gt; in the &lt;code&gt;target/rewrite&lt;/code&gt; folder. This mode does not modify the source code, so it is helpful to preview the changes that would be made.&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;mvn org.openrewrite.maven:rewrite-maven-plugin:&lt;strong&gt;${maven-plugin-version}*&lt;/strong&gt;:dryRun \
  -Drewrite.recipeArtifactCoordinates=software.amazon.awssdk:v2-migration:&lt;strong&gt;${sdkversion}**&lt;/strong&gt; \
  -Drewrite.activeRecipes=software.amazon.awssdk.v2migration.AwsSdkJavaV1ToV2&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;*&lt;/strong&gt;Replace &lt;code&gt;${maven-plugin-version}&lt;/code&gt; with the latest SDK-tested version (&lt;code&gt;6.17.0&lt;/code&gt; at the time of GA release), as specified in the &lt;a href="https://github.com/aws/aws-sdk-java-v2/blob/master/test/v2-migration-tests/src/test/java/software/amazon/awssdk/v2migrationtests/MavenTestBase.java#L54"&gt;SDK Maven test configuration&lt;/a&gt;.&lt;br&gt; &lt;strong&gt;**&lt;/strong&gt;Replace &lt;code&gt;${sdkversion}&lt;/code&gt; with SDK version &lt;code&gt;2.34.0&lt;/code&gt; or newer. See &lt;a href="https://central.sonatype.com/artifact/software.amazon.awssdk/v2-migration"&gt;Maven Central&lt;/a&gt; to find the latest version.&lt;/p&gt; 
&lt;p&gt;Your output will resemble the following:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;div class="hide-language"&gt; 
  &lt;pre class="unlimited-height-code"&gt;&lt;code class="lang-html"&gt;[WARNING] These recipes would make changes to project/src/test/resources/maven/before/pom.xml:
[WARNING]     software.amazon.awssdk.v2migration.AwsSdkJavaV1ToV2
[WARNING]         software.amazon.awssdk.v2migration.UpgradeSdkDependencies
[WARNING]             org.openrewrite.java.dependencies.AddDependency: {groupId=software.amazon.awssdk, artifactId=apache-client, version=2.27.0, onlyIfUsing=com.amazonaws.ClientConfiguration}
[WARNING]             org.openrewrite.java.dependencies.AddDependency: {groupId=software.amazon.awssdk, artifactId=netty-nio-client, version=2.27.0, onlyIfUsing=com.amazonaws.ClientConfiguration}
[WARNING]             org.openrewrite.java.dependencies.ChangeDependency: {oldGroupId=com.amazonaws, oldArtifactId=aws-java-sdk-bom, newGroupId=software.amazon.awssdk, newArtifactId=bom, newVersion=2.27.0}
[WARNING]             org.openrewrite.java.dependencies.ChangeDependency: {oldGroupId=com.amazonaws, oldArtifactId=aws-java-sdk-s3, newGroupId=software.amazon.awssdk, newArtifactId=s3, newVersion=2.27.0}
[WARNING]             org.openrewrite.java.dependencies.ChangeDependency: {oldGroupId=com.amazonaws, oldArtifactId=aws-java-sdk-sqs, newGroupId=software.amazon.awssdk, newArtifactId=sqs, newVersion=2.27.0}
[WARNING] These recipes would make changes to project/src/test/resources/maven/before/src/main/java/foo/bar/Application.java:
[WARNING]     software.amazon.awssdk.v2migration.AwsSdkJavaV1ToV2
[WARNING]         software.amazon.awssdk.v2migration.S3GetObjectConstructorToFluent
[WARNING]             software.amazon.awssdk.v2migration.ConstructorToFluent
[WARNING]         software.amazon.awssdk.v2migration.S3StreamingResponseToV2
[WARNING]         software.amazon.awssdk.v2migration.ChangeSdkType
[WARNING]         software.amazon.awssdk.v2migration.ChangeSdkCoreTypes
[WARNING]             software.amazon.awssdk.v2migration.ChangeExceptionTypes
[WARNING]                 org.openrewrite.java.ChangeType: {oldFullyQualifiedTypeName=com.amazonaws.AmazonClientException, newFullyQualifiedTypeName=software.amazon.awssdk.core.exception.SdkException}
[WARNING]                 org.openrewrite.java.ChangeMethodName: {methodPattern=com.amazonaws.AmazonServiceException getRequestId(), newMethodName=requestId}
[WARNING]                 org.openrewrite.java.ChangeMethodName: {methodPattern=com.amazonaws.AmazonServiceException getErrorCode(), newMethodName=awsErrorDetails().errorCode}
[WARNING]                 org.openrewrite.java.ChangeMethodName: {methodPattern=com.amazonaws.AmazonServiceException getServiceName(), newMethodName=awsErrorDetails().serviceName}
[WARNING]                 org.openrewrite.java.ChangeMethodName: {methodPattern=com.amazonaws.AmazonServiceException getErrorMessage(), newMethodName=awsErrorDetails().errorMessage}
[WARNING]                 org.openrewrite.java.ChangeMethodName: {methodPattern=com.amazonaws.AmazonServiceException getRawResponse(), newMethodName=awsErrorDetails().rawResponse().asByteArray}
[WARNING]                 org.openrewrite.java.ChangeMethodName: {methodPattern=com.amazonaws.AmazonServiceException getRawResponseContent(), newMethodName=awsErrorDetails().rawResponse().asUtf8String}
[WARNING]                 org.openrewrite.java.ChangeType: {oldFullyQualifiedTypeName=com.amazonaws.AmazonServiceException, newFullyQualifiedTypeName=software.amazon.awssdk.awscore.exception.AwsServiceException}
[WARNING]         software.amazon.awssdk.v2migration.NewClassToBuilderPattern
[WARNING]             software.amazon.awssdk.v2migration.NewClassToBuilder
[WARNING]             software.amazon.awssdk.v2migration.V1SetterToV2
[WARNING]         software.amazon.awssdk.v2migration.V1GetterToV2
...
[WARNING]         software.amazon.awssdk.v2migration.V1BuilderVariationsToV2Builder
[WARNING]         software.amazon.awssdk.v2migration.NewClassToBuilderPattern
[WARNING]             software.amazon.awssdk.v2migration.NewClassToBuilder
[WARNING]             software.amazon.awssdk.v2migration.V1SetterToV2
[WARNING]         software.amazon.awssdk.v2migration.HttpSettingsToHttpClient
[WARNING]         software.amazon.awssdk.v2migration.WrapSdkClientBuilderRegionStr
[WARNING] Patch file available:
[WARNING]     project/src/test/resources/maven/before/target/rewrite/rewrite.patch
[WARNING] Estimate time saved: 20m
[WARNING] Run 'mvn rewrite:run' to apply the recipes.&lt;/code&gt;&lt;/pre&gt; 
 &lt;/div&gt; 
&lt;/div&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;run&lt;/code&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;In this mode, the plugin modifies the source code on disk to apply the changes directly. Make sure you have a backup of the source code before running the command.&lt;/p&gt; 
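&lt;p&gt;If your project is under Git, a dedicated branch is a simple way to keep a restore point before applying the recipes (the branch name here is only an example):&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;# Start from a clean working tree so the rewrite changes are easy to review
git status
git checkout -b sdk-v2-migration&lt;/code&gt;&lt;/pre&gt; 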
&lt;pre&gt;&lt;code class="lang-bash"&gt;mvn org.openrewrite.maven:rewrite-maven-plugin:&lt;strong&gt;6.17.0&lt;/strong&gt;:run \
-Drewrite.recipeArtifactCoordinates=software.amazon.awssdk:v2-migration:&lt;strong&gt;2.34.0&lt;/strong&gt; \
-Drewrite.activeRecipes=software.amazon.awssdk.v2migration.AwsSdkJavaV1ToV2&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;After you run the command, compile your application and run tests to verify the changes. Make manual changes as necessary for unsupported transforms. See the Limitations section below for further details.&lt;/p&gt; 
&lt;h3&gt;Gradle project&lt;/h3&gt; 
&lt;p&gt;For a Gradle project, we will use the &lt;a href="https://docs.openrewrite.org/reference/gradle-plugin-configuration"&gt;OpenRewrite Gradle plugin&lt;/a&gt;.&lt;/p&gt; 
&lt;h4&gt;Step 1: go to the project directory&lt;/h4&gt; 
&lt;p&gt;Open a terminal (command line) window and go to the root directory of your application.&lt;/p&gt; 
&lt;h4&gt;Step 2: create a Gradle init script&lt;/h4&gt; 
&lt;p&gt;Create an &lt;code&gt;init.gradle&lt;/code&gt; file with the following content in your root project directory:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;div class="hide-language"&gt; 
  &lt;pre class="unlimited-height-code"&gt;&lt;code class="lang-html"&gt;initscript {
    repositories {
        maven { url "https://plugins.gradle.org/m2" }
    }
    dependencies {
        classpath("org.openrewrite:plugin:&lt;strong&gt;${gradle-plugin-version}*&lt;/strong&gt;")
    }
}

rootProject {
    plugins.apply(org.openrewrite.gradle.RewritePlugin)
    dependencies {
        rewrite("software.amazon.awssdk:v2-migration:&lt;strong&gt;${sdkversion}&lt;/strong&gt;")
    }

    afterEvaluate {
        if (repositories.isEmpty()) {
            repositories {
                mavenCentral()
            }
        }
    }
}&lt;/code&gt;&lt;/pre&gt; 
 &lt;/div&gt; 
&lt;/div&gt; 
&lt;p&gt;*Replace &lt;code&gt;${gradle-plugin-version}&lt;/code&gt; with the latest SDK-tested version (&lt;code&gt;7.15.0&lt;/code&gt; at the time of GA release), as specified in the &lt;a href="https://github.com/aws/aws-sdk-java-v2/blob/master/test/v2-migration-tests/src/test/resources/software/amazon/awssdk/v2migrationtests/gradle/before/init.gradle#L6"&gt;SDK Gradle test configuration&lt;/a&gt;.&lt;/p&gt; 
&lt;h4&gt;Step 3: run the rewrite command&lt;/h4&gt; 
&lt;p&gt;As with the Maven plugin, you can perform &lt;code&gt;dryRun&lt;/code&gt; or &lt;code&gt;run&lt;/code&gt;.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;dryRun&lt;/code&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;gradle rewriteDryRun --init-script init.gradle \
  -Drewrite.activeRecipes=software.amazon.awssdk.v2migration.AwsSdkJavaV1ToV2&lt;/code&gt;&lt;/pre&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;run&lt;/code&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;pre&gt;&lt;code class="lang-baash"&gt;gradle rewriteRun --init-script init.gradle \
  -Drewrite.activeRecipes=software.amazon.awssdk.v2migration.AwsSdkJavaV1ToV2&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Limitations&lt;/h2&gt; 
&lt;p&gt;While the majority of v1 code is supported by recipes that transform to the v2 equivalent, there are &lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-tool.html#migration-tool-limitations"&gt;some classes and methods not covered by the migration tool&lt;/a&gt;. You should carefully review the transformed code after the tool applies the recipes. If your code does not compile after running the tool, follow the &lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-steps.html"&gt;Step-by-step instructions&lt;/a&gt; to manually migrate the remaining v1 code.&lt;/p&gt; 
&lt;h3&gt;Unsupported features&lt;/h3&gt; 
&lt;p&gt;The following features are not covered by the migration tool and require manual migration:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-ddb-mapper.html"&gt;Amazon DynamoDB object mapper&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-s3-uri-parser.html"&gt;S3 URI Parsing&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-imds.html"&gt;Amazon Elastic Compute Cloud (EC2) metadata utility&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-waiters.html"&gt;Waiters&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-iam-policy-builder.html"&gt;AWS Identity and Access Management (IAM) Policy Builder&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-cloudfront-presigning.html"&gt;Amazon CloudFront presigning&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/metrics.html"&gt;SDK metric publishing&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Partially supported S3 transforms&lt;/h3&gt; 
&lt;p&gt;The following S3 components are partially covered by the migration tool and may require manual migration:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-s3-client.html"&gt;S3 client&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-s3-transfer-manager.html"&gt;S3 Transfer Manager&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-s3-event-notification.html"&gt;S3 Event Notifications&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Unsupported code patterns&lt;/h3&gt; 
&lt;p&gt;There are &lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-tool-unsupported-patterns.html"&gt;common code patterns that the migration tool does not support&lt;/a&gt;, such as request object constructors with parameters, service client methods with individual parameters, request timeout methods, and service client constructors with parameters.&lt;/p&gt; 
&lt;h4&gt;Example: Service client constructors with parameters&lt;/h4&gt; 
&lt;p&gt;Empty service client constructors will be transformed to the v2 equivalent by the migration tool. However, service client constructors with parameters are not covered by the migration tool and require manual migration.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Before (original v1 code):&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;AWSCredentials awsCredentials = new BasicAWSCredentials("akid", "skid");
AmazonSQS sqs = new AmazonSQSClient(awsCredentials);&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;After (migration tool run):&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;AwsCredentials awsCredentials = AwsBasicCredentials.create("akid", "skid");
// Will not compile.
SqsClient sqs = new SqsClient(awsCredentials);&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;In this example, the majority of the v1 code (including imports) is correctly transformed to v2, with the exception of the &lt;code&gt;SqsClient&lt;/code&gt; constructor, which requires manual updates to a &lt;code&gt;Builder&lt;/code&gt; as shown in the following:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;AwsCredentials awsCredentials = AwsBasicCredentials.create("akid", "skid");
// Proper v2 code - manually update.
SqsClient sqs = SqsClient.builder()
        .credentialsProvider(StaticCredentialsProvider.create(awsCredentials))
        .build();&lt;/code&gt;&lt;/pre&gt; 
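&lt;p&gt;Request object constructors with parameters follow a similar pattern. As an illustration (this example is not produced by the tool, and the bucket and key values are placeholders), a v1 request built with constructor arguments must be manually rewritten to the v2 builder style:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;// v1: request constructor with parameters - not transformed by the tool
GetObjectRequest v1Request = new GetObjectRequest("amzn-s3-demo-bucket", "my-key");

// v2: equivalent request using the builder pattern - manual rewrite
GetObjectRequest v2Request = GetObjectRequest.builder()
        .bucket("amzn-s3-demo-bucket")
        .key("my-key")
        .build();&lt;/code&gt;&lt;/pre&gt; 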
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this blog post, we showed you how to get started with the migration tool and discussed its limitations. To learn more, visit our &lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration-tool.html"&gt;Developer Guide&lt;/a&gt;. We would love to hear your feedback! You can reach out to us by creating a &lt;a href="https://github.com/aws/aws-sdk-java-v2/issues/new/choose"&gt;GitHub issue&lt;/a&gt;.&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>Preview Release of the AWS SDK Java 2.x HTTP Client built on Apache HttpClient 5.5.x</title>
		<link>https://aws.amazon.com/blogs/developer/preview-release-of-theaws-sdk-java-2-x-http-client-built-on-apache-httpclient-5-5-x/</link>
					
		
		<dc:creator><![CDATA[John Viegas]]></dc:creator>
		<pubDate>Fri, 18 Jul 2025 03:36:05 +0000</pubDate>
				<category><![CDATA[Advanced (300)]]></category>
		<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS SDK for Java]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<guid isPermaLink="false">b125e79928709160016cf2c573fca8515b4a2788</guid>

					<description>The AWS SDK for Java 2.x introduces the Apache 5 SDK HTTP client which is built on Apache HttpClient 5.5.x. This new SDK HTTP client is available alongside our existing SDK HTTP clients: Apache HttpClient 4.5.x, Netty, URL Connection, and AWS CRT HttpClient. To differentiate the use of Apache HttpClient 4.5.x and Apache HttpClient 5.5.x, […]</description>
										<content:encoded>&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/home.html"&gt;AWS SDK for Java 2.x&lt;/a&gt; introduces the Apache 5 SDK HTTP client which is built on &lt;a href="https://hc.apache.org/httpcomponents-client-5.4.x/"&gt;Apache HttpClient 5.5.x&lt;/a&gt;. This new SDK HTTP client is available alongside our existing SDK HTTP clients: Apache HttpClient 4.5.x, Netty, URL Connection, and AWS CRT HttpClient. To differentiate the use of Apache HttpClient 4.5.x and Apache HttpClient 5.5.x, we introduced a new Apache 5 Maven package and classes for this release.&lt;/p&gt; 
&lt;h2&gt;What’s new&lt;/h2&gt; 
&lt;p&gt;Similar to our existing Apache 4.5.x based client, this new client supports only synchronous operations. This client implementation uses Apache HttpClient 5.5.x, bringing the following key improvements:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;modern Java ecosystem compatibility including virtual thread support for Java 21&lt;/li&gt; 
 &lt;li&gt;active maintenance with regular security updates&lt;/li&gt; 
 &lt;li&gt;enhanced logging flexibility through SLF4J&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;These improvements address critical pain points with the Apache HttpClient 4.5.x based client. Specifically, the move to Apache 5 addresses the 4.5.x client’s transition to maintenance mode, problematic JCL logging dependencies that trigger security tool alerts, and limited modern Java support.&lt;/p&gt; 
&lt;h2&gt;Getting started&lt;/h2&gt; 
&lt;h3&gt;Add the Apache 5 client dependency&lt;/h3&gt; 
&lt;p&gt;To begin using the Apache 5 HTTP client implementation, add the following dependency to your project:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-xml"&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;software.amazon.awssdk&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;apache5-client&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;2.32.0-PREVIEW&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Configure your AWS service client&lt;/h3&gt; 
&lt;p&gt;You can easily configure any AWS service client to use the new AWS SDK Apache 5 HTTP client:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;S3Client s3Client = S3Client.builder()
    .httpClient(Apache5HttpClient.create())
    .build();&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Advanced configuration example&lt;/h3&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;Apache5HttpClient httpClient = Apache5HttpClient.builder()
    .connectionTimeout(Duration.ofSeconds(30))
    .maxConnections(100)
    .build();

DynamoDbClient dynamoDbClient = DynamoDbClient.builder()
    .httpClient(httpClient)
    .build();&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Migrate from Apache 4.5.x&lt;/h2&gt; 
&lt;p&gt;If you’re currently using the default Apache HTTP Client (4.5.x), migration is straightforward – add the new dependency and update your client configuration with an Apache 5 HTTP client as shown above. The API remains consistent with other HTTP client implementations in the SDK. Note that like the existing Apache 4.5.x HTTP client, this implementation supports synchronous service clients only.&lt;/p&gt; 
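&lt;p&gt;As a minimal sketch of the change (the timeout value is illustrative), an explicitly configured Apache 4.5.x client maps almost one-to-one to the Apache 5 builder:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-java"&gt;// Before: Apache HttpClient 4.5.x based SDK HTTP client
S3Client s3WithApache4 = S3Client.builder()
    .httpClient(ApacheHttpClient.builder()
        .connectionTimeout(Duration.ofSeconds(30))
        .build())
    .build();

// After: Apache HttpClient 5.5.x based SDK HTTP client
S3Client s3WithApache5 = S3Client.builder()
    .httpClient(Apache5HttpClient.builder()
        .connectionTimeout(Duration.ofSeconds(30))
        .build())
    .build();&lt;/code&gt;&lt;/pre&gt; 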
&lt;h2&gt;Developer Preview release&lt;/h2&gt; 
&lt;p&gt;This is a Developer Preview release intended for evaluation and feedback purposes. While we tested the implementation, we recommend using it in development and testing environments before deploying to production. Your feedback during this preview period is invaluable in helping us refine the implementation before general availability.&lt;/p&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this blog post, we showed you how to get started with the new Apache 5 HTTP client in the AWS SDK for Java 2.x, which uses Apache HttpClient 5.5.x. We want your feedback on this new HTTP client during the Developer Preview phase. Please share your experience and any feature requests by opening a &lt;a href="https://github.com/aws/aws-sdk-java-v2/issues"&gt;GitHub issue.&lt;/a&gt;&lt;/p&gt;</content:encoded>
					
					
			
		
		
			</item>
		<item>
		<title>AWS .NET Distributed Cache Provider for Amazon DynamoDB now Generally Available</title>
		<link>https://aws.amazon.com/blogs/developer/aws-net-distributed-cache-provider-for-amazon-dynamodb-now-generally-available/</link>
					
		
		<dc:creator><![CDATA[Garrett Beatty]]></dc:creator>
		<pubDate>Thu, 03 Jul 2025 13:49:25 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[Advanced (300)]]></category>
		<category><![CDATA[Announcements]]></category>
		<category><![CDATA[AWS .NET Development]]></category>
		<category><![CDATA[AWS SDK for .NET]]></category>
		<category><![CDATA[Developer Tools]]></category>
		<category><![CDATA[dotnet]]></category>
		<guid isPermaLink="false">c287f39bf72f95090df03c4f6d93c1b149223380</guid>

					<description>Today, we are excited to announce the general availability of the AWS .NET Distributed Cache Provider for Amazon DynamoDB. This is a seamless, serverless caching solution that enables .NET developers to efficiently manage their caching needs across distributed systems. Consistent caching is a difficult problem in distributed architectures, where maintaining data integrity and performance across […]</description>
										<content:encoded>&lt;p&gt;Today, we are excited to announce the general availability of the &lt;a href="https://github.com/aws/aws-dotnet-distributed-cache-provider"&gt;AWS .NET Distributed Cache Provider&lt;/a&gt; for &lt;a href="https://aws.amazon.com/dynamodb/"&gt;Amazon DynamoDB&lt;/a&gt;. This is a seamless, serverless caching solution that enables .NET developers to efficiently manage their caching needs across distributed systems.&lt;/p&gt; 
&lt;p&gt;Consistent caching is a difficult problem in distributed architectures, where maintaining data integrity and performance across multiple application instances can be complex. The AWS .NET Distributed Cache Provider for Amazon DynamoDB addresses this challenge by leveraging the robust and globally distributed infrastructure of DynamoDB to provide a reliable and scalable caching mechanism.&lt;/p&gt; 
&lt;h2&gt;What is the AWS .NET Distributed Cache Provider for DynamoDB?&lt;/h2&gt; 
&lt;p&gt;This provider implements the ASP.NET Core &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/performance/caching/distributed?view=aspnetcore-9.0#idistributedcache-interface"&gt;IDistributedCache&lt;/a&gt; interface, letting you integrate the fully managed and durable infrastructure of DynamoDB into your caching layer with minimal code changes. A distributed cache can improve the performance and scalability of an ASP.NET Core app, especially when the app is hosted by a cloud service or a server farm.&lt;/p&gt; 
&lt;h2&gt;Using the Distributed Cache Provider&lt;/h2&gt; 
&lt;p&gt;Consider a hypothetical ASP.NET Core web application that displays a local weather forecast. Generating the forecast might be computationally expensive relative to rendering the page, so you want to cache the current forecast for 24 hours and share the cached forecast across multiple servers that host your application.&lt;/p&gt; 
&lt;p&gt;To get started, install the &lt;a href="https://www.nuget.org/packages/AWS.AspNetCore.DistributedCacheProvider/"&gt;AWS.AspNetCore.DistributedCacheProvider&lt;/a&gt; package from NuGet.org. Then configure the cache provider in your &lt;code&gt;Program.cs&lt;/code&gt; file:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;builder.Services.AddAWSDynamoDBDistributedCache(options =&amp;gt;
{
options.TableName = "weather_cache";
options.PartitionKeyName = "id";
options.TTLAttributeName = "cache_ttl";
});&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;In your webpage, leverage the injected &lt;code&gt;IDistributedCache&lt;/code&gt; interface to retrieve an existing cached weather forecast. If no cached forecast exists, the application generates a new forecast and stores it in the cache for future use.&lt;/p&gt; 
&lt;pre class="unlimited-height-code"&gt;&lt;code class="lang-csharp"&gt;
// WeatherForecast.razor
@page "/weatherforecast"
@inject IDistributedCache DistributedCache

&amp;lt;h1&amp;gt;Weather Forecast&amp;lt;/h1&amp;gt;

@if (CurrentForecast != null)
{
    // Display your forecast data here
}

@code {
    private WeatherForecast CurrentForecast;

    protected override async Task OnInitializedAsync()
    {
        // Load the previous weather forecast from the cache
        var cachedForecastBytes = await DistributedCache.GetAsync("weatherForecast");
    
        // If there was a cache entry, convert it from the cached bytes
        if (cachedForecastBytes != null)
        {
            CurrentForecast = ForecastConverter.FromBytes(cachedForecastBytes);
        }
        else
        {
            // Compute a new forecast
            CurrentForecast = WeatherPredictor.GenerateNewForecast();
            
            var options = new DistributedCacheEntryOptions()
            {
                AbsoluteExpiration = DateTimeOffset.UtcNow.AddHours(24)
            };
            
            // Store the new forecast in the cache
            await DistributedCache.SetAsync("weatherForecast", 
                ForecastConverter.ToBytes(CurrentForecast), 
                options);    
        }
    }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;After loading this page, you can see the DynamoDB item in the table. The &lt;code&gt;value&lt;/code&gt; attribute contains the serialized weather forecast. The &lt;code&gt;cache_ttl&lt;/code&gt;, &lt;code&gt;ttl_deadline&lt;/code&gt;, and &lt;code&gt;ttl_window&lt;/code&gt; attributes are used internally to support the various expiration settings that are available in the &lt;code&gt;DistributedCacheEntryOptions&lt;/code&gt; object.&lt;/p&gt; 
&lt;div id="attachment_11858" style="width: 1045px" class="wp-caption alignnone"&gt;
 &lt;a href="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2025/07/02/image1.png"&gt;&lt;img aria-describedby="caption-attachment-11858" loading="lazy" class="size-full wp-image-11858" src="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2025/07/02/image1.png" alt="Figure 1: A screenshot showing the weatherForecast entry in the DynamoDB table." width="1035" height="256"&gt;&lt;/a&gt;
 &lt;p id="caption-attachment-11858" class="wp-caption-text"&gt;Figure 1: A screenshot showing the weatherForecast entry in the DynamoDB table.&lt;/p&gt;
&lt;/div&gt; 
&lt;h3&gt;Configuration&lt;/h3&gt; 
&lt;p&gt;For production applications, we recommend configuring the cache provider with the following options:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;TableName&lt;/strong&gt; – The name of the DynamoDB table that is used to store the cache data.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;PartitionKeyName&lt;/strong&gt; – The name of the partition key of the DynamoDB table. If this option is not set, a &lt;code&gt;DescribeTable&lt;/code&gt; service call is made at startup to determine the name of the partition key.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;TTLAttributeName&lt;/strong&gt; – The &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html"&gt;Time To Live (TTL) feature&lt;/a&gt; of DynamoDB is used to remove expired cache items from a table. The &lt;strong&gt;TTLAttributeName&lt;/strong&gt; option specifies the attribute name that is used to store the TTL timestamp. If this option is not set, a &lt;code&gt;DescribeTimeToLive&lt;/code&gt; service call is made to determine the name of the TTL attribute.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The table must use a &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey"&gt;partition key&lt;/a&gt; of type &lt;code&gt;string&lt;/code&gt; and must not have a sort key; otherwise, an exception is thrown during the first cache operation. The Time to Live feature of DynamoDB must also be enabled for the table; otherwise, expired cache entries are not deleted.&lt;/p&gt; 
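&lt;p&gt;If you are creating the table yourself, the following AWS CLI commands sketch one compatible setup. The &lt;code&gt;weather_cache&lt;/code&gt; table name, &lt;code&gt;id&lt;/code&gt; partition key, and &lt;code&gt;cache_ttl&lt;/code&gt; attribute name here match the configuration used later in this post; substitute your own values as needed:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-bash"&gt;# Create a table with a string partition key and no sort key
aws dynamodb create-table \
    --table-name weather_cache \
    --attribute-definitions AttributeName=id,AttributeType=S \
    --key-schema AttributeName=id,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

# Enable TTL so that DynamoDB deletes expired cache entries automatically
aws dynamodb update-time-to-live \
    --table-name weather_cache \
    --time-to-live-specification "Enabled=true, AttributeName=cache_ttl"&lt;/code&gt;&lt;/pre&gt; 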
&lt;p&gt;The partition keys of cached items always start with ‘dc:’ and can be further prefixed with an additional configurable &lt;code&gt;PartitionKeyPrefix&lt;/code&gt;. This helps avoid collisions when cache entries share a table with other items, and allows you to use &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html"&gt;IAM policy conditions for fine-grained access control&lt;/a&gt; to limit access to just the cache entries.&lt;/p&gt; 
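&lt;p&gt;As an illustration, an identity-based policy along these lines uses the &lt;code&gt;dynamodb:LeadingKeys&lt;/code&gt; condition key to restrict an application to items whose partition key begins with the cache prefix. This is a hypothetical sketch; the Region, account ID, and table name are placeholders, and the action list should match what your application actually needs:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-json"&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/weather_cache",
      "Condition": {
        "ForAllValues:StringLike": {
          "dynamodb:LeadingKeys": ["dc:*"]
        }
      }
    }
  ]
}&lt;/code&gt;&lt;/pre&gt; 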
&lt;p&gt;You can set &lt;code&gt;CreateTableIfNotExists&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; to allow the library to create a table automatically if it doesn’t already exist. This is recommended only for development or testing purposes because it requires additional permissions and adds latency to the first cache operation.&lt;/p&gt; 
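&lt;p&gt;If you do enable automatic table creation, consider gating it on the hosting environment so production deployments keep the narrower permission set. A minimal sketch, reusing the table configuration from the earlier example:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;builder.Services.AddAWSDynamoDBDistributedCache(options =&amp;gt;
{
    options.TableName = "weather_cache";
    options.PartitionKeyName = "id";
    options.TTLAttributeName = "cache_ttl";
    // Only create the table automatically outside of production
    options.CreateTableIfNotExists = builder.Environment.IsDevelopment();
});&lt;/code&gt;&lt;/pre&gt; 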
&lt;p&gt;Refer to the &lt;a href="https://github.com/aws/aws-dotnet-distributed-cache-provider#configuration"&gt;Configuration section&lt;/a&gt; in the project README for the full set of options, and the &lt;a href="https://github.com/aws/aws-dotnet-distributed-cache-provider#permissions"&gt;Permissions section&lt;/a&gt; for the minimum required IAM permissions for different scenarios.&lt;/p&gt; 
&lt;h2&gt;Using the Distributed Cache Provider with Hybrid Cache&lt;/h2&gt; 
&lt;p&gt;What’s even more exciting is the ability to integrate this distributed cache provider with the new &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/performance/caching/hybrid?view=aspnetcore-9.0"&gt;.NET 9 HybridCache&lt;/a&gt;, which provides a unified in-process and out-of-process caching solution for .NET applications.&lt;/p&gt; 
&lt;p&gt;Since computing weather forecasts can be resource-intensive, we want to cache the forecast for 15 minutes in DynamoDB. To optimize performance and reduce the load on DynamoDB, we’ll configure &lt;code&gt;HybridCache&lt;/code&gt; to maintain a local in-memory cache for 1 minute on each server. Configure the cache provider and hybrid cache in your Program.cs file as follows:&lt;/p&gt; 
&lt;pre&gt;&lt;code class="lang-csharp"&gt;builder.Services.AddHybridCache(options =&amp;gt;
{
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        // Sets the DynamoDB cache entry to expire in 15 minutes
        Expiration = TimeSpan.FromMinutes(15),
        // Local cache expires after 1 minute
        LocalCacheExpiration = TimeSpan.FromMinutes(1)
    };
});

builder.Services.AddAWSDynamoDBDistributedCache(options =&amp;gt;
{
    options.TableName = "weather_cache";
    options.PartitionKeyName = "id";
    options.TTLAttributeName = "cache_ttl";
});&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;In the page model, leverage the injected &lt;code&gt;HybridCache&lt;/code&gt; to retrieve an existing cached weather forecast. If no cached forecast exists, the application generates a new forecast and stores it in the cache for future use.&lt;/p&gt; 
&lt;pre class="unlimited-height-code"&gt;&lt;code class="lang-csharp"&gt;// WeatherForecast.razor
@page "/weatherforecast"
@inject HybridCache Cache

&amp;lt;h1&amp;gt;Weather Forecast&amp;lt;/h1&amp;gt;

@if (CurrentForecast != null)
{
    // Display your forecast data here
}

@code {
    private WeatherForecast? CurrentForecast;

    protected override async Task OnInitializedAsync()
    {
        // Try to get the cached forecast from the HybridCache
        CurrentForecast = await Cache.GetOrCreateAsync(
            "weatherForecast",
            async (token) =&amp;gt;
            {
                // If the forecast is not cached, generate a new one
                return WeatherPredictor.GenerateNewForecast();
            },
            cancellationToken: CancellationToken.None
        );
    }
}&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;In this code snippet, we’re using &lt;code&gt;HybridCache&lt;/code&gt; to efficiently manage our weather forecast data. When &lt;code&gt;GetOrCreateAsync&lt;/code&gt; is called, it first checks the local in-memory cache (configured for 1-minute duration) for the fastest possible access. If the forecast isn’t found locally, it checks the distributed DynamoDB cache (configured for 15-minute duration). If the forecast isn’t found in either cache, the provided factory method generates a new forecast, which is then automatically stored in both the local and distributed caches. This tiered caching approach optimizes performance by reducing both API calls to generate new forecasts and DynamoDB requests, while ensuring that all application servers maintain reasonably fresh weather data.&lt;/p&gt; 
&lt;h2&gt;Using the Distributed Cache Provider for ASP.NET Core Session State&lt;/h2&gt; 
&lt;p&gt;A common application of an &lt;code&gt;IDistributedCache&lt;/code&gt; implementation is to store &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/app-state?#session-state"&gt;session state&lt;/a&gt; in an ASP.NET Core application. Unlike the previous example, these cache entries are user-specific and tied to the session ID that is maintained by ASP.NET Core. In addition, the expiration timestamps for all of the cache entries are now controlled by the &lt;code&gt;IdleTimeout&lt;/code&gt; option on the session configuration as opposed to creating a &lt;code&gt;DistributedCacheEntryOptions&lt;/code&gt; object for each cache entry as is done in the previous example.&lt;/p&gt; 
&lt;p&gt;To get started, install the &lt;a href="https://www.nuget.org/packages/AWS.AspNetCore.DistributedCacheProvider/"&gt;AWS.AspNetCore.DistributedCacheProvider&lt;/a&gt; package from NuGet.org. Then configure the cache provider and session state behavior in your &lt;code&gt;Program.cs&lt;/code&gt; file:&lt;/p&gt; 
&lt;pre class="unlimited-height-code"&gt;&lt;code class="lang-csharp"&gt;var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAWSDynamoDBDistributedCache(options =&amp;gt;
{
    options.TableName = "session_cache_table";
    options.PartitionKeyName = "id";
    options.TTLAttributeName = "ttl_date";
});

builder.Services.AddSession(options =&amp;gt;
{
    options.IdleTimeout = TimeSpan.FromSeconds(90);
    options.Cookie.IsEssential = true;
});

var app = builder.Build();
app.UseSession();
...
app.Run();&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Now, calls to methods on the &lt;a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.isession"&gt;&lt;code&gt;ISession&lt;/code&gt;&lt;/a&gt; interface, which is accessible from the &lt;code&gt;HttpContext&lt;/code&gt;, such as &lt;code&gt;Get/Set&lt;/code&gt; and &lt;code&gt;GetString/SetString&lt;/code&gt;, will load and save data in DynamoDB. The following is a hypothetical page that stores a user-specific page-hit counter:&lt;/p&gt; 
&lt;pre class="unlimited-height-code"&gt;&lt;code class="lang-csharp"&gt;// PageCount.razor
@page "/pagecount"
@inject IHttpContextAccessor HttpContextAccessor

&amp;lt;div class="text-center"&amp;gt;
    Number of page views: @PageCount
&amp;lt;/div&amp;gt;

@code {
    private int PageCount;
    private const string PageCountKey = "pageCount";

    protected override void OnInitialized()
    {
        // Load the old page count for this session, or start at 0
        // if there isn't an existing entry in the cache
        PageCount = HttpContextAccessor.HttpContext.Session.GetInt32(PageCountKey) ?? 0;

        PageCount += 1;

        // Save the incremented count in the cache
        HttpContextAccessor.HttpContext.Session.SetInt32(PageCountKey, PageCount);
    }
}&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;After loading this page once, you can see the DynamoDB item in the table. Unlike in the first example, the &lt;code&gt;id&lt;/code&gt; and TTL attributes are managed by the session middleware of ASP.NET Core. You only need to set and get the value.&lt;/p&gt; 
&lt;div id="attachment_11859" style="width: 996px" class="wp-caption alignnone"&gt;
 &lt;a href="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2025/07/02/image2.png"&gt;&lt;img aria-describedby="caption-attachment-11859" loading="lazy" class="size-full wp-image-11859" src="https://d2908q01vomqb2.cloudfront.net/0716d9708d321ffb6a00818614779e779925365c/2025/07/02/image2.png" alt="Figure 2: A screenshot showing the session state entry in the DynamoDB table." width="986" height="214"&gt;&lt;/a&gt;
 &lt;p id="caption-attachment-11859" class="wp-caption-text"&gt;Figure 2: A screenshot showing the session state entry in the DynamoDB table.&lt;/p&gt;
&lt;/div&gt; 
&lt;h2&gt;Conclusion&lt;/h2&gt; 
&lt;p&gt;In this post, we demonstrated how the AWS .NET Distributed Cache Provider for Amazon DynamoDB simplifies distributed caching for .NET developers. By leveraging the serverless infrastructure of DynamoDB, you can now easily implement high-performance, scalable caching across your applications.&lt;/p&gt; 
&lt;p&gt;Next steps:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Explore our &lt;a href="https://github.com/aws/aws-dotnet-distributed-cache-provider/tree/main/SampleApp"&gt;sample applications on GitHub&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;Download the &lt;a href="https://www.nuget.org/packages/AWS.AspNetCore.DistributedCacheProvider"&gt;AWS.AspNetCore.DistributedCacheProvider&lt;/a&gt; package from NuGet.org to try it out, and refer to the &lt;a href="https://github.com/aws/aws-dotnet-distributed-cache-provider/blob/main/README.md"&gt;README&lt;/a&gt; for more documentation.&lt;/li&gt; 
 &lt;li&gt;Don’t hesitate to create an &lt;a href="https://github.com/aws/aws-dotnet-distributed-cache-provider/issues"&gt;issue&lt;/a&gt; or a &lt;a href="https://github.com/aws/aws-dotnet-distributed-cache-provider/pulls"&gt;pull request&lt;/a&gt; if you have ideas for improvements.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;footer&gt;&lt;/footer&gt;</content:encoded>
					
					
			
		
		
			</item>
	</channel>
</rss>