<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>AWS Public Sector Blog</title>
	<atom:link href="https://aws.amazon.com/blogs/publicsector/feed/" rel="self" type="application/rss+xml"/>
	<link>https://aws.amazon.com/blogs/publicsector/</link>
	<description>Innovating in the Public Sector</description>
	<lastBuildDate>Sat, 16 May 2026 15:33:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>A faster, more resilient digital repository: Migrating DSpace to AWS</title>
		<link>https://aws.amazon.com/blogs/publicsector/a-faster-more-resilient-digital-repository-migrating-dspace-to-aws/</link>
		
		<dc:creator><![CDATA[Kai Xu]]></dc:creator>
		<pubDate>Sat, 16 May 2026 15:33:43 +0000</pubDate>
				<category><![CDATA[Amazon CloudWatch]]></category>
		<category><![CDATA[Amazon Elastic Container Service]]></category>
		<category><![CDATA[Amazon EventBridge]]></category>
		<category><![CDATA[Amazon Q Developer]]></category>
		<category><![CDATA[Amazon Simple Storage Service (S3)]]></category>
		<category><![CDATA[AWS Fargate]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">c1e555bda8363866e55417573382372595dc372d</guid>

					<description>Learn how the Digital Research and Curation Center (DRCC), the group within the Sheridan Libraries that builds and manages digital infrastructure for open scholarship, migrated DSpace to the cloud with Amazon Web Services (AWS).</description>
										<content:encoded>
&lt;p&gt;&lt;img class="alignleft size-full wp-image-31058" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/14/A-faster-more-resilient-digital-repository-Migrating-DSpace-to-AWS.png" alt="A faster, more resilient digital repository: Migrating DSpace to AWS" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;Automated bot traffic has surged across academic digital repositories, creating real performance problems for institutions that make research openly accessible. At &lt;a href="https://www.jhu.edu/" target="_blank" rel="noopener"&gt;Johns Hopkins University (JHU)&lt;/a&gt;, the problem was compounding an already difficult situation. The &lt;a href="https://www.library.jhu.edu/" target="_blank" rel="noopener"&gt;Sheridan Libraries’&lt;/a&gt; installation of &lt;a href="https://dspace.org/" target="_blank" rel="noopener"&gt;DSpace&lt;/a&gt;—an open-source digital repository system used by thousands of institutions worldwide—was running on on-premises infrastructure that the team could no longer update without significant manual work. The system was many versions behind the latest release, and the single-server setup required significant dedicated resources to handle frequent traffic spikes.&lt;/p&gt; 
&lt;p&gt;These challenges made modernizing DSpace a necessity to support the university community. The &lt;a href="https://drcc.library.jhu.edu/" target="_blank" rel="noopener"&gt;Digital Research and Curation Center (DRCC)&lt;/a&gt;, the group within the Sheridan Libraries that builds and manages digital infrastructure for open scholarship, migrated DSpace to the cloud with &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;. Using &lt;a href="https://aws.amazon.com/ecs/" target="_blank" rel="noopener noreferrer"&gt;Amazon Elastic Container Service (Amazon ECS)&lt;/a&gt; with &lt;a href="https://aws.amazon.com/fargate/" target="_blank" rel="noopener noreferrer"&gt;AWS Fargate&lt;/a&gt;, the team achieved a faster, more scalable repository without the operational burden of maintaining on-premises infrastructure.&lt;/p&gt; 
&lt;h2&gt;Frozen infrastructure and surging traffic&lt;/h2&gt; 
&lt;p&gt;DSpace, known at Johns Hopkins as &lt;a href="https://jscholarship.library.jhu.edu/" target="_blank" rel="noopener"&gt;JScholarship&lt;/a&gt;, is the central repository for the university’s research and cultural collections, housing over 150 collections that include research papers, theses, dissertations, historical documents, newsletters, articles, images, audio, video, sheet music, and maps.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“JScholarship users range from students depositing theses, to government employees researching state land maps, to musicians searching for historic sheet music compositions,” said &lt;strong&gt;Allison Fischbach&lt;/strong&gt;, Digital Repositories Manager for the Sheridan Libraries. JScholarship also supports the university’s open-access policy, providing faculty with a place to make their research publicly discoverable. “DSpace is used to maintain a permanent record of university scholarship,” said &lt;strong&gt;Bill Branan&lt;/strong&gt;, Hodson Director of the DRCC and Open Source Programs Office.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;But running DSpace on-premises had become unsustainable. Recent licensing changes resulted in increased hosting costs, and much of the maintenance and administration for supporting DSpace relied on manual processes. Pushing code updates into production took months.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“Updates hadn’t really been done to DSpace for a very long time because there was a lack of confidence in the process,” explained &lt;strong&gt;Steven Miklovic&lt;/strong&gt;, Senior Cloud Engineer in the DRCC.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;Meanwhile, automated bot traffic—driven largely by AI companies scraping open-access research—had surged, and the infrastructure needed frequent manual intervention to keep up.&lt;/p&gt; 
&lt;p&gt;Before the migration, the DRCC team evaluated whether to replace DSpace entirely. They determined that the software was still the right fit, but that software changes needed to reach production faster and the deployment environment needed to scale to meet demand without manual intervention. These requirements pointed to a container-based deployment with an automated build pipeline.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“Given the on-premises architecture, deploying changes in a timely manner would have been very difficult,” said &lt;strong&gt;Russell Poetker&lt;/strong&gt;, Senior Software Engineer in the DRCC.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;h2&gt;Building on an existing modernization effort&lt;/h2&gt; 
&lt;p&gt;The DRCC team brought relevant experience to the project. Engineers on the team had already modernized the &lt;a href="https://pass.jhu.edu/" target="_blank" rel="noopener"&gt;Public Access Submission System (PASS)&lt;/a&gt;, a custom application that allows researchers to deposit articles into DSpace, using a similar containerized architecture. The team also drew on experience with &lt;a href="https://lyrasis.org/dspace-direct/" target="_blank" rel="noopener"&gt;DSpaceDirect&lt;/a&gt;, a hosted service run through &lt;a href="https://www.lyrasis.org/" target="_blank" rel="noopener noreferrer"&gt;Lyrasis&lt;/a&gt;, the organizational home of the DSpace open-source project. That prior work showed that hosting DSpace in the cloud could deliver consistency, repeatability, and resiliency.&lt;/p&gt; 
&lt;p&gt;Throughout the project, the DRCC team worked with AWS through a consultation-based approach, meeting at key milestones for architectural reviews. Those sessions validated the architecture and surfaced important security features and optimizations.&lt;/p&gt; 
&lt;h2&gt;Six months from architecture to production&lt;/h2&gt; 
&lt;p&gt;The technical implementation spanned about six months. The first three to four months focused on defining the initial architecture, including a significant data migration sub-project. The on-premises environment for DSpace stored files differently than &lt;a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener noreferrer"&gt;Amazon Simple Storage Service (Amazon S3)&lt;/a&gt;, so the team went through several iterations of migrating data, validating it, and refining scripts.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“Steps like that are how you build the confidence in the cloud for people,” &lt;strong&gt;Miklovic&lt;/strong&gt; noted.&lt;/p&gt;
&lt;/blockquote&gt; 
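The post doesn't include the team's validation scripts, but one common check for an Amazon S3 migration is recomputing the ETag locally and comparing it with the value S3 reports for the uploaded object. The sketch below (a stdlib-only illustration; the function name and default part size are this sketch's assumptions, and multipart ETags only match when the same part size was used for upload) shows the arithmetic:

```python
import hashlib

def multipart_etag(data: bytes, part_size: int = 8 * 1024 * 1024) -> str:
    """Compute an S3-style ETag for an object uploaded in fixed-size parts.

    Single-part uploads use the plain MD5 hex digest; multipart uploads
    use the MD5 of the concatenated per-part digests, suffixed with the
    part count (e.g. "...-3").
    """
    parts = [data[i:i + part_size] for i in range(0, len(data), part_size)]
    if len(parts) == 1:
        return hashlib.md5(parts[0]).hexdigest()
    combined = hashlib.md5(b"".join(hashlib.md5(p).digest() for p in parts))
    return f"{combined.hexdigest()}-{len(parts)}"
```

Comparing this value against the stored object's reported ETag gives a quick integrity signal without re-downloading the data.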
&lt;p&gt;This phase also included the DRCC’s creation of infrastructure as code (IaC) to automate the deployment process, laying a repeatable foundation for future migrations.&lt;/p&gt; 
&lt;p&gt;Once the production environment was created using IaC tooling, the team performed testing and validation prior to a final production launch on January 12, 2026. Post-launch, they tuned scaling policies and optimized resource allocation to handle bot traffic spikes, followed by additional efficiency improvements.&lt;/p&gt; 
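The post doesn't share the tuned policy values, but the arithmetic behind a target-tracking scaling policy of the kind Amazon ECS supports (keep each task at roughly a target request rate, clamped to service minimum and maximum counts) can be sketched as follows; all numbers here are illustrative assumptions:

```python
import math

def desired_task_count(requests_per_min: float, target_per_task: float,
                       min_tasks: int = 2, max_tasks: int = 10) -> int:
    # Scale out so each running task serves roughly target_per_task
    # requests per minute, clamped to the service's min/max bounds.
    raw = math.ceil(requests_per_min / target_per_task)
    return max(min_tasks, min(max_tasks, raw))
```

During a bot-driven surge the computed count rises with the request rate, and the clamp keeps a traffic spike from scaling the service without bound.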
&lt;h2&gt;A serverless architecture built for maintainability&lt;/h2&gt; 
&lt;p&gt;Moving to a serverless architecture was more complex than a straightforward lift-and-shift, but the DRCC team chose that path deliberately. An earlier attempt at JHU to run a different application in a more advanced container orchestration environment had proven too burdensome. Amazon ECS with AWS Fargate offered a managed middle path.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“We wanted to really simplify the operational burden of an advanced architecture and focus on the developers being the primary support for the application,” said &lt;strong&gt;Miklovic&lt;/strong&gt;. By shifting infrastructure management to AWS-managed services, the team could redirect their focus from operational maintenance to development, effectively adopting a DevOps model where the engineers who build the application also own its deployment and observability.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;DSpace naturally breaks into several components, including a front end, back-end API, search index, and scheduled jobs, which the team split into separate containers so each can scale independently. The architecture includes &lt;a href="https://aws.amazon.com/rds/" target="_blank" rel="noopener"&gt;Amazon Relational Database Service (Amazon RDS)&lt;/a&gt; for PostgreSQL, configured for high availability; &lt;a href="https://aws.amazon.com/s3" target="_blank" rel="noopener"&gt;Amazon S3&lt;/a&gt; for the DSpace asset store; &lt;a href="https://aws.amazon.com/waf/" target="_blank" rel="noopener"&gt;AWS WAF&lt;/a&gt; in combination with &lt;a href="https://www.cloudflare.com/" target="_blank" rel="noopener"&gt;Cloudflare&lt;/a&gt; for application security and bot traffic management; &lt;a href="https://aws.amazon.com/elasticloadbalancing/" target="_blank" rel="noopener"&gt;Elastic Load Balancing&lt;/a&gt; using Application Load Balancers for public and internal traffic; &lt;a href="https://aws.amazon.com/eventbridge/" target="_blank" rel="noopener"&gt;Amazon EventBridge&lt;/a&gt; for scheduled tasks; and &lt;a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener"&gt;Amazon CloudWatch&lt;/a&gt; for monitoring. The team also used &lt;a href="https://aws.amazon.com/q/developer/" target="_blank" rel="noopener"&gt;Amazon Q Developer&lt;/a&gt; for the first time to support architectural decisions.&lt;/p&gt; 
&lt;p&gt;The migration also gave back to the open-source community. The team found that DSpace’s Amazon S3 storage integration relied on an outdated version of the AWS software development kit, upgraded it, and contributed the fix upstream.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“That’s one of the nice things about working in open source,” said &lt;strong&gt;Branan&lt;/strong&gt;. “If we find something that’s a problem, not only can we fix it, but we can push it back up for anyone else who needs to use it.”&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;h2&gt;Faster performance, faster deployments, and greater confidence&lt;/h2&gt; 
&lt;p&gt;Since launching, the new environment has reached stable performance after an anticipated tuning period. The public-facing load balancer typically averages &lt;strong&gt;400,000 to 500,000 requests per day&lt;/strong&gt;, while a second, internal load balancer handles &lt;strong&gt;over 2 million&lt;/strong&gt;, which reflects the volume of communication between DSpace’s internal components.&lt;/p&gt; 
&lt;p&gt;The difference has been immediate for the people who use DSpace every day. Students searching for dissertations, faculty accessing research, and staff managing collections all noticed faster response times as soon as the cutover happened. Where the old single-server setup left the repository vulnerable to bot traffic spikes, the new architecture absorbs surges without degrading the experience for real users.&lt;/p&gt; 
&lt;p&gt;Centralized logging and alerts now give the DRCC team real-time visibility across the environment, replacing the reactive troubleshooting of the old setup. The serverless nature of the deployment also gives engineers more time to focus on improving the application itself.&lt;/p&gt; 
&lt;p&gt;The new deployment pipeline has also shortened the path from code change to a testable environment.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“Verification of application changes in a pre-production environment now happens within a few minutes after a PR is merged. This is a big improvement for our development and test cycle,” said &lt;strong&gt;Poetker&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;With faster performance for users and a streamlined workflow for developers, the DSpace migration has given the DRCC team confidence that they can apply the same approach with other applications in their portfolio. Stakeholders for other library systems are eager for similar transitions.&lt;/p&gt; 
&lt;h2&gt;A roadmap for other institutions&lt;/h2&gt; 
&lt;p&gt;The DRCC team is already migrating more applications into a similar architecture and exploring how AI can support DevOps visibility over time. Other academic libraries and cultural institutions considering this type of migration can draw on the team’s experience: start with a managed service like Amazon ECS as a pathway into the cloud; take small steps to build confidence; and use what others have built.&lt;/p&gt; 
&lt;p&gt;To that end, the DRCC team published an &lt;a href="https://github.com/jhu-library-devops/terraform-aws-jhu-drcc" target="_blank" rel="noopener"&gt;open-source reference architecture for DSpace on AWS on GitHub&lt;/a&gt;, which also breaks out components that other institutions can reuse for different applications, so they don’t have to build it all from scratch.&lt;/p&gt; 
&lt;p&gt;With bot traffic continuing to grow and on-premises infrastructure increasingly difficult to maintain, modernizing digital collections in the cloud is becoming a practical necessity. Explore how &lt;a href="https://aws.amazon.com/education/" target="_blank" rel="noopener noreferrer"&gt;AWS helps institutions build secure, scalable solutions for higher education&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Read related stories on the AWS Public Sector Blog&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/publicsector/reimagining-university-libraries-with-aws-university-of-marylands-six-month-cloud-migration/" target="_blank" rel="noopener noreferrer"&gt;Reimagining university libraries with AWS: University of Maryland’s six-month cloud migration&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/publicsector/old-dominion-university-helps-to-modernize-quantum-chemistry-software-for-140000-researchers-with-aws/" target="_blank" rel="noopener noreferrer"&gt;Old Dominion University helps to modernize quantum chemistry software for 140,000 researchers with AWS&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/publicsector/seattle-universitys-8-year-cloud-journey-key-lessons-wins-and-a-new-path-forward/" target="_blank" rel="noopener noreferrer"&gt;Seattle University’s 8-year cloud journey: Key lessons, wins, and a new path forward&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/publicsector/macquarie-university-accelerates-cloud-transformation-with-aws/" target="_blank" rel="noopener noreferrer"&gt;Macquarie University accelerates cloud transformation with AWS&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>How to effectively use AWS Support for public sector organizations</title>
		<link>https://aws.amazon.com/blogs/publicsector/how-to-effectively-use-aws-support-for-public-sector-organizations/</link>
		
		<dc:creator><![CDATA[Caleb Grode]]></dc:creator>
		<pubDate>Thu, 14 May 2026 23:03:48 +0000</pubDate>
				<category><![CDATA[Amazon Q]]></category>
		<category><![CDATA[AWS Personal Health Dashboard]]></category>
		<category><![CDATA[AWS Trusted Advisor]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Public Sector]]></category>
		<guid isPermaLink="false">0e167c07c4b1578667c32a0be14d9809f4e3517e</guid>

					<description>Learn how to contact AWS Support through three channels: web (email), live chat, and phone. Each method opens a support case, and AWS addresses cases based on the severity of the reported issue and the response times for your support tier.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-31068 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/14/How-to-effectively-use-AWS-Support-for-public-sector-organizations.png" alt="How to effectively use AWS Support for public sector organizations" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;For public sector organizations running workloads on &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/premiumsupport/" target="_blank" rel="noopener"&gt;AWS Support&lt;/a&gt; helps troubleshoot issues and provides expert technical guidance. Getting the most out of AWS Support goes beyond merely opening a ticket. Knowing how to choose the right severity level, communicate through the right channel, and provide the right information can significantly reduce your time to resolution. In this post, we explore ways to best use your AWS Support.&lt;/p&gt; 
&lt;p&gt;AWS Support can be contacted through three channels: web (email), live chat, and phone. Each method opens a support case. AWS addresses support cases based on the severity of the reported issue and the human response times for your &lt;a href="https://aws.amazon.com/premiumsupport/plans/" target="_blank" rel="noopener"&gt;support tier&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Choosing the right severity level&lt;/h3&gt; 
&lt;p&gt;The severity level you choose signals the time sensitivity and the mission or business impact of your issue. Understanding these levels is important because customers frequently underestimate the severity of their cases, which can delay the response.&lt;/p&gt; 
&lt;p&gt;The severity levels are:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;General guidance [low]&lt;/strong&gt; – You have a general question about an AWS service or feature.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;System impaired [normal]&lt;/strong&gt; – Noncritical functions of your application are behaving abnormally, or you have a time-sensitive development question.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Production system impaired [high]&lt;/strong&gt; – Important functions of your application are impaired or degraded.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Production system down [urgent]&lt;/strong&gt; – Your business is significantly impacted. Critical functions of your application are unavailable.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Business-critical system down [critical]&lt;/strong&gt; – Your business is at risk. Critical functions of your application are unavailable.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Choosing the correct severity helps your case be prioritized appropriately. If your situation worsens after opening a case, you can and should increase the severity.&lt;/p&gt; 
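As a toy illustration of the precedence implied by the list above (pick the most severe level whose description fits your situation), consider the helper below; the flag names and their ordering are this sketch's assumptions, not an official AWS rubric:

```python
def choose_severity(production_impacted: bool, critical_down: bool,
                    business_at_risk: bool, time_sensitive: bool) -> str:
    """Map a situation to the severity levels listed above,
    checking the most severe condition first."""
    if business_at_risk and critical_down:
        return "critical"   # Business-critical system down
    if critical_down:
        return "urgent"     # Production system down
    if production_impacted:
        return "high"       # Production system impaired
    if time_sensitive:
        return "normal"     # System impaired
    return "low"            # General guidance
```

The same top-down logic applies when re-evaluating an open case: if a new, more severe condition becomes true, raise the case to the matching level.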
&lt;h3&gt;Selecting your communication channel&lt;/h3&gt; 
&lt;p&gt;Engaging support through live chat or phone connects you to the next available support engineer. We recommend these channels both for high-severity issues and for low-severity issues that are blocking your work; they can significantly reduce the initial response time and lead to faster resolution. The AWS Support team assigns engineers directly to monitor the live chat and phone queues, so during a mission-impacting event these channels accelerate the response beyond what case severity alone provides. Even for issues that don’t impact production or the organization as a whole, live chat or phone can reduce the time to a response from hours to minutes.&lt;/p&gt; 
&lt;h3&gt;Escalating to service teams&lt;/h3&gt; 
&lt;p&gt;Some support engagements involve complex solutions. Issues that can’t be solved through direct troubleshooting with a support engineer, or that are of the highest urgency, can be escalated to a service’s engineering team. Service team engagement is driven and prioritized by issue criticality. Your support engineer manages the escalation, acting as an intermediary and advocate on your behalf.&lt;/p&gt; 
&lt;h3&gt;Providing business impact information&lt;/h3&gt; 
&lt;p&gt;When creating a support case for a critical issue, it’s important to provide the right data on business impact so your support agent can accurately advocate for you. Providing this information proactively can also save the agent vital time in determining impact if the issue needs to be escalated to the service team.&lt;/p&gt; 
&lt;p&gt;For a support case that might require escalation or is of the highest priority, be sure to include the end user impact, scale, mission and monetary impact, and internal visibility level. For example, a law enforcement agency supporting a large metro area might include the following data:&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;400 officers across 12 precincts unable to access real-time crime center feeds impairing all active patrol and dispatch operations. The issue constitutes a direct public safety risk across the state capital region supporting 2.3 million constituents. The agency’s CIO and superintendent have been briefed and are monitoring for resolution.&amp;nbsp;&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;At the time of an incident, impact might not be fully measured, but including as much information as available will allow our teams to best prioritize the request.&lt;/p&gt; 
&lt;h3&gt;Sharing the right troubleshooting data&lt;/h3&gt; 
&lt;p&gt;In addition to data on impact, providing the right troubleshooting data is also important. Make sure to include the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html" target="_blank" rel="noopener"&gt;Amazon Resource Name (ARN)&lt;/a&gt; of any impacted resource along with relevant service logs. Copying and pasting logs is more useful than sharing screenshots because the engineer can interact with the text directly. Finally, include the time(s) when incidents occurred and the behavior observed at those times.&lt;/p&gt; 
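One way to keep those details consistent is to assemble them with a small helper before pasting them into the case. The function, field layout, and example ARN below are illustrative assumptions, not an AWS template:

```python
from datetime import timezone

def format_case_body(arn, incident_times, observed_behavior, logs):
    # Combine the recommended troubleshooting details (resource ARN,
    # UTC incident timestamps, observed behavior, and pasted log text)
    # into a single case description.
    stamps = ", ".join(t.astimezone(timezone.utc).isoformat()
                       for t in incident_times)
    return "\n".join([
        f"Impacted resource: {arn}",
        f"Incident time(s) (UTC): {stamps}",
        f"Observed behavior: {observed_behavior}",
        "Relevant logs (pasted, not screenshots):",
        logs,
    ])
```

Normalizing timestamps to UTC avoids a common source of back-and-forth, since AWS service logs are typically recorded in UTC.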
&lt;h3&gt;Keeping your case active&lt;/h3&gt; 
&lt;p&gt;After a support case is opened, keeping it active is important. Cases that go without updates can stall, and support engineers might deprioritize them if there is no indication the issue persists. If you haven’t received a response within a reasonable window, update the case with any new information or increase the severity if the situation has worsened. Additionally, if an issue resurfaces after a case has been marked resolved, you can reopen the case rather than starting from scratch. This preserves the troubleshooting history and context from the original engagement.&lt;/p&gt; 
&lt;h3&gt;Using self-service resources first&lt;/h3&gt; 
&lt;p&gt;Before opening a support case for nonurgent issues, it’s worth checking AWS self-service resources. The &lt;a href="https://docs.aws.amazon.com/health/latest/ug/aws-health-dashboard-status.html" target="_blank" rel="noopener"&gt;AWS Health Dashboard&lt;/a&gt; can tell you whether your issue is part of a known service event, saving you time and giving you useful context to include if you do open a case. Similarly, &lt;a href="https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor.html" target="_blank" rel="noopener"&gt;AWS Trusted Advisor&lt;/a&gt; can surface configuration issues or service limit concerns that might be the root cause. For general questions or common errors, &lt;a href="https://repost.aws/" target="_blank" rel="noopener"&gt;AWS re:Post&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/" target="_blank" rel="noopener"&gt;AWS documentation&lt;/a&gt;, and AWS &lt;a href="https://aws.amazon.com/generative-ai/" target="_blank" rel="noopener"&gt;generative AI&lt;/a&gt;–powered tools such as &lt;a href="https://aws.amazon.com/q/" target="_blank" rel="noopener"&gt;Amazon Q&lt;/a&gt; can often provide faster answers than waiting for a support case response. Checking these resources first can either resolve your issue outright or give you better information to include when you do engage support.&lt;/p&gt; 
&lt;h3&gt;Using support proactively&lt;/h3&gt; 
&lt;p&gt;AWS Support is not only for break-fix issues. Support provides a way for you to directly engage domain and service experts. AWS customers who use AWS Support effectively create cases often, even for small issues. When building or working with AWS services, if a problem has been blocking progress for even an hour, use support as an additional troubleshooting resource. In addition to service-specific expertise, support agents can search internally for similar issues that have been observed and solved. This high-frequency use of AWS Support combines well with choosing the live chat or phone channel for low-severity issues that benefit from quick engagement.&lt;/p&gt; 
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;In this post, we discussed ways to effectively use AWS Support that might not be commonly known. Choosing the right communication channel and severity level can speed time to resolution. Checking self-service resources such as the AWS Health Dashboard, Trusted Advisor, and Amazon Q before opening a case can save time or provide better context. Providing a clear statement of business impact along with comprehensive ARNs and logs can reduce back-and-forth exchanges. Keeping cases active, and reopening them when issues resurface, preserves continuity. Using support beyond break-fix issues can accelerate development and give insight into how to best use AWS services.&lt;/p&gt; 
&lt;p&gt;To explore your support options and find the right plan for your organization, visit the &lt;a href="https://aws.amazon.com/premiumsupport/plans/" target="_blank" rel="noopener"&gt;AWS Support plans page&lt;/a&gt;. To get started with the self-service tools mentioned in this post, refer to the &lt;a href="https://docs.aws.amazon.com/health/latest/ug/getting-started-health-dashboard.html" target="_blank" rel="noopener"&gt;AWS Health Dashboard documentation&lt;/a&gt;, the &lt;a href="https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor.html" target="_blank" rel="noopener"&gt;AWS Trusted Advisor documentation&lt;/a&gt;, and &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/what-is.html" target="_blank" rel="noopener"&gt;Getting started with Amazon Q&lt;/a&gt;.&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Scouting America transforms youth enrollment with generative AI assistant powered by AWS</title>
		<link>https://aws.amazon.com/blogs/publicsector/scouting-america-transforms-youth-enrollment-with-generative-ai-assistant-powered-by-aws/</link>
		
		<dc:creator><![CDATA[Nicolaas Botes]]></dc:creator>
		<pubDate>Wed, 13 May 2026 22:54:24 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Amazon DynamoDB]]></category>
		<category><![CDATA[Amazon EC2]]></category>
		<category><![CDATA[AWS Lambda]]></category>
		<category><![CDATA[Public Sector]]></category>
		<guid isPermaLink="false">505720c90c521e92bdc6b7f9a90700ad056ef084</guid>

					<description>In this post, we share how Scouting America used a generative AI assistant to transform youth enrollment.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-31043 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/13/Scouting-America-transforms-youth-enrollment-with-generative-AI-assistant-powered-by-AWS.png" alt="Scouting America transforms youth enrollment with generative AI assistant powered by AWS" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;For 115 years, &lt;a href="https://www.scouting.org/" target="_blank" rel="noopener"&gt;Scouting America&lt;/a&gt; has shaped generations through service, leadership, and outdoor adventure. Their mission is to prepare young people for lives of impact by instilling values, teaching life skills, and fostering character through adventure and service in a safe, inclusive environment. With over 1 million youth participants and 628,000 volunteers, they’re not just teaching skills; they’re building character.&lt;/p&gt; 
&lt;p&gt;In May 2025, Scouting America launched &lt;a href="https://scoutly.scouting.org/" target="_blank" rel="noopener"&gt;Scoutly&lt;/a&gt;, a generative AI-powered, multilingual assistant built on &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;. Scoutly has streamlined their enrollment process by providing conversational support in multiple languages, answering questions, helping families find local units, and guiding them through enrollment with contextual awareness. The platform has delivered remarkable results, reducing registration time from 25 minutes to just 5 minutes through agentic workflows. Scoutly currently averages 2,000 users per day and has answered over 1.9 million questions from visitors.&lt;/p&gt; 
&lt;p&gt;In this post, we share how Scouting America used this AI assistant to transform youth enrollment.&lt;/p&gt; 
&lt;h3&gt;Removing barriers to youth development&lt;/h3&gt; 
&lt;p&gt;The organization faced significant challenges with their registration process that hindered their ability to serve families effectively. The lengthy and confusing enrollment journey led to high abandonment rates, with families often giving up before even starting their Scouting experience. Between parental authorizations, complex pricing structures, and static forms, the registration process was disjointed and frequently abandoned.&lt;/p&gt; 
&lt;p&gt;Every abandoned form represented one less future Scout ready to serve their community. With limited staff serving an audience of 1.5 million people (between members, volunteers, and alumni), Scouting America sought to use technology strategically to make registration easier and more accessible.&lt;/p&gt; 
&lt;p&gt;They also wanted to achieve a strategic goal: ensure relevance to today’s youth as a 115-year-old organization. They needed a tech platform that was straightforward to use and could facilitate digital experiences comparable to modern platforms that young people regularly use.&lt;/p&gt; 
&lt;h3&gt;Building an intelligent solution with AWS&lt;/h3&gt; 
&lt;p&gt;Scoutly is a powerful AI assistant that provides instant, reliable support to leaders, volunteers, parents, and scouts through a comprehensive knowledge base of Scouting America resources. Users can access this interactive assistant around the clock on &lt;a href="https://www.scouting.org/" target="_blank" rel="noopener"&gt;Scouting.org&lt;/a&gt; and &lt;a href="https://beascout.scouting.org/" target="_blank" rel="noopener"&gt;BeAScout.org&lt;/a&gt;, which provides a safe, child-friendly experience. Scoutly also fits into Scouting America’s broader technology strategy, which prioritizes service design, enterprise architecture, data governance, platform thinking, and simplified user experiences. The organization’s business strategy calls for a unified technology foundation, stronger governance, and a data architecture that can support real-time operations and analytics at scale.&lt;/p&gt; 
&lt;p&gt;Working with partner &lt;a href="https://myridius.com/" target="_blank" rel="noopener"&gt;Myridius&lt;/a&gt;, Scouting America built Scoutly on &lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt; and a comprehensive suite of scalable services, including &lt;a href="https://aws.amazon.com/lambda" target="_blank" rel="noopener"&gt;AWS Lambda&lt;/a&gt;, &lt;a href="https://aws.amazon.com/pm/ec2/" target="_blank" rel="noopener"&gt;Amazon Elastic Compute Cloud (Amazon EC2)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/free/database" target="_blank" rel="noopener"&gt;Amazon Relational Database Service (Amazon RDS)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/dynamodb/" target="_blank" rel="noopener"&gt;Amazon DynamoDB&lt;/a&gt;, &lt;a href="https://aws.amazon.com/s3" target="_blank" rel="noopener"&gt;Amazon Simple Storage Service (Amazon S3)&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/kendra" target="_blank" rel="noopener"&gt;Amazon Kendra&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;The solution is designed to be fast, resilient, and built to scale while engaging users with relevant, accurate, real-time information.&lt;/p&gt; 
&lt;p&gt;Working alongside AWS, Myridius served as the lead architecture and delivery partner for Scoutly, focusing on the technical foundations required to make the experience reliable, scalable, and safe. The team defined the enterprise architecture and integration patterns that enabled real-time orchestration across systems, while confirming the solution could scale seamlessly on AWS.&lt;/p&gt; 
&lt;p&gt;A key focus area was data governance and content control. Myridius helped design the processes to curate, validate, and manage source content, so responses remained accurate, consistent, and aligned with Scouting America’s policies. Guardrails were implemented to control how information is used by the assistant, with a strong emphasis on privacy, safety, and youth protection. The team also led application quality and release readiness, including rigorous testing and a phased rollout approach to minimize risk at launch. This meant that Scoutly could handle production-scale usage from day one while maintaining a high standard of trust and performance.&lt;/p&gt; 
&lt;p&gt;Together, these elements established Scoutly not just as an AI assistant, but as a governed, production-grade digital service aligned with Scouting America’s broader strategy around enterprise architecture, data, and service design.&lt;/p&gt; 
&lt;p&gt;The quality and governance of source content were also critical to the solution’s performance. The content was carefully curated, controlled, and validated before being exposed to users, with safeguards designed to protect privacy and maintain response integrity.&lt;/p&gt; 
&lt;h3&gt;Transforming the Scouting experience&lt;/h3&gt; 
&lt;p&gt;The results exceeded expectations both quantitatively and qualitatively. During the pilot phase, Scouting America experienced a 45% spike in web traffic, demonstrating strong user engagement with the new platform. The agentic workflows successfully reduced the registration process from 25 minutes to just 5 minutes, dramatically improving the user experience.&lt;/p&gt; 
&lt;p&gt;Scoutly now has an average of 2,000 daily users and has successfully answered over 1.9 million questions from nearly 108,000 unique visitors. The platform has established itself as a widely used resource within the Scouting America community, handling substantial query volume while serving a large and diverse user base.&lt;/p&gt; 
&lt;p&gt;Scoutly provides live, natural conversational support in English, Spanish, Arabic, French, and additional languages, making Scouting more accessible to communities across America.&lt;/p&gt; 
&lt;p&gt;Additionally, some interesting and unexpected usage patterns emerged. Although it was originally designed primarily for registration assistance, Scoutly users regularly engage the platform for scouting knowledge, merit badge information, safety guidelines, and operational questions. Users range from youth asking about rank advancement and Eagle Scout requirements to adult volunteers seeking operational guidance for camping events, unit formation, and safety training requirements.&lt;/p&gt; 
&lt;h3&gt;Key insights for nonprofit technology implementation&lt;/h3&gt; 
&lt;p&gt;Mike Bullock, CIO of Scouting America, emphasized the importance of starting with a genuine organizational problem rather than seeking ways to implement AI technology for its own sake. Their approach of arriving at AI as a solution, rather than looking for AI applications, contributed significantly to the project’s success.&lt;/p&gt; 
&lt;p&gt;This problem-first methodology helped the technology truly serve the organization’s mission.&lt;/p&gt; 
&lt;h3&gt;Supporting strategic transformation&lt;/h3&gt; 
&lt;p&gt;Scoutly supports Scouting America’s strategic shift from systems-focused to service and customer experience-focused operations. The platform helps the 115-year-old organization remain relevant to today’s youth by providing familiar digital experiences comparable to modern platforms.&lt;/p&gt; 
&lt;p&gt;Most importantly, Scoutly is making it straightforward for purpose-driven youth to find their way into Scouting and stick with it. By removing common hurdles and offering quick, helpful support, the platform helps prevent future Scouts and leaders from being left behind. This is about more than completing registrations; it’s about building the next generation of leaders ready to serve, grow, and make a difference in their communities.&lt;/p&gt; 
&lt;p&gt;To learn more about Scouting America and experience &lt;a href="https://scoutly.scouting.org/" target="_blank" rel="noopener"&gt;Scoutly&lt;/a&gt;, visit &lt;a href="https://www.scouting.org/" target="_blank" rel="noopener"&gt;Scouting.org&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;If you are feeling inspired by Scoutly, your nonprofit can harness the power of technology to accelerate your mission too. Learn more about the &lt;a href="https://aws.amazon.com/government-education/nonprofits/aws-imagine-grant-program/" target="_blank" rel="noopener"&gt;AWS Imagine Grant&lt;/a&gt;, a program designed to empower nonprofits to innovate and scale. Applications are open now through June 5, 2026.&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>How governments adopt the cloud for national security and defense in Asia-Pacific</title>
		<link>https://aws.amazon.com/blogs/publicsector/how-governments-adopt-the-cloud-for-national-security-and-defense-in-asia-pacific/</link>
		
		<dc:creator><![CDATA[Neil Beet]]></dc:creator>
		<pubDate>Wed, 13 May 2026 21:23:30 +0000</pubDate>
				<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">d92d51929e44369f1c5d675aee5842151daca74d</guid>

					<description>In this blog, learn how governments adopt the cloud for national security and defense in Asia-Pacific.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30989 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/07/How-governments-adopt-the-cloud-for-national-security-and-defense-in-Asia-Pacific.png" alt="How governments adopt the cloud for national security and defense in Asia-Pacific" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;Governments in the Asia-Pacific region are modernizing their national security operations to maintain strategic advantage in an increasingly complex threat environment. Cloud adoption offers them unprecedented opportunities for innovation, operational effectiveness, and resilience.&lt;/p&gt; 
&lt;p&gt;As noted in the &lt;a href="https://www.iiss.org/" target="_blank" rel="noopener"&gt;International Institute for Strategic Studies (IISS)&lt;/a&gt; paper, &lt;a href="https://www.iiss.org/research-paper/2025/12/cloud-adoption-for-national-security-and-defence-purposes-four-case-studies-from-the-asia-pacific/" target="_blank" rel="noopener"&gt;Cloud Adoption for National-security and Defence Purposes: Four Case Studies from the Asia-Pacific&lt;/a&gt;, which was supported by &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;, decision advantage through data mastery is a compelling driver of cloud adoption.&lt;/p&gt; 
&lt;p&gt;Researchers from the &lt;a href="https://www.iiss.org/people/cyber-power-and-future-conflict/" target="_blank" rel="noopener"&gt;IISS Cyber Power and Future Conflict&lt;/a&gt; program looked at approaches to cloud adoption by Japan, the Philippines, Singapore, and Thailand and considered implications and recommendations for other governments.&lt;/p&gt; 
&lt;h3&gt;Why national security and defense agencies are embracing cloud technology&lt;/h3&gt; 
&lt;p&gt;National security and defense (NS&amp;amp;D) organizations need to process vast quantities of data, integrate operations across multiple domains, and respond to rapidly evolving threats. Traditional IT infrastructure, such as on-premises data centers, struggles to elastically scale and integrate multidomain data at the pace required in today’s threat environment.&lt;/p&gt; 
&lt;p&gt;Modern defense operations generate enormous volumes of intelligence from satellites, sensors, communications intercepts, and open sources. Cloud technology can enable advanced threat detection by using artificial intelligence and machine learning (AI/ML) capabilities for signal prioritization and anomaly detection. It can also support controlled, policy-driven intelligence sharing between allied agencies.&lt;/p&gt; 
&lt;p&gt;Cloud architecture distributes data across multiple locations, reducing vulnerability to both physical attacks and cyber incidents while enabling distributed command and control capabilities. Cloud infrastructure and services are highly scalable and highly available, meaning government agencies can acquire, process, and analyze data at an unprecedented speed and scale.&lt;/p&gt; 
&lt;p&gt;The cloud is an on-demand service that provides dynamic scalability without requiring upfront capital expenditure on physical infrastructure. Agencies benefit from cloud providers like AWS investing in cloud security maintenance and emerging technologies like AI. Organizations responsible for national security and defense can instead focus their resources on mission objectives while relying on defense-in-depth security controls aligned to NS&amp;amp;D requirements.&lt;/p&gt; 
&lt;h3&gt;Four national approaches to cloud adoption&lt;/h3&gt; 
&lt;p&gt;The IISS paper describes how Japan, the Philippines, Singapore, and Thailand tailor cloud adoption to their unique security requirements and digital maturity:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Japan is modernizing its NS&amp;amp;D strategy with hybrid cloud environments, focusing on cross-domain integration and alliance interoperability.&lt;/li&gt; 
 &lt;li&gt;The Philippines is in the early stages of cloud adoption across the government, aiming to enhance interservice coordination and secure cloud management through its &lt;a href="https://cms-cdn.e.gov.ph/DICT/pdf/NCSP-2023-2028-FINAL-DICT.pdf" target="_blank" rel="noopener"&gt;Digital Transformation Strategy&lt;/a&gt;.&lt;/li&gt; 
 &lt;li&gt;Singapore is developing a secure hybrid cloud solution for defense, linking military readiness with national resilience through its &lt;a href="https://www.smartnation.gov.sg/" target="_blank" rel="noopener"&gt;Smart Nation&lt;/a&gt; and &lt;a href="https://www.totaldefence.gov.sg/" target="_blank" rel="noopener"&gt;Total Defence&lt;/a&gt; initiatives.&lt;/li&gt; 
 &lt;li&gt;Thailand is modernizing its government digital infrastructure by using AWS for policing solutions while building internal governance capacity for sensitive data protection.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h3&gt;Strategic recommendations for NS&amp;amp;D leaders&lt;/h3&gt; 
&lt;p&gt;The IISS report offers guidance for defense and national security leaders embarking on cloud adoption journeys:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Redefine sovereignty-assured governance with transparent control mechanisms rather than technological isolation.&lt;/strong&gt; Experience demonstrates that cloud services can equip governments to maintain effective governance, security, and control over their critical and sensitive data. Governments can use global cloud providers while retaining full visibility and lawful control through technical and governance arrangements including data classification, encryption, and strong access controls. This calibrated approach allows for appropriate levels of protection for different data types. It balances sovereignty and resilience considerations, maximizing the benefits of commercial innovation and maintaining security for the most sensitive operations.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Build institutional maturity and domestic expertise.&lt;/strong&gt; Technical solutions alone can’t bring about successful cloud adoption. Governments must develop the appropriate organizational depth, regulatory clarity, and technical capabilities. For example, they can establish dedicated and resourced cloud adoption teams to drive implementation. Investment in domestic technical expertise strengthens strategic autonomy and effective oversight.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Prioritize securely sharing data with allies.&lt;/strong&gt; We recommend that cloud solutions be designed with international interoperability in mind, facilitating rapid data exchange with trusted partners. This capability has become essential for coalition operations and intelligence sharing. Joint approaches across security agencies can break down silos between military branches and intelligence organizations, creating more effective whole-of-government responses to threats.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Design for resilience and operational continuity.&lt;/strong&gt; Cloud architecture must support operations continuing during crises, including potential relocation of critical data if necessary. Core security controls including encryption, multi-factor authentication (MFA), and zero trust principles serve as the security foundation for all systems, with specialized additional controls implemented based on data sensitivity and classification.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Access innovation and maintain control for national security&lt;/h3&gt; 
&lt;p&gt;For NS&amp;amp;D leaders, cloud adoption is the way to maintain a strategic advantage in an increasingly contested global information environment. Success requires using global expertise and technical capabilities, engineering resilience, and boosting domestic institutional capacity. Governments that master this balance are positioned to protect their citizens and interests in the digital age.&lt;/p&gt; 
&lt;p&gt;To learn about how AWS helps national security and defense organizations around the world deliver mission-driven solutions, connect with our &lt;a href="https://aws.amazon.com/government-education/contact/" target="_blank" rel="noopener"&gt;global public sector team&lt;/a&gt; today.&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>University-cloud collaboration in action: Columbia University students transform ideas into AWS powered startups</title>
		<link>https://aws.amazon.com/blogs/publicsector/university-cloud-collaboration-in-action-columbia-university-students-transform-ideas-into-aws-powered-startups/</link>
		
		<dc:creator><![CDATA[Ayush Tripathi]]></dc:creator>
		<pubDate>Wed, 13 May 2026 19:31:51 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Amazon DynamoDB]]></category>
		<category><![CDATA[Amazon Simple Storage Service (S3)]]></category>
		<category><![CDATA[AWS Lambda]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">5d9dba91ed215eab9c1b1eac150891ff840226cc</guid>

					<description>In this post, we share the winning projects, lessons learned, and how the experience continues to shape participants’ cloud careers at the Columbia X Amazon Bedrock Innovation Challenge.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-31029 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/12/University-cloud-collaboration-in-action-Columbia-University-students-transform-ideas-into-AWS-powered-startups.png" alt="University-cloud collaboration in action: Columbia University students transform ideas into AWS powered startups" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;The Columbia X Amazon Bedrock Innovation Challenge brought together 173 students to develop innovative solutions using &lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt; generative AI and agentic capabilities. On November 7, 2025, 47 teams created prototype applications that used foundation models (FMs) to address real-world challenges across five critical industries: Financial Services, Technology &amp;amp; Software, Media &amp;amp; Entertainment, Healthcare &amp;amp; Life Sciences, and Retail &amp;amp; Consumer Goods. To help teams build sophisticated solutions in the compressed timeframe, we structured the challenge around six strategic tracks: Intelligent Document Processing, Conversational AI Applications, AI-Powered Content Generation, Retrieval Augmented Generation (RAG), Agentic AI, and an Open Innovation Track for boundary-pushing ideas.&lt;/p&gt; 
&lt;p&gt;In this post, we share the winning projects, lessons learned, and how the experience continues to shape participants’ cloud careers.&lt;/p&gt; 
&lt;h3&gt;The challenge: Empowering the next generation of AI builders&lt;/h3&gt; 
&lt;p&gt;The hackathon was designed to provide hands-on experience with Amazon Bedrock, a fully managed service on &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt; for building generative AI applications. Students learned to work with FMs, create AI agents using &lt;a href="https://aws.amazon.com/bedrock/agentcore/" target="_blank" rel="noopener"&gt;Amazon Bedrock AgentCore&lt;/a&gt;, and integrate various AWS services to build production-ready prototypes. The challenge emphasized not just technical implementation, but also practical business applications and user-centered design. Beyond the technical challenge, the event connected students with AWS professionals across a range of roles, from Solutions Architects to Customer Solutions Managers, giving them a window into diverse career paths in cloud technology. AWS awarded $5,000 in AWS credits to each of the three winning teams, supporting them in their journey to experiment with the AWS environment and kickstart their startup ideas.&lt;/p&gt; 
&lt;p&gt;AWS provided support throughout the event, including:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Technical mentorship from AWS Solutions Architects, Customer Solutions Managers, and Software Development Engineers&lt;/li&gt; 
 &lt;li&gt;Hands-on workshops covering Amazon Bedrock capabilities and best practices&lt;/li&gt; 
 &lt;li&gt;Pre-hackathon office hours for troubleshooting and account setup&lt;/li&gt; 
 &lt;li&gt;Access to AWS services and resources for rapid prototyping&lt;/li&gt; 
 &lt;li&gt;Guidance on building secure, scalable, and efficient AI applications&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Innovative solutions from student teams&lt;/h3&gt; 
&lt;p&gt;Judges selected three winning projects based on effective use of Amazon Bedrock and its FMs, solution architecture quality and scalability, technical complexity and completeness of implementation, business impact and market viability, and user experience and deployment readiness.&lt;/p&gt; 
&lt;h3&gt;MyCFO.ai&lt;/h3&gt; 
&lt;p&gt;MyCFO.ai is an agentic AI orchestration system that acts like a CFO to analyze, optimize, and predict business finances in real time. It solves the problem of manual financial analysis by using a multi-agent system (supervisor and specialized collaborator agents) that processes raw financial data and compliance documents to automatically generate structured insights and real-time visualizations. Key technical features include Amazon Bedrock multi-agent orchestration, &lt;a href="https://aws.amazon.com/api-gateway/" target="_blank" rel="noopener"&gt;Amazon API Gateway&lt;/a&gt; for secure request routing, &lt;a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener"&gt;AWS Lambda&lt;/a&gt; for backend logic, and a knowledge base with &lt;a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener"&gt;Amazon Simple Storage Service&lt;/a&gt; (Amazon S3) for storing company documents and financial data.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/12/Picture1-MyCFO.ai-team.jpg" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-31024 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/12/Picture1-MyCFO.ai-team.jpg" alt="photo of four students presenting" width="1430" height="1059"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 1: MyCFO.ai team: Ritvik Sharma, Aditya Unnikrishnan, Sahethi Depuru Guru, Tejal Bedmutha&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;ROBIN AI&lt;/h3&gt; 
&lt;p&gt;Robin AI is an AI-powered platform developed by Columbia Engineering students that democratizes quantitative trading by automating the translation of financial research into executable trading strategies. Built on Amazon Bedrock with RAG technology (Amazon Titan and Anthropic’s Claude), it features a conversational interface requiring zero coding skills, modular architecture supporting multiple asset classes (equities, crypto, futures), and automated backtesting with verifiable citations. The platform addresses the $50 billion quant research market by reducing strategy development time from weeks to minutes, helping smaller hedge funds and retail investors compete with institutional players while maintaining cost-effectiveness through serverless, pay-as-you-go pricing.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/12/Picture2-ROBIN-AI-team.jpg" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-31025 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/12/Picture2-ROBIN-AI-team.jpg" alt="photo of three students presenting in front of a screen with Robin AI logo" width="1430" height="1146"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 2: ROBIN AI team: Amine Roudani, Martina Paez Berru, Alessandro Massaad&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;PosturePal&lt;/h3&gt; 
&lt;p&gt;PosturePal is an AI-powered posture monitoring system that combines hardware and software components. The solution features a smart chair equipped with four pressure sensors and an IMU sensor that detects user posture in real time. Raw sensor data is transmitted to API Gateway, processed through Lambda functions, and stored as events in &lt;a href="https://aws.amazon.com/dynamodb/" target="_blank" rel="noopener"&gt;Amazon DynamoDB&lt;/a&gt;. The system integrates with Amazon Bedrock to provide ergonomic feedback and enable conversational interactions about posture patterns throughout the day.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/12/Picture3-PosturePal-team.jpg" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-31027 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/12/Picture3-PosturePal-team.jpg" alt="photo of four students presenting in front of a screen that says PosturePal" width="1430" height="962"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 3: PosturePal team: Sahasra Kokkula, James Zhang, Meona Khetrapal, Anh Lam&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;More than just a hackathon&lt;/h3&gt; 
&lt;p&gt;Although the hackathon competition was the centerpiece of the event, participants also benefited from a rich array of professional development opportunities:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;AWS Career Panel with Angela Helfrich (Principal Sales Executive), Freeda Johnson (Customer Solutions Manager), Gary Lu (Data Scientist), David Ding (Customer Solutions Manager), and Alexa Perlov (AI Engineer)&lt;/li&gt; 
 &lt;li&gt;Columbia Engineering Alumni Panel with Professor Hsing-Hsing Li&lt;/li&gt; 
 &lt;li&gt;Builder Studio Tours with Danny Mason, Chris Cassin, and Julia Defilipis&lt;/li&gt; 
 &lt;li&gt;Networking Happy Hour with Columbia Alumni&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Impact and next steps&lt;/h3&gt; 
&lt;p&gt;Three months after the Columbia X Amazon Bedrock Innovation Challenge, we reconnected with the winning teams to understand the lasting impact of the hackathon.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Q: How have you been using your $5,000 AWS credits since the hackathon?&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;“I plan to continue building the project and play around with the other services.”&lt;/p&gt; 
&lt;p&gt;“Utilized the credits for coursework and other projects … also was interested in potentially pursuing the project as a startup opportunity.”&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Q: What did you appreciate most about the hackathon experience?&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;“Technical support. They were able to understand our requirements and what we might want. As participants, we are still in our learning phases, and we won’t be able to communicate issues as well, but the technical support was able to understand what we were saying.”&lt;/p&gt; 
&lt;p&gt;“The judgment panel was good … they asked decent follow-up questions. That is sometimes missing in hackathons. Sometimes judges just walk around and then choose the winner they feel like, but they were very involved and gave good feedback.”&lt;/p&gt; 
&lt;p&gt;“They gave us a lot of freedom to build anything. We were not given too many constraints.”&lt;/p&gt; 
&lt;p&gt;Looking ahead, AWS is committed to expanding relationships with Columbia University and other academic institutions across the country. As Professor Li from Columbia University noted, “The hackathon allowed students to collaborate in an interdisciplinary manner and it was exciting to see the ideas they came up with. The Engineering Alumni Panel gave students, alumni, and the AWS team an opportunity to interact, network, and discuss future collaborations.”&lt;/p&gt; 
&lt;p&gt;She also highlighted that “It was fantastic that AWS team members, who are also Columbia alumni, got so involved. It showcased how our communities come full circle to support each other.”&lt;/p&gt; 
&lt;p&gt;We plan to host additional innovation challenges that provide students with hands-on experience using cutting-edge AWS technologies while creating meaningful connections between academia and industry. For the winning teams, we’re excited to offer continued mentorship opportunities and technical guidance as they transform their prototypes into viable startups. These university collaborations represent a strategic investment in developing the next generation of cloud innovators who will shape the future of technology.&lt;/p&gt; 
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;AWS is deeply invested in helping develop the next generation of cloud innovators through hands-on learning experiences with cutting-edge technologies. Are you interested in establishing similar collaborations with your university? For future hackathons with AWS, fill out our &lt;a href="https://pulse.amazon/survey/BICM6W1Y" target="_blank" rel="noopener"&gt;interest form&lt;/a&gt; to explore how we can work together to create innovative learning experiences for your students.&lt;/p&gt; 
&lt;h3&gt;Learn more&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/bedrock/agents/" target="_blank" rel="noopener"&gt;Amazon Bedrock Agents&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/education/" target="_blank" rel="noopener"&gt;AWS for Education&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>AWS Health Equity Initiative accelerates modern public health access with SmartTracker</title>
		<link>https://aws.amazon.com/blogs/publicsector/aws-health-equity-initiative-accelerates-modern-public-health-access-with-smarttracker/</link>
		
		<dc:creator><![CDATA[Dr. Dawn Heisey-Grove]]></dc:creator>
		<pubDate>Tue, 12 May 2026 14:44:25 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Amazon Transcribe]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[health equity]]></category>
		<guid isPermaLink="false">aa1967ee5cd3b4719a3023bcf00e33d701816fbe</guid>

					<description>This blog discusses the AWS Health Equity Initiative, a commitment to supporting organizations developing solutions to advance health equity worldwide.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-31003 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/08/AWS-Health-Equity-Initiative-accelerates-modern-public-health-access-with-SmartTracker.png" alt="AWS Health Equity Initiative accelerates modern public health access with SmartTracker" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;Jose Dueñas grew up in South Florida, the son of migrant working parents who knew what it meant to navigate a health system that wasn’t built for families like theirs. Long waits, language barriers, stacks of disconnected paperwork, and missed appointments because reaching the clinic was too difficult were regular features of his childhood experience with the healthcare system. Challenges like these can often determine whether a family ends up in crisis or gets the help they need in time.&lt;/p&gt; 
&lt;p&gt;After 16 years as a technology architect redesigning large-scale health and human services electronic health record (EHR) systems, Dueñas recognized that many of these barriers were baked into the technology. Public health departments across the United States were working to protect vulnerable populations from disease outbreaks while relying on fragmented legacy systems that weren’t designed for the speed and complexity the modern public health environment demands. Paper-based intake, manual referral processes, and disconnected data silos trapped critical health information in systems that couldn’t communicate with one another.&lt;/p&gt; 
&lt;p&gt;The COVID-19 pandemic exposed these gaps at an unprecedented scale. Spreadsheets and standalone databases buckled under the simultaneous demands of patient registration, vaccine tracking, and real-time reporting, making it clear that incremental fixes weren’t sufficient. Addressing the crisis required modern cloud infrastructure, purpose-built software, and the institutional commitment to match.&lt;/p&gt; 
&lt;p&gt;Dueñas founded &lt;a href="https://smarttracker.ai/" target="_blank" rel="noopener"&gt;SmartTracker&lt;/a&gt; to meet that need. It’s a modern, cost-effective solution designed specifically for public sector health organizations operating with limited budgets and small teams. Dueñas built it entirely on &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt; for departments that serve communities like the one he grew up in.&lt;/p&gt; 
&lt;p&gt;In 2025, AWS recognized the importance of that mission by selecting SmartTracker for the &lt;a href="https://aws.amazon.com/government-education/nonprofits/global-social-impact/health-equity/" target="_blank" rel="noopener"&gt;AWS Health Equity Initiative&lt;/a&gt;, a commitment to supporting organizations developing solutions to advance health equity worldwide.&lt;/p&gt; 
&lt;h3&gt;From emergency response to everyday operations&lt;/h3&gt; 
&lt;p&gt;SmartTracker is a &lt;a href="https://www.hhs.gov/hipaa/index.html" target="_blank" rel="noopener"&gt;Health Insurance Portability and Accountability Act (HIPAA)&lt;/a&gt;-compliant health and human services solution that unifies real-time disease surveillance, outbreak detection, case management, vaccine tracking, and contact tracing in a single system. With SmartTracker, health departments can track, report, and share data across their entire operation without the manual handoffs and siloed workflows that slow traditional systems down.&lt;/p&gt; 
&lt;p&gt;SmartTracker’s impact is most visible in how quickly public health organizations can respond when conditions change. The Kansas City Health Department (KCHD), which serves more than 500,000 residents, is one of the company’s most compelling proof points. When Mpox emerged as a public health concern, KCHD used SmartTracker to configure vaccine events within days, including patient self-registration, electronic consents, and automated appointment reminders.&lt;/p&gt; 
&lt;p&gt;When KCHD needed Ebola monitoring, SmartTracker deployed a tracking system in 7 days. When measles resurfaced nationally, KCHD was already prepared, running outbreak detection, case management, and communicable disease tracking on the same solution that had powered their COVID-19 and Mpox responses.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“SmartTracker is an excellent tool for public health emergency preparedness,” says Tobias Liu, administrative officer of communicable disease and public health preparedness at KCHD. “By deploying SmartTracker for our COVID vaccine rollout, we were able to register patients, complete screenings, and schedule appointments without the need for dedicated staff support. During our Mpox response, we expanded the functionality to allow for symptom tracking and case management by our epidemiologists.”&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;The same solution supports daily operations across the full spectrum of public health services, from clinic scheduling with patient self-service and program management to vaccine and supply inventory, billing and reimbursement, and automated email and SMS outreach powered by &lt;a href="https://aws.amazon.com/ses/" target="_blank" rel="noopener"&gt;Amazon Simple Email Service&lt;/a&gt; (Amazon SES) and &lt;a href="https://aws.amazon.com/end-user-messaging/" target="_blank" rel="noopener"&gt;AWS End User Messaging&lt;/a&gt;, which are configurable to meet state and federal reporting requirements. Every form and communication automatically converts to the patient’s preferred language at registration.&lt;/p&gt; 
&lt;p&gt;That flexibility extends to implementation. When The SPOT, a youth-centered health clinic operated by the Washington University School of Medicine in St. Louis, urgently needed a modern EHR system with specialized workflows and sufficient administrative capacity, SmartTracker delivered a fully customized deployment in 90 days. An intelligent self-scheduling engine guides patients through a brief intake before booking, automatically routing them to the correct service and collecting all required consents and documentation electronically.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“On day one, it was amazing to watch the schedule fill up without the phones ringing,” says Allison Phad, coordinator of quality improvement at The SPOT. “The SmartTracker team answered every question and resolved every issue during go-live. We couldn’t have done it without them.”&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;h3&gt;Recognized by AWS, certified for the future&lt;/h3&gt; 
&lt;p&gt;The collaboration between SmartTracker and AWS extends beyond credentials. Powered by AWS through cloud computing credits, architectural guidance, and hands-on technical expertise, SmartTracker has the infrastructure and insight to build what public health departments actually need, faster and at greater scale than would otherwise be possible.&lt;/p&gt; 
&lt;p&gt;That support is powering the development of SmartTalk, a telehealth tool to deliver more secure video consultations, encrypted video storage, and medication administration tracking. &lt;a href="https://aws.amazon.com/transcribe/" target="_blank" rel="noopener"&gt;Amazon Transcribe&lt;/a&gt; handles automated transcription, and &lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt; generates clinical insights from patient interactions. Together, these capabilities address the gaps in access, documentation, and treatment compliance that can arise when serving rural and low-income populations.&lt;/p&gt; 
&lt;p&gt;SmartTracker has also achieved the &lt;a href="https://healthit.gov/certification-health-it/" target="_blank" rel="noopener"&gt;Office of the National Coordinator for Health Information Technology (ONC) Health IT Certification&lt;/a&gt;, meeting critical federal compliance standards for interoperability with healthcare providers and state registries. This positions SmartTracker to deliver direct registry connections for real-time public health reporting and to support care transitions across providers.&lt;/p&gt; 
&lt;h3&gt;Infrastructure as a mission&lt;/h3&gt; 
&lt;p&gt;Dueñas set out to build something the market hadn’t prioritized: modern, cost-effective technology built for the public health organizations that serve overlooked communities. SmartTracker is the result of that commitment, shaped as much by his personal experience as his technical expertise.&lt;/p&gt; 
&lt;p&gt;AWS has been central to building that vision. Cloud computing credits removed the financial barriers that have historically kept enterprise-grade infrastructure out of reach for public sector organizations. Architectural guidance and hands-on technical expertise have accelerated development at every stage, from the core solution to SmartTalk. With AWS, SmartTracker has the building blocks to deliver capabilities that would otherwise require resources beyond what a mission-driven startup could sustain.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“We’re not trying to replace the systems built for large hospital networks,” Dueñas says. “We’re trying to make sure that a county health department with a fraction of the budget can still protect its community with modern, real-time technology. That’s the mission.”&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;For public health organizations looking to modernize their operations and expand access to care, SmartTracker demonstrates what becomes possible when mission-driven software is built on AWS.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/publicsector/aws-commits-additional-20m-to-tackle-health-equity-disparities-through-cloud-technology/" target="_blank" rel="noopener"&gt;Learn more about how AWS supports health equity innovation.&lt;/a&gt;&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Transforming federal IT with Datadog’s FedRAMP High solution</title>
		<link>https://aws.amazon.com/blogs/publicsector/transforming-federal-it-with-datadogs-fedramp-high-solution/</link>
		
		<dc:creator><![CDATA[Gina McFarland]]></dc:creator>
		<pubDate>Tue, 12 May 2026 14:15:30 +0000</pubDate>
				<category><![CDATA[Amazon CloudWatch]]></category>
		<category><![CDATA[Amazon EC2]]></category>
		<category><![CDATA[Amazon Elastic Kubernetes Service]]></category>
		<category><![CDATA[AWS CloudFormation]]></category>
		<category><![CDATA[AWS CloudTrail]]></category>
		<category><![CDATA[AWS Lambda]]></category>
		<category><![CDATA[AWS Marketplace]]></category>
		<category><![CDATA[AWS Partner Network]]></category>
		<category><![CDATA[AWS Security Hub]]></category>
		<category><![CDATA[AWS Systems Manager]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Regions]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">54039bc0ed0267badfb89e4adde9d972859d0643</guid>

					<description>In this post, we explore how federal agencies can accelerate modernization, improve cybersecurity incident response, and support continuous compliance monitoring using Datadog’s FedRAMP High authorized observability and security platform.</description>
					<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30994 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/07/Transforming-federal-IT-with-Datadogs-FedRAMP-Class-D-High-solution.png" alt="Transforming federal IT with Datadog's FedRAMP High solution" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;Federal agencies are modernizing their digital services to better support citizens, improve reliability, and meet cybersecurity requirements. This involves upgrading legacy applications, implementing zero-trust architectures, accelerating cloud adoption, and maintaining compliance with frameworks such as the &lt;a href="https://aws.amazon.com/compliance/fedramp/" target="_blank" rel="noopener"&gt;Federal Risk and Authorization Management Program (FedRAMP)&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.datadoghq.com/" target="_blank" rel="noopener"&gt;Datadog&lt;/a&gt;, an Advanced Technology Partner in the &lt;a href="https://aws.amazon.com/partners/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS) Partner Network&lt;/a&gt;, provides AI-powered observability and security. With &lt;a href="https://aws.amazon.com/partners/programs/specializations/" target="_blank" rel="noopener"&gt;AWS Competencies&lt;/a&gt; that validate their expertise in 11 categories, Datadog provides comprehensive visibility across hybrid and &lt;a href="https://aws.amazon.com/multicloud/" target="_blank" rel="noopener"&gt;multicloud&lt;/a&gt; environments. Agencies can use this solution to understand and secure their full technology footprint.&lt;/p&gt; 
&lt;p&gt;Modernization requires visibility across evolving infrastructure—from legacy systems to cloud-based services. Agencies must monitor application performance, validate security controls, track resource utilization, and identify risks across distributed systems. Unified observability helps teams to make faster, more informed decisions and maintain a consistent security posture.&lt;/p&gt; 
&lt;p&gt;In this post, we explore how federal agencies can accelerate modernization, improve cybersecurity incident response, and support continuous compliance monitoring using Datadog’s FedRAMP High authorized observability and security platform.&lt;/p&gt; 
&lt;h3&gt;Meeting federal agency needs with Datadog’s FedRAMP High platform&lt;/h3&gt; 
&lt;p&gt;With Datadog’s FedRAMP High authorization, federal agencies can use Datadog solutions to bring sensitive, mission-critical workloads into a unified, secure observability platform. IT, DevOps, and security teams gain real-time visibility across their full infrastructure. Teams can detect issues faster, improve reliability for citizen-facing applications, and align modernization efforts with federal security standards.&lt;/p&gt; 
&lt;p&gt;Datadog supports key agency requirements through enhanced visibility, integrated security operations, and modernization-focused capabilities.&lt;/p&gt; 
&lt;p&gt;Datadog provides comprehensive infrastructure visibility through:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Unified monitoring across on-premises, hybrid, and multicloud environments&lt;/li&gt; 
 &lt;li&gt;High-resolution metrics, logs, and traces&lt;/li&gt; 
 &lt;li&gt;Automated service discovery and dependency mapping for hosts, containers, and services&lt;/li&gt; 
 &lt;li&gt;Correlated telemetry to accelerate root-cause analysis and reduce mean time to resolution&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;With Datadog, agencies can enhance their security operations and compliance monitoring with:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://www.datadoghq.com/dg/security/siem-solution/" target="_blank" rel="noopener"&gt;Cloud Security Information and Event Management (SIEM)&lt;/a&gt; for real-time threat detection and investigation across AWS, on-premises, and containerized workloads&lt;/li&gt; 
 &lt;li&gt;Built-in FedRAMP High dashboards for monitoring &lt;a href="https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final" target="_blank" rel="noopener"&gt;NIST 800-53 controls&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;Integrations with &lt;a href="https://aws.amazon.com/config/" target="_blank" rel="noopener"&gt;AWS Config&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cloudtrail/" target="_blank" rel="noopener"&gt;AWS CloudTrail&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/security-hub/" target="_blank" rel="noopener"&gt;AWS Security Hub&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;Continuous audit trail collection for accountability and traceability&lt;/li&gt; 
 &lt;li&gt;Unified interface for operational, security, and compliance signals to reduce alert fatigue&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Datadog helps agencies accelerate modernization initiatives by offering:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://www.datadoghq.com/dg/apm/application-observability/" target="_blank" rel="noopener"&gt;Application Performance Monitoring (APM)&lt;/a&gt; and distributed tracing for microservices, APIs, and serverless workloads&lt;/li&gt; 
 &lt;li&gt;Deep visibility into containerized and Kubernetes environments&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.datadoghq.com/integrations/amazon-web-services/" target="_blank" rel="noopener"&gt;Built-in integrations with 100+ AWS services&lt;/a&gt; including &lt;a href="https://aws.amazon.com/ec2/" target="_blank" rel="noopener"&gt;Amazon Elastic Compute Cloud (Amazon EC2)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/ecs/" target="_blank" rel="noopener"&gt;Amazon Elastic Container Service (Amazon ECS)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/eks/" target="_blank" rel="noopener"&gt;Amazon Elastic Kubernetes Service (Amazon EKS)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener"&gt;AWS Lambda&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener"&gt;Amazon CloudWatch&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;Service-level objectives (SLOs) and performance baselines to guide modernization decisions&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;These features help agencies accelerate modernization, validate controls, and maintain audit readiness while navigating evolving requirements. The following screenshot shows the &lt;a href="https://docs.datadoghq.com/security/cloud_security_management/misconfigurations/frameworks_and_benchmarks/supported_frameworks/" target="_blank" rel="noopener"&gt;Datadog Cloud Security dashboard&lt;/a&gt; tracking NIST 800-53 compliance.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/02/22/Figure-1-Datadog-Cloud-Security-dashboard.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30075 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/02/22/Figure-1-Datadog-Cloud-Security-dashboard.png" alt="Datadog cloud security compliance dashboard displaying posture score, top failing findings, resources by severity, and control compliance status supporting NIST 800-53 monitoring." width="624" height="386"&gt;&lt;/a&gt;&lt;em&gt;Figure 1: Datadog Cloud Security dashboard&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;AI-driven insights for federal modernization&lt;/h3&gt; 
&lt;p&gt;Datadog applies AI-driven analytics, such as &lt;a href="https://docs.datadoghq.com/watchdog" target="_blank" rel="noopener"&gt;Watchdog anomaly detection&lt;/a&gt;, to help teams identify issues earlier. Watchdog automatically flags outliers in metrics, logs, and traces, highlighting unusual patterns that can indicate performance or security risks. It correlates signals across services to accelerate root-cause analysis. These insights help agencies maintain resilient operations for mission-critical workloads.&lt;/p&gt; 
&lt;h3&gt;Reference architecture: AWS and Datadog integration for federal workloads&lt;/h3&gt; 
&lt;p&gt;Federal agencies modernizing on AWS require consistent visibility across cloud and hybrid environments. Datadog provides cross-platform observability that unifies these environments in a secure interface.&lt;/p&gt; 
&lt;p&gt;A key component of the reference architecture is the Datadog Agent, deployable through &lt;a href="https://aws.amazon.com/cloudformation/" target="_blank" rel="noopener"&gt;AWS CloudFormation&lt;/a&gt; templates or &lt;a href="https://aws.amazon.com/systems-manager/" target="_blank" rel="noopener"&gt;AWS Systems Manager&lt;/a&gt;. Agencies can use these options to manage the agent securely and at scale across multiple AWS accounts and &lt;a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region" target="_blank" rel="noopener"&gt;Regions&lt;/a&gt;. The agent collects detailed metrics, logs, and traces from AWS services, offering comprehensive insight into cloud infrastructure performance. The following diagram shows the solution architecture.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/02/22/Figure-2-Datadog-for-Government-integration-with-AWS-Cloud-and-customer-data-center-environments.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30074 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/02/22/Figure-2-Datadog-for-Government-integration-with-AWS-Cloud-and-customer-data-center-environments.png" alt="Figure 2: Datadog for Government integration with AWS Cloud and customer data center environments" width="1600" height="901"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 2: Datadog for Government integration with AWS Cloud and customer data center environments&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;Datadog’s own FedRAMP journey provides a repeatable model for federal agencies. Datadog’s blog post, &lt;a href="https://www.datadoghq.com/blog/how-we-use-datadog-for-fedramp-compliance/" target="_blank" rel="noopener"&gt;How We Use Datadog to Further Our FedRAMP® Compliance&lt;/a&gt;, outlines best practices—including standardized tagging, centralized telemetry pipelines, and automated monitoring of control families—that agencies can adapt to strengthen their own compliance operations.&lt;/p&gt; 
&lt;p&gt;To meet FedRAMP’s logging and auditing requirements, Datadog integrates with &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html" target="_blank" rel="noopener"&gt;Amazon CloudWatch Logs&lt;/a&gt;. This centralized log ingestion helps agencies satisfy controls such as AU-2: Audit Events. Tagging strategies can enhance reporting, filtering, and compliance monitoring. The following screenshot shows the Datadog &lt;a href="https://docs.datadoghq.com/logs/explorer/" target="_blank" rel="noopener"&gt;Log Explorer&lt;/a&gt; dashboard, integrated with CloudWatch Logs.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/02/22/Figure-3-Datadog-integrates-seamlessly-with-CloudWatch-Logs.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30073 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/02/22/Figure-3-Datadog-integrates-seamlessly-with-CloudWatch-Logs.png" alt="Datadog Log Explorer interface displaying CloudWatch Logs integration with time-series visualization and detailed log entries." width="1070" height="666"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 3: Datadog integrates seamlessly with CloudWatch Logs&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;The following screenshot is the Datadog Log Explorer dashboard showing real-time monitoring and log analysis capabilities, with built-in search and filtering for compliance monitoring.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/02/22/Figure-4-Datadogs-Log-Explorer.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30072 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/02/22/Figure-4-Datadogs-Log-Explorer.png" alt="Datadog Log Explorer interface displaying real-time log monitoring with time-series bar chart, detailed log entries, and filtering options for compliance tracking." width="1058" height="606"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 4: Datadog’s Log Explorer&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;Datadog’s Cloud SIEM adds real-time threat detection across AWS services, supporting controls such as SI-4: Information System Monitoring. Agencies can create custom rules aligned to FedRAMP-mandated event types and integrate automated alerts with internal incident response workflows. Datadog’s Audit Trail captures platform activity to support accountability and auditing processes.&lt;/p&gt; 
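&lt;p&gt;A custom detection rule of this kind is ultimately a JSON payload posted to Datadog’s security-monitoring rules API. The following Python sketch shows one plausible shape; the query, thresholds, and option values are illustrative, so verify field names against the current Datadog API reference and use your FedRAMP site’s endpoint rather than the commercial one.&lt;/p&gt; 

```python
# Sketch of a Cloud SIEM detection rule payload for Datadog's
# security-monitoring rules API (POST /api/v2/security_monitoring/rules).
# Query, thresholds, and tags are illustrative assumptions for a rule that
# supports SI-4 (Information System Monitoring).
def console_login_failure_rule(threshold=5, window_s=300):
    return {
        "name": "Repeated AWS console login failures",
        "isEnabled": True,
        "queries": [{
            "query": "source:cloudtrail @evt.name:ConsoleLogin",
            "aggregation": "count",
            "groupByFields": ["@userIdentity.arn"],
            "name": "failures",
        }],
        "cases": [{
            "name": "threshold exceeded",
            "status": "high",
            "condition": "failures > {}".format(threshold),
        }],
        "options": {
            "evaluationWindow": window_s,
            "keepAlive": 3600,
            "maxSignalDuration": 7200,
        },
        "message": "Possible brute-force attempt against the AWS console.",
        "tags": ["compliance:si-4"],
    }

rule = console_login_failure_rule()
# Posted with an authenticated request, for example:
# requests.post("https://api.ddog-gov.com/api/v2/security_monitoring/rules",
#               headers={"DD-API-KEY": "...", "DD-APPLICATION-KEY": "..."},
#               json=rule)
```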
&lt;p&gt;Visit the &lt;a href="https://aws.amazon.com/compliance/fedramp/" target="_blank" rel="noopener"&gt;AWS FedRAMP&lt;/a&gt; page to learn more about the comprehensive requirements to achieve FedRAMP compliance.&lt;/p&gt; 
&lt;h3&gt;The evolution of federal cloud security: FedRAMP 20x and beyond&lt;/h3&gt; 
&lt;p&gt;Federal cloud security is shifting toward automation and continuous validation through the FedRAMP 20x initiative. &lt;a href="https://www.fedramp.gov/20x/goals/" target="_blank" rel="noopener"&gt;FedRAMP 20x&lt;/a&gt; introduces five major changes:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Automated validation –&lt;/strong&gt; Aiming for 80% automation of security requirement validation&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Industry alignment –&lt;/strong&gt; Commercial frameworks to streamline assessments&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Continuous monitoring –&lt;/strong&gt; Replacing periodic checks with continuous validation&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Direct agency relationships –&lt;/strong&gt; Strengthening collaboration for improved outcomes&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Innovation acceleration –&lt;/strong&gt; Streamlined certification for new services through continuous validation&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;Datadog supports these goals through automated controls monitoring, integrations with commercial frameworks, and built-in continuous validation capabilities. This automation-first approach helps agencies to focus on mission impact while sustaining strong security.&lt;/p&gt; 
&lt;h3&gt;USDA DISC: FedRAMP-compliant monitoring implementation&lt;/h3&gt; 
&lt;p&gt;When the &lt;a href="https://www.usda.gov/" target="_blank" rel="noopener"&gt;U.S. Department of Agriculture (USDA)&lt;/a&gt; &lt;a href="https://www.usda.gov/about-usda/general-information/staff-offices/office-chief-information-officer/digital-infrastructure-services-center-disc" target="_blank" rel="noopener"&gt;Digital Infrastructure Services Center (DISC)&lt;/a&gt; needed to modernize monitoring and comply with the &lt;a href="https://www.cisa.gov/topics/cybersecurity-best-practices/executive-order-improving-nations-cybersecurity" target="_blank" rel="noopener"&gt;Executive Order on Improving the Nation’s Cybersecurity&lt;/a&gt;, it partnered with &lt;a href="https://www.datadoghq.com/case-studies/usda/" target="_blank" rel="noopener"&gt;ECCO Select to implement Datadog’s observability platform&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;As a federated data center serving 14 departments and bureaus, DISC required a secure, compliant solution capable of supporting a complex hybrid environment. In only 75 days, the team deployed monitoring across thousands of hosts and containers—achieving 95% coverage across cloud and on-premises systems. The implementation included transitioning more than 1,000 monitoring templates while maintaining operational continuity.&lt;/p&gt; 
&lt;p&gt;The impact was clear. As Chris Condon, Director of Enterprise Observability at ECCO Select, explains,&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“We now have a comprehensive solution that not only speeds up root-cause analysis when there’s an issue but continuously provides the visibility we need to keep our systems secure and resilient.”&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;DISC’s experience demonstrates how federal agencies can meet rigorous security requirements while accelerating modernization with FedRAMP-compliant observability.&lt;/p&gt; 
&lt;h3&gt;Transform your agency’s observability and security posture today&lt;/h3&gt; 
&lt;p&gt;Federal agencies can modernize efficiently with Datadog’s FedRAMP High authorized platform. Visit Datadog for Government in &lt;a href="https://aws.amazon.com/marketplace/" target="_blank" rel="noopener"&gt;AWS Marketplace&lt;/a&gt; to begin a trial or connect with Datadog’s federal team to strengthen operational resilience, improve security visibility, and support mission-critical workloads.&lt;/p&gt; 
&lt;h3&gt;Learn more&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/marketplace/seller-profile?id=e56c35d0-c5d4-4dac-91d5-ebf57fef6e5c" target="_blank" rel="noopener"&gt;Datadog solutions in AWS Marketplace&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://www.datadoghq.com/blog/datadog-fedramp-high-in-process/" target="_blank" rel="noopener"&gt;Why FedRAMP High Observability Matters for Government IT Teams&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://www.datadoghq.com/solutions/government/" target="_blank" rel="noopener"&gt;Datadog for Government&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;AWS integration guides in &lt;a href="https://docs.datadoghq.com/integrations/guide/#aws-guides" target="_blank" rel="noopener"&gt;Datadog documentation&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>A governance framework for nonprofit agentic AI on AWS</title>
		<link>https://aws.amazon.com/blogs/publicsector/a-governance-framework-for-nonprofit-agentic-ai-on-aws/</link>
		
		<dc:creator><![CDATA[Mike George]]></dc:creator>
		<pubDate>Mon, 11 May 2026 18:13:54 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock AgentCore]]></category>
		<category><![CDATA[Amazon Cognito]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">ef9ac5551f39873f32f741c7d046bd8c71a51009</guid>

					<description>In this post, I outline a governance framework that addresses these problems through features of Amazon Bedrock AgentCore. By following the practices outlined here, you can gain confidence to run your agentic AI workloads in highly demanding situations.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-31012 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/09/A-governance-framework-for-nonprofit-agentic-AI-on-AWS.png" alt="A governance framework for nonprofit agentic AI on AWS" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;Every business leader is being asked to do more with less, and &lt;a href="https://aws.amazon.com/ai/agentic-ai/" target="_blank" rel="noopener"&gt;agentic AI&lt;/a&gt; promises to help organizations be more productive without requiring additional staff. However, nonprofit organizations face a unique set of challenges, such as high volunteer and staff turnover, board accountability, and compliance and reporting requirements. Accountability and trust problems arise when agents run under uncontrolled accounts, an agent’s decision or action can’t be explained, or an organization is unable to demonstrate that sensitive data was handled appropriately. In this post, I outline a governance framework that addresses these problems through features of &lt;a href="https://aws.amazon.com/bedrock/agentcore/" target="_blank" rel="noopener"&gt;Amazon Bedrock AgentCore&lt;/a&gt;. By following the practices outlined here, you can gain confidence to run your agentic AI workloads in highly demanding situations.&lt;/p&gt; 
&lt;h3&gt;Overview&lt;/h3&gt; 
&lt;p&gt;Governance frameworks for agentic AI exist, but many are burdensome to implement, and few tie governance directly to the technology being used. The framework I introduce here aligns each governance concern directly with the technology that addresses it.&lt;/p&gt; 
&lt;p&gt;At its core, governing agentic AI really comes down to four questions you, your leadership, or your board will want to understand:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;What is the agent allowed to do?&lt;/li&gt; 
 &lt;li&gt;When an agent runs, who is it acting for?&lt;/li&gt; 
 &lt;li&gt;What did the agent do?&lt;/li&gt; 
 &lt;li&gt;What happens when the agent gets something wrong?&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;These are questions that leadership or the board might ask after an agent sends an unintended donor communication, a large donor requests an audit, or you notice that an agent used a former volunteer’s access credentials that were never revoked.&lt;/p&gt; 
&lt;p&gt;Being able to answer these questions builds trust in agentic AI and your team’s ability to manage it within the organization. This trust framework consists of four pillars that directly map to a capability in Amazon Bedrock AgentCore:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Boundaries&lt;/strong&gt; – Define what agents can and can’t do; when an agent attempts an action outside those limits, it is denied.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Identity&lt;/strong&gt; – Agents are designed to act on behalf of an authorized person. When the individual is no longer with the organization, their access must also automatically expire.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Visibility&lt;/strong&gt; – Maintaining a trail of the actions the agent takes is essential.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Evaluation&lt;/strong&gt; – When an agent behaves unexpectedly, you should be able to detect it, stop it (through alerting), and explain it.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The order here is important. You create boundaries to define what an agent can and can’t do before you identify who can authorize it to do those things. Identity must be defined before visibility so you can record the actions initiated by the authorized user. Finally, visibility happens before evaluation because you can’t respond to events that you’re unable to see. Together these pillars can give you and your leadership confidence in moving from a &lt;a href="https://aws.amazon.com/generative-ai/" target="_blank" rel="noopener"&gt;generative AI&lt;/a&gt; prototype to production.&lt;/p&gt; 
&lt;p&gt;The following graphic illustrates these four pillars and the order in which they should be enacted.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/09/Figure-1-Agentic-AI-trust-framework.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-31010 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/09/Figure-1-Agentic-AI-trust-framework.png" alt="Graphic illustrating the order of the four pillars. Boundaries should be first, then identity, visibility, and finally evaluation." width="664" height="154"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 1: Agentic AI trust framework&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Pillar 1: Boundaries&lt;/h3&gt; 
&lt;p&gt;Most agentic system failures aren’t caused by models doing unpredictable things. Rather, they’re often due to agents doing exactly what they were told to do, but in contexts or with data that was unexpected. For example, a donor outreach agent might work perfectly fine for a list of 50 lapsed donors but might do something unexpected when it’s run against the full member database.&lt;/p&gt; 
&lt;p&gt;These sorts of problems can be solved by &lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/policy.html" target="_blank" rel="noopener"&gt;Policy in Amazon Bedrock AgentCore&lt;/a&gt;. You can use policies to define what an agent can and can’t do in natural language. The natural language boundary is converted into &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/saas-multitenant-api-access-authorization/cedar.html" target="_blank" rel="noopener"&gt;Cedar&lt;/a&gt;, an open source policy language from &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt; for fine-grained permissions. As an example, you might write a boundary statement such as: “This agent may draft donor communications but may not send to more than 100 recipients.” This natural language boundary is converted into a Cedar policy that is validated against your gateway’s &lt;a href="https://modelcontextprotocol.io/docs/getting-started/intro" target="_blank" rel="noopener"&gt;Model Context Protocol (MCP)&lt;/a&gt; tool schema, enabling the policy to map to actual tool parameters and actions.&lt;/p&gt; 
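&lt;p&gt;As a purely hypothetical illustration, a boundary like the one above might compile to a Cedar policy along these lines. The action and attribute names here are invented for the example; the real policy AgentCore generates is validated against your gateway’s actual MCP tool schema.&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-json"&gt;// Illustrative Cedar only: action and attribute names are placeholders.
permit (
  principal,
  action == Action::"draft_donor_communication",
  resource
);

forbid (
  principal,
  action == Action::"send_donor_communication",
  resource
) when { context.recipient_count &amp;gt; 100 };
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 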
&lt;p&gt;Policy integrates with &lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/gateway.html" target="_blank" rel="noopener"&gt;Amazon Bedrock AgentCore Gateway&lt;/a&gt;, which is a straightforward and secure way for developers to build, deploy, discover, and connect tools at scale. Whenever a tool is called through AgentCore Gateway, the policy is evaluated in real time. This prevents the agent from exceeding its authorized scope regardless of how it was invoked or what it was asked to do. Policies follow the basic pattern of who (which users or roles can perform the action), what (which operations or tools can they use), and when (under what conditions or constraints). You can configure enforcement mode to one of the following:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;LOG_ONLY&lt;/code&gt; – Where the policy engine logs whether the action would be allowed or denied without enforcing the decision. This is a great option to track what would happen if you’re not ready to block actions.&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;ENFORCE&lt;/code&gt; – Where the policy engine evaluates the action and enforces the decision by denying agent operations.&lt;/li&gt; 
&lt;/ul&gt; 
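&lt;p&gt;The difference between the two modes can be sketched in a few lines of plain Python. This isn’t AgentCore code; it only models the semantics, assuming a policy decision reduces to an allow-or-deny result:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-python"&gt;import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("policy")

def apply_enforcement(action_name, allowed, mode):
    # Sketch of the two enforcement modes, not AgentCore source code.
    decision = "ALLOW" if allowed else "DENY"
    if mode == "LOG_ONLY":
        # Record what the policy engine would have done, but never block.
        log.info("policy would %s action %s", decision, action_name)
        return True
    if mode == "ENFORCE":
        # Enforce the decision: denied actions do not run.
        log.info("policy %s action %s", decision, action_name)
        return allowed
    raise ValueError("unknown mode: " + mode)

apply_enforcement("send_bulk_email", allowed=False, mode="LOG_ONLY")  # proceeds, logged
apply_enforcement("send_bulk_email", allowed=False, mode="ENFORCE")   # denied
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;Starting in &lt;code&gt;LOG_ONLY&lt;/code&gt; and reviewing the logged decisions before switching to &lt;code&gt;ENFORCE&lt;/code&gt; is a low-risk way to validate a new policy.&lt;/p&gt; 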
&lt;p&gt;The “who” in this pattern is where the policy connects to identity, which is pillar 2. Policies can be scoped to specific identities or roles, so a board treasurer’s agent might have broader access to financial tools than a volunteer’s agent.&lt;/p&gt; 
&lt;p&gt;You can also use &lt;a href="https://aws.amazon.com/bedrock/guardrails/" target="_blank" rel="noopener"&gt;Amazon Bedrock Guardrails&lt;/a&gt; to filter harmful user inputs and toxic model responses. For example, you might want your agent to redact personally identifiable information (PII) from the &lt;a href="https://aws.amazon.com/what-is/large-language-model/" target="_blank" rel="noopener"&gt;large language model (LLM)&lt;/a&gt; output to protect the privacy of your users or members. You can use a guardrail with the Amazon Bedrock &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-use-converse-api.html" target="_blank" rel="noopener"&gt;Converse API&lt;/a&gt; or the &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-use-independent-api.html" target="_blank" rel="noopener"&gt;ApplyGuardrail API&lt;/a&gt;.&lt;/p&gt; 
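&lt;p&gt;For example, the following sketch builds the request for the ApplyGuardrail API and shows where the boto3 call would go. The guardrail ID, version, and text here are placeholders for a guardrail you’ve already configured:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-python"&gt;def build_apply_guardrail_request(guardrail_id, version, text):
    # Request shape for the Amazon Bedrock ApplyGuardrail API; the
    # guardrail ID and version are placeholders for your own guardrail.
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
        "source": "OUTPUT",  # "OUTPUT" checks model responses; "INPUT" screens prompts
        "content": [{"text": {"text": text}}],
    }

request = build_apply_guardrail_request("gr-EXAMPLEID", "1", "Draft donor reply ...")
# With AWS credentials configured, send it with boto3 and inspect the result:
#   response = boto3.client("bedrock-runtime").apply_guardrail(**request)
#   if response["action"] == "GUARDRAIL_INTERVENED":
#       ...  # the guardrail blocked or redacted content, such as PII
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 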
&lt;h3&gt;Pillar 2: Identity&lt;/h3&gt; 
&lt;p&gt;Through inconsistent identity management, you could end up in a situation where a volunteer is granted access to the donor customer relationship management (CRM) tool to help with a campaign. They leave the organization, and a month later someone discovers that a caller provisioned during the campaign is still accessing the CRM agent with the volunteer’s old credentials. Without identity governance, this is a problem that many nonprofit organizations unfortunately experience.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/identity-overview.html" target="_blank" rel="noopener"&gt;Amazon Bedrock AgentCore Identity&lt;/a&gt; uses OAuth-based authorization so agents are designed to act on behalf of a specific, verified identity. That identity can be scoped, audited, and revoked. Nonprofit organizations can use enterprise identity providers such as Okta, Microsoft Entra, and &lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/identity-idps.html" target="_blank" rel="noopener"&gt;many others&lt;/a&gt;. Organizations without an enterprise identity provider can use &lt;a href="https://aws.amazon.com/cognito/" target="_blank" rel="noopener"&gt;Amazon Cognito&lt;/a&gt;. When someone leaves the organization, disabling their identity will automatically disable any agent’s authorization to act on their behalf.&lt;/p&gt; 
&lt;p&gt;With AgentCore Identity, you can control both &lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/runtime-oauth.html" target="_blank" rel="noopener"&gt;inbound authentication and outbound authentication&lt;/a&gt;. With inbound authentication, a nonprofit organization can provide callers with the right access to an agent, tool, runtime, or gateway. With outbound authentication, organizations can use an API key or OAuth to allow agent, tool, or gateway access to downstream resources, such as third-party systems.&lt;/p&gt; 
&lt;h3&gt;Pillar 3: Visibility&lt;/h3&gt; 
&lt;p&gt;To have trust in your AI systems, you need to know what an agent did at each step of its invocation. &lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability.html" target="_blank" rel="noopener"&gt;Amazon Bedrock AgentCore Observability&lt;/a&gt; helps you trace, debug, and monitor agent performance across each step of the agent workflow.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability-service-provided.html" target="_blank" rel="noopener"&gt;AgentCore services automatically emit&lt;/a&gt; a set of metrics for agents, gateway resources, and memory resources. You can enable log data and spans for memory resources, and you can &lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability-configure.html" target="_blank" rel="noopener"&gt;instrument your custom agents&lt;/a&gt; to provide custom trace data. With this observability data, you can troubleshoot performance problems, validate agent tool selection, and identify hard-to-reproduce issues. For example, when a board member asks what the agent did with donor data on a specific date, you can trace the exact sequence of tool calls, inputs, and outputs to provide a complete answer.&lt;/p&gt; 
&lt;h3&gt;Pillar 4: Evaluation&lt;/h3&gt; 
&lt;p&gt;A fundamental feature of agentic systems is that they’re nondeterministic. This property can be frustrating because it can &lt;a href="https://aws.amazon.com/blogs/publicsector/why-your-ai-agents-give-inconsistent-results-and-how-agent-sops-fix-it/" target="_blank" rel="noopener"&gt;lead to inconsistent results&lt;/a&gt;, but nondeterminism is the fundamental property that makes agentic AI useful. Consider a situation where an agent is generating outreach messages to members that are tonally off and inconsistent with the organization’s voice. It’s important to be able to detect that this is happening.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/evaluations.html" target="_blank" rel="noopener"&gt;Amazon Bedrock AgentCore Evaluations&lt;/a&gt; provides automated assessment tools to measure how well your agents are performing their tasks. AgentCore Evaluations uses LLM-as-a-judge or code-based evaluators to determine how your agentic workload is performing. AgentCore Evaluations generates a &lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/prompt-templates-builtin.html#goal-success-rate" target="_blank" rel="noopener"&gt;series of evaluation metrics&lt;/a&gt; such as Goal success rate, Harmfulness, and Stereotyping. These metrics are created as &lt;a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener"&gt;Amazon CloudWatch&lt;/a&gt; metrics that you can monitor and alert on like any other metric.&lt;/p&gt; 
&lt;p&gt;For example, the harmfulness evaluator scores whether the agent’s response includes potentially harmful content, such as insults or hate speech. If the average score falls below 1 (meaning the content is potentially harmful), you can alert on the metric and disable the agent while you investigate. Nonprofit organizations can use this approach to respond quickly to events and have confidence in the output of their agents.&lt;/p&gt; 
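&lt;p&gt;As a sketch, that kind of alert could be created with the CloudWatch PutMetricAlarm API. The namespace, metric name, and SNS topic below are placeholders; use the names your evaluations actually publish:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-python"&gt;def build_harmfulness_alarm(namespace, metric_name, topic_arn):
    # Parameters for the CloudWatch put_metric_alarm call. Alarm when the
    # average harmfulness score drops below 1 (potentially harmful content).
    return {
        "AlarmName": "agent-harmfulness-below-threshold",
        "Namespace": namespace,        # placeholder namespace
        "MetricName": metric_name,     # placeholder metric name
        "Statistic": "Average",
        "Period": 300,                 # evaluate in 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": 1.0,
        "ComparisonOperator": "LessThanThreshold",
        "AlarmActions": [topic_arn],   # for example, an SNS topic that pages the team
    }

params = build_harmfulness_alarm("YourAgentNamespace", "Harmfulness",
                                 "arn:aws:sns:us-east-1:111122223333:agent-alerts")
# boto3.client("cloudwatch").put_metric_alarm(**params)
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 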
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;Amazon Bedrock AgentCore provides the technology that supports the four pillars of this framework. It places boundaries on what your agents can do, helps verify that your agents are authorized to do what they’re asked to do, provides visibility into their actions, and gives you the tools to set alerts for when something unexpected happens.&lt;/p&gt; 
&lt;p&gt;Two common concerns I hear about generative AI applications are:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;How do we get started?&lt;/li&gt; 
 &lt;li&gt;How do we build the experience to run generative AI workloads in production?&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;Both concerns are addressed by starting small. When building an AI agent, a best practice is to begin with something internal facing: low risk, but still valuable.&lt;/p&gt; 
&lt;p&gt;At Amazon, we like to say that &lt;a href="https://aws.amazon.com/blogs/enterprise-strategy/failing-creating-a-culture-of-learning/" target="_blank" rel="noopener"&gt;there is no compression algorithm for experience&lt;/a&gt;. By starting with a small, low-risk application, teams can gain operational experience running agentic AI workloads while keeping the risk low. As an example, an organization might deploy an internal knowledge base agent.&lt;/p&gt; 
&lt;p&gt;When you have one workflow operating under the full framework outlined, expand your efforts to other higher-stakes workflows that generate value. At this point, you understand the framework, so your work shifts from a governance problem to following the same pattern you’ve already established. The framework I outline in this post can be incorporated into the regular work of an organization’s &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/cloud-center-of-excellence/introduction.html" target="_blank" rel="noopener"&gt;Cloud Center of Excellence (CCoE)&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;The Amazon Bedrock AgentCore services I talked about in this post are charged based on how much you use. For complete details, refer to the &lt;a href="https://aws.amazon.com/bedrock/agentcore/pricing/" target="_blank" rel="noopener"&gt;Amazon Bedrock AgentCore Pricing&lt;/a&gt; page.&lt;/p&gt; 
&lt;p&gt;As a next step, &lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-get-started-cli.html" target="_blank" rel="noopener"&gt;get started&lt;/a&gt; by adding Amazon Bedrock AgentCore features to your agentic workloads. Depending on your use case, &lt;a href="https://github.com/awslabs/agentcore-samples" target="_blank" rel="noopener"&gt;review the AgentCore code examples&lt;/a&gt;, which provide a deeper dive into the concepts I outlined here.&lt;/p&gt;
					
		
		
			</item>
		<item>
		<title>Stakeholder management for mission-critical cloud migrations: Lessons from the public sector</title>
		<link>https://aws.amazon.com/blogs/publicsector/stakeholder-management-for-mission-critical-cloud-migrations-lessons-from-the-public-sector/</link>
		
		<dc:creator><![CDATA[Mark Scutch]]></dc:creator>
		<pubDate>Wed, 06 May 2026 22:40:02 +0000</pubDate>
				<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">ede35b510da504c73c2d7b3e8a4f37a03e919512</guid>

					<description>Learn how public sector organizations have been migrating their mission-critical systems to Amazon Web Services (AWS), spanning multiple government entities, system integrators, independent software vendors (ISVs), and AWS teams.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-32848 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/c5b76da3e608d34edb07244cd9b875ee86906328/2026/04/26/Stakeholder-management-for-mission-critical-cloud-migrations.png" alt="Stakeholder management for mission-critical cloud migrations: Lessons from the public sector" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;When public sector organizations embark on large-scale migration and modernization efforts, particularly those involving regulated workloads, multiple organizational entities, and implementation teams, one of the primary determinants of success is stakeholder management. Stakeholders are individuals and teams who have a vested interest in the success of the migration. These stakeholders include executive sponsors, technical teams, contracted organizations, compliance officers, and end users who will ultimately interact with the modernized system. Mission-critical systems directly impact essential operations. If these systems fail or experience downtime, they can disrupt critical services, affect end users and the population they serve, or create operational or financial consequences.&amp;nbsp;Public sector organizations have been migrating their mission-critical systems to &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;, spanning multiple government entities, system integrators, &lt;a href="https://aws.amazon.com/what-is/independent-software-vendor/" target="_blank" rel="noopener"&gt;independent software vendors (ISVs)&lt;/a&gt;, and AWS teams. Although these migrations ultimately delivered improved scalability, resiliency, and operational efficiency, the most impactful lessons emerged from how stakeholders were aligned, governed, and engaged throughout the journey.&lt;/p&gt; 
&lt;p&gt;According to &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-enterprise-transformation/introduction.html" target="_blank" rel="noopener"&gt;Accelerating your return on cloud investment&lt;/a&gt; in AWS Prescriptive Guidance, the stakes of getting stakeholder management right are high. Research cited by AWS shows that 88% of cloud transformations that don’t prioritize culture-centric changes fail to result in sustained performance gains after 3 years, underscoring why structured stakeholder engagement isn’t a soft skill but a mission-critical discipline.&lt;/p&gt; 
&lt;p&gt;For organizations preparing for a similar mission-critical migration or modernization, the following stakeholder management principles can significantly reduce risk and improve outcomes.&lt;/p&gt; 
&lt;h3&gt;Establish stakeholder alignment before finalizing architecture&lt;/h3&gt; 
&lt;p&gt;Early stakeholder alignment sets the foundation for delivery success. In complex environments, ambiguity around ownership and decision rights can quickly create downstream delays, even when technical plans are sound. Although speed matters in cloud migrations, rushing architectural decisions without stakeholder alignment often creates costly delays later. The key is establishing rapid decision-making processes up front so that collaboration accelerates rather than impedes progress.&lt;/p&gt; 
&lt;p&gt;In practice, “early” means during the initial planning phase (typically the first 30–60 days of the program), before architectural decisions are locked in and before migration waves begin. This is when you’re still defining the scope, identifying workloads, and establishing governance structures. Prioritize early agreement on:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Clear ownership models across agencies, service providers, and vendors&lt;/li&gt; 
 &lt;li&gt;Decision-making authority and escalation paths&lt;/li&gt; 
 &lt;li&gt;Shared success criteria for each phase of the migration lifecycle&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Aligning stakeholders early grounds architectural decisions, timelines, and operating models in a common understanding of responsibilities and outcomes.&lt;/p&gt; 
&lt;h3&gt;Implement a governance cadence that matches system criticality&lt;/h3&gt; 
&lt;p&gt;Governance refers to the structured decision-making framework that defines who makes decisions, how issues are escalated, and how progress is tracked and reported throughout the project effort. Mission-critical workloads require governance structures that are both rigorous and efficient. Lightweight governance models often fail to surface risks quickly enough in complex, multi-stakeholder environments.&lt;/p&gt; 
&lt;p&gt;For mission-critical systems, those that directly impact citizen services, involve regulated data, or have significant operational dependencies, effective governance typically includes the following elements:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Weekly operational forums&lt;/strong&gt; focused on cross-team blockers and dependencies&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Biweekly executive sponsor reviews&lt;/strong&gt; to drive decisions and manage risk&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Defined escalation mechanisms&lt;/strong&gt; for issues requiring immediate resolution&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;For less critical systems—those with lower operational impact or simpler technical requirements—governance can be lighter weight with biweekly operational check-ins and monthly executive reviews.&lt;/p&gt; 
&lt;p&gt;Structured governance cadences meaningfully accelerate decision-making, with organizations reporting that regular operational forums help surface and resolve blockers significantly faster than improvised approaches.&lt;/p&gt; 
&lt;p&gt;This layered governance approach creates alignment at both the delivery and leadership levels while maintaining momentum.&lt;/p&gt; 
&lt;h3&gt;Continuously evaluate stakeholder delivery capability&lt;/h3&gt; 
&lt;p&gt;Stakeholder readiness directly impacts migration execution quality and timeline predictability. In complex environments involving multiple system integrators, ISVs, and internal teams, capability gaps or unclear responsibilities can introduce delivery risk. Organizations should continuously evaluate the delivery capability of stakeholders throughout the migration lifecycle.&lt;/p&gt; 
&lt;p&gt;Assessment should focus on both technical capability and operational readiness, including:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Cloud platform expertise aligned to the target architecture and services&lt;/li&gt; 
 &lt;li&gt;Regulatory and compliance experience relevant to the workload or industry&lt;/li&gt; 
 &lt;li&gt;Organizational readiness for post-migration operations, support, and optimization&lt;/li&gt; 
 &lt;li&gt;Technical certifications and cloud delivery experience across participating teams&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;In migrations involving multiple implementation teams, delivery success also depends on effective coordination across organizations. Even experienced teams can introduce risk if communication channels, escalation paths, or ownership boundaries are unclear. Establishing clear governance structures—such as defined escalation paths, communication cadences, and delivery responsibilities—helps confirm that issues are identified early and resolved quickly.&lt;/p&gt; 
&lt;p&gt;Conducting these assessments during the planning phase helps organizations identify capability gaps and coordination risks before execution begins. By validating both stakeholder capability and collaboration readiness, organizations can reduce the likelihood of delays and quality issues during critical migration phases while improving overall delivery predictability.&lt;/p&gt; 
&lt;h3&gt;Design for evolving requirements and regulatory complexity&lt;/h3&gt; 
&lt;p&gt;In regulated environments, requirements frequently evolve as security reviews, compliance audits, and operational validations progress. Treating requirements as static introduces schedule and cost risk.&lt;/p&gt; 
&lt;p&gt;Planning for regulatory complexity means:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Establishing formal technical change control processes with clear impact assessment procedures&lt;/li&gt; 
 &lt;li&gt;Building buffer time into schedules for compliance reviews and security validations&lt;/li&gt; 
 &lt;li&gt;Engaging compliance and security teams early in the architecture design phase&lt;/li&gt; 
 &lt;li&gt;Maintaining transparency on timeline, cost, and scope tradeoffs with all stakeholders&lt;/li&gt; 
 &lt;li&gt;Creating a risk register that tracks regulatory dependencies and mitigation strategies&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Organizations with formal change control processes are better equipped to avoid schedule disruptions and cost overruns by surfacing potential issues early, before they escalate into delivery risks.&lt;/p&gt; 
&lt;p&gt;Being proactive about stakeholder engagement means teams can adapt without destabilizing delivery. When technical changes affect critical milestones, executive stakeholders should be engaged immediately to make informed decisions about tradeoffs.&lt;/p&gt; 
&lt;h3&gt;Maintain consistent executive engagement throughout the lifecycle&lt;/h3&gt; 
&lt;p&gt;Executive engagement is not a periodic checkpoint, but rather a core control mechanism in complex migrations. Consistent leadership involvement accelerates decision-making, resolves ownership disputes, and reinforces priorities across organizations.&lt;/p&gt; 
&lt;p&gt;Executive engagement requires active participation from the most senior leader accountable for the migration’s success. This leader is typically a chief information officer (CIO), chief technical officer (CTO), or agency director. This leader should have the authority to make final decisions, allocate resources, and resolve cross-organizational conflicts.&lt;/p&gt; 
&lt;p&gt;To apply this to your organization, you need to identify who has ultimate accountability for the migration’s success. Make sure this person is visible in governance forums, receives regular status updates, and is available for rapid decision-making when critical issues arise.&lt;/p&gt; 
&lt;p&gt;Successful programs require executive sponsors to:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Demonstrate visible commitment that reinforces priorities across delivery teams&lt;/li&gt; 
 &lt;li&gt;Actively participate in governance forums (at least monthly)&lt;/li&gt; 
 &lt;li&gt;Resolve cross-organizational conflicts quickly&lt;/li&gt; 
 &lt;li&gt;Support timely decisions when constraints compete&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;In multiagency environments, executive sponsors serve as the ultimate decision authority and beacon for organizational alignment.&lt;/p&gt; 
&lt;h3&gt;Use AWS as a stakeholder orchestrator&lt;/h3&gt; 
&lt;p&gt;Beyond infrastructure and services, AWS can play a critical role in convening stakeholders across customers, supporting organizations, and vendors. In complex migrations, this orchestration capability helps maintain alignment and momentum.&lt;/p&gt; 
&lt;p&gt;Organizations can benefit by:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Using AWS to reinforce a unified “one team” delivery model&lt;/li&gt; 
 &lt;li&gt;Engaging AWS security, compliance, and assurance teams early&lt;/li&gt; 
 &lt;li&gt;Using structured support for cutover and go-live planning&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;This collaborative approach helps customers navigate complexity while reducing operational and compliance risk.&lt;/p&gt; 
&lt;h3&gt;Stakeholder engagement self-assessment&lt;/h3&gt; 
&lt;p&gt;Before beginning your migration journey, it’s essential to evaluate your organization’s stakeholder management maturity across dimensions that directly correlate with migration success. A self-assessment provides a brief structured framework to identify strengths and gaps in your current approach. You can then proactively address weaknesses before they become blockers during active migration work.&lt;/p&gt; 
&lt;p&gt;Prioritize addressing the most critical gaps, starting with executive sponsorship and stakeholder alignment because these foundational elements enable progress in other areas.&lt;/p&gt; 
&lt;p&gt;The &lt;a href="https://aws.amazon.com/cloud-adoption-framework/" target="_blank" rel="noopener"&gt;AWS Cloud Adoption Framework (AWS CAF)&lt;/a&gt; provides additional guidance on organizational readiness and stakeholder engagement strategies that can help you develop comprehensive improvement plans across the People, Governance, and Business perspectives.&lt;/p&gt; 
&lt;p&gt;Use this quick assessment to evaluate your migration’s stakeholder management maturity:&lt;/p&gt; 
&lt;p&gt;Executive sponsorship: Do you have an identified executive sponsor who actively participates in governance and can make final decisions?&lt;br&gt; ☐ Effective &amp;nbsp;&amp;nbsp; ☐ Needs improvement &amp;nbsp;&amp;nbsp; ☐ Does not exist&lt;/p&gt; 
&lt;p&gt;Implementor readiness: Have you formally assessed solution provider and contractor cloud capabilities and organizational change management readiness?&lt;br&gt; ☐ Effective &amp;nbsp;&amp;nbsp; ☐ Needs improvement &amp;nbsp;&amp;nbsp; ☐ Does not exist&lt;/p&gt; 
&lt;p&gt;Governance structure: Do you have regular operational and executive forums with clear escalation paths?&lt;br&gt; ☐ Effective &amp;nbsp;&amp;nbsp; ☐ Needs improvement &amp;nbsp;&amp;nbsp; ☐ Does not exist&lt;/p&gt; 
&lt;p&gt;Stakeholder alignment: Have all parties agreed on ownership, decision rights, and success criteria?&lt;br&gt; ☐ Effective &amp;nbsp;&amp;nbsp; ☐ Needs improvement &amp;nbsp;&amp;nbsp; ☐ Does not exist&lt;/p&gt; 
&lt;p&gt;Change management: Do you have processes to handle evolving requirements and regulatory complexity?&lt;br&gt; ☐ Effective &amp;nbsp;&amp;nbsp; ☐ Needs improvement &amp;nbsp;&amp;nbsp; ☐ Does not exist&lt;/p&gt; 
&lt;p&gt;If you answered “Needs improvement” or “Does not exist” to any question, prioritize addressing that area before beginning active migration work.&lt;/p&gt; 
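&lt;p&gt;If you want to track these answers over time, the checklist above can be reduced to a few lines of Python. This is just a sketch; the priority order reflects the guidance in this post that executive sponsorship and stakeholder alignment come first:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-python"&gt;# Turn the checklist into a prioritized gap list before migration work begins.
PRIORITY = [
    "Executive sponsorship",
    "Stakeholder alignment",
    "Governance structure",
    "Implementor readiness",
    "Change management",
]

def gaps_to_address(ratings):
    # Return dimensions rated below Effective, in priority order.
    return [d for d in PRIORITY
            if ratings.get(d) in ("Needs improvement", "Does not exist")]

ratings = {
    "Executive sponsorship": "Effective",
    "Implementor readiness": "Needs improvement",
    "Governance structure": "Effective",
    "Stakeholder alignment": "Does not exist",
    "Change management": "Effective",
}
print(gaps_to_address(ratings))  # ['Stakeholder alignment', 'Implementor readiness']
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 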
&lt;h3&gt;Use Amazon Quick or Amazon Bedrock to generate a comprehensive assessment&lt;/h3&gt; 
&lt;p&gt;AI-powered tools can dramatically accelerate your migration readiness evaluation, transforming what traditionally takes weeks of manual analysis into a comprehensive assessment generated in minutes. By using &lt;a href="https://aws.amazon.com/quick/" target="_blank" rel="noopener"&gt;Amazon Quick&lt;/a&gt; or &lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt;, organizations can quickly evaluate their stakeholder management maturity, identify gaps, and receive actionable recommendations tailored to their specific context.&lt;/p&gt; 
&lt;p&gt;Use the following prompt to generate a customized readiness assessment:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-json"&gt;I am the [CIO/CTO/CISO] of [Organization Name], and we are planning a mission-critical
cloud [migration/modernization] involving [X] applications, [Y] servers, and multiple stakeholder groups
including [list: agencies, contractors, vendors, internal teams]. This migration will impact
[describe scope: regulated workloads, citizen services, business operations].

Please conduct a comprehensive stakeholder management and migration readiness assessment
that evaluates our organization across the following dimensions:

1. EXECUTIVE SPONSORSHIP &amp;amp; GOVERNANCE
   - Identify our executive sponsor and assess their level of engagement
   - Evaluate our governance structure (meeting cadence, decision-making authority,
     escalation paths)
   - Assess whether we have clear RACI matrices defining roles and responsibilities
   - Determine if we have appropriate executive review forums established

2. SERVICE PROVIDER &amp;amp; VENDOR READINESS
   - Evaluate our solution implementors' cloud delivery experience for regulated workloads
   - Assess implementor certifications and AWS platform expertise
   - Review implementor escalation paths and support mechanisms
   - Identify organizational change management capabilities across implementor ecosystem

3. STAKEHOLDER ALIGNMENT &amp;amp; COMMUNICATION
   - Map all key stakeholders (internal teams, external support organizations, vendors)
   - Assess clarity of ownership models and decision rights
   - Evaluate communication effectiveness and information flow
   - Identify potential conflicts or misalignments in expectations

4. TECHNICAL &amp;amp; OPERATIONAL READINESS
   - Assess our migration scope definition and workload inventory
   - Evaluate our business case strength (cost, value, drivers)
   - Review our migration execution plan completeness
   - Assess our AWS environment readiness (Landing Zone, security, compliance)
   - Evaluate our skills and training preparedness

5. RISK &amp;amp; CHANGE MANAGEMENT
   - Identify regulatory complexity and compliance requirements
   - Assess our change control processes and impact assessment procedures
   - Evaluate our risk mitigation strategies
   - Review our approach to handling evolving requirements

6. MIGRATION COMPLEXITY FACTORS
   - Number of applications/servers in scope: [X]
   - Regulatory requirements: [list]
   - Number of stakeholder organizations: [Y]
   - Timeline constraints: [dates]
   - Budget parameters: [range]

For each dimension, please:
- Provide a maturity rating (Effective/Needs Improvement/Does Not Exist)
- Identify specific gaps and risks
- Recommend concrete actions with owners and timelines
- Suggest AWS mechanisms or resources that could help (EBAs, LNAs, Migration
  Readiness Assessments)
- Prioritize recommendations by impact and urgency

Additionally, please:
- Generate a readiness statement calculated across five key dimensions: Migration Scope, Business Case, Technical Capabilities, Organizational Change Management, and Migration Planning
- Provide a migration health forecast (On Track/Behind Plan/Elevated Risk/Stalled)
- Generate a 90-day action plan with specific milestones
- Identify early warning indicators we should monitor
- Recommend governance cadence appropriate for our complexity level

Output the assessment in a format suitable for executive presentation, including:
- Executive summary with key findings and recommendations
- Detailed assessment by dimension with supporting evidence
- Risk heat map showing priority areas
- Action plan with clear ownership and timelines
- Success metrics and KPIs to track progress

&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;For more in-depth and context-specific guidance, project plans and responsible, accountable, consulted, informed (RACI) matrices can be uploaded for analysis.&lt;/p&gt; 
&lt;p&gt;By using AI to generate this assessment, you’ll receive a customized evaluation that considers your unique organizational context, compliance landscape, and stakeholder ecosystem, which you can use to proactively address gaps before they become delivery risks.&lt;/p&gt; 
&lt;h3&gt;Key takeaways for mission-critical migrations&lt;/h3&gt; 
&lt;p&gt;Organizations planning large-scale, mission-critical migrations should prioritize stakeholder management as a first-order design consideration. Successful programs consistently demonstrate:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Early and explicit stakeholder alignment&lt;/li&gt; 
 &lt;li&gt;Proactive implementation readiness and enablement&lt;/li&gt; 
 &lt;li&gt;Structured, multilevel governance&lt;/li&gt; 
 &lt;li&gt;Anticipation of evolving requirements&lt;/li&gt; 
 &lt;li&gt;Continuous executive engagement&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Cloud technology enables transformation, but stakeholder management enables delivery. By applying these principles, organizations can better manage complexity, reduce risk, and achieve successful outcomes for their most critical workloads.&lt;/p&gt; 
&lt;h3&gt;Learn more&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/experience-based-acceleration/" target="_blank" rel="noopener"&gt;Experience Based Acceleration (EBA)&lt;/a&gt; for the details you need to get started&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://pages.awscloud.com/global-traincert-AWS-learning-needs-analysis-request-assessment.html" target="_blank" rel="noopener"&gt;Learning Needs Analysis (LNA)&lt;/a&gt; in AWS Training and Certification to request an assessment&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/government-education/contact/" target="_blank" rel="noopener"&gt;AWS Public Sector team&lt;/a&gt; for more information on how AWS can help with stakeholder management&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Integrating subject matter experts into generative AI evaluation with the AWS Generative AI Innovation Center</title>
		<link>https://aws.amazon.com/blogs/publicsector/integrating-subject-matter-experts-into-generative-ai-evaluation-with-the-aws-generative-ai-innovation-center/</link>
		
		<dc:creator><![CDATA[Taylor McNally]]></dc:creator>
		<pubDate>Wed, 06 May 2026 00:38:12 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Amazon SageMaker Ground Truth]]></category>
		<category><![CDATA[Best Practices]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">fe3208319ea9185671c4dc6abc0cab6a5372d797</guid>

					<description>Learn how the Amazon Web Services (AWS) Generative AI Innovation Center has helped dozens of organizations develop evaluation strategies that bridge the “articulation gap” between expert intuition and measurable evaluation criteria.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30648 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/10/Integrating-subject-matter-experts-into-generative-AI-evaluation-with-the-AWS-Generative-AI-Innovation-Center.jpg" alt="Integrating subject matter experts into generative AI evaluation with the AWS Generative AI Innovation Center" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;You’ve coded your &lt;a href="https://aws.amazon.com/generative-ai/" target="_blank" rel="noopener"&gt;generative AI&lt;/a&gt; proof of concept. The demo impresses stakeholders. Then a subject matter expert (SME) sits down with the output and spots a problem no automated metric would catch. The response was fluent, well-structured, and wrong in a way that only someone with deep domain knowledge could identify.&lt;/p&gt; 
&lt;p&gt;This moment plays out across industries as organizations move from prototype to production with generative AI. It’s the disconnect between what domain experts instinctively know is right and what they can express as measurable evaluation criteria.&lt;/p&gt; 
&lt;p&gt;Through our work in the &lt;a href="https://aws.amazon.com/ai/generative-ai/innovation-center/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS) Generative AI Innovation Center&lt;/a&gt;, we’ve helped dozens of organizations develop evaluation strategies that bridge this “articulation gap.” The approach has four phases that act together as a flywheel. First, define what good looks like. Second, build ground truth alongside automated metrics. Third, shift from manual to automated evaluation as the system matures. Fourth, identify new evaluation needs.&lt;/p&gt; 
&lt;p&gt;The following graphic illustrates the flow through these phases. SME involvement evolves from hands-on scoring to periodic calibration as automated metrics mature.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/12/flywheel-hq.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30667 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/12/flywheel-hq.png" alt="Graphic of a flywheel with phase one at the top, phase two at the 90-degree position, phase 3 at 180 degrees, and phase 4 at the 270-degree position. In the center it says, “bridging the articulation gap: continuous improvement.” The phases are explained in detail in the text." width="2160" height="1920"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 1: The evaluation flywheel&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Phase 1. Define what good looks like&lt;/h3&gt; 
&lt;p&gt;Before writing an evaluation script, the most important step is structured conversations with SMEs. The goal is to translate expert intuition into concrete, repeatable criteria. This can be harder than it sounds. Experts often struggle to articulate why one output feels right and another doesn’t, especially when both are factually correct.&lt;/p&gt; 
&lt;p&gt;Start by gathering preliminary requirements. What does the system need to do? What are the highest-risk failure modes? Where would a wrong answer cause real harm? Then move to evaluation design. We find that showing SMEs real model outputs and asking, “What’s wrong with this?” is more productive than asking them to define quality in the abstract. Their reactions reveal implicit standards that would otherwise go unspoken.&lt;/p&gt; 
&lt;p&gt;Consider an education technology company building an AI tutoring assistant. An instructional designer reviewing outputs might flag that the AI gives correct answers but fails to establish the student’s prerequisite knowledge. A math tutor that jumps to derivatives without confirming the student understands limits is technically accurate but pedagogically harmful. That feedback can be used to develop a rubric dimension such as scaffolding appropriateness, scored on a 1–5 scale with a clear definition and anchor example(s) at each level. The rubric becomes the shared language between domain experts and engineers for the rest of the project. The following table shows an example rubric.&lt;/p&gt; 
&lt;table border="3"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Score&lt;/td&gt; 
   &lt;td&gt;Label&lt;/td&gt; 
   &lt;td&gt;Definition&lt;/td&gt; 
   &lt;td&gt;Anchor example&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;5&lt;/td&gt; 
   &lt;td&gt;Excellent&lt;/td&gt; 
   &lt;td&gt;Builds on confirmed prior knowledge, introduces concepts in logical sequence, checks understanding before advancing&lt;/td&gt; 
   &lt;td&gt;“Before we look at derivatives, let’s make sure you’re comfortable with limits. Can you tell me what happens to f(x) = 1/x as x approaches infinity?”&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;4&lt;/td&gt; 
   &lt;td&gt;Good&lt;/td&gt; 
   &lt;td&gt;Follows a logical sequence and references prerequisites, but doesn’t actively verify understanding&lt;/td&gt; 
   &lt;td&gt;“Derivatives build on the concept of limits. Remember that a limit describes the value a function approaches. Now, a derivative measures…”&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;3&lt;/td&gt; 
   &lt;td&gt;Adequate&lt;/td&gt; 
   &lt;td&gt;Correct and organized but assumes prerequisite knowledge without acknowledging it&lt;/td&gt; 
   &lt;td&gt;“The derivative of f(x) measures the rate of change. To find it, we use the limit definition…”&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;2&lt;/td&gt; 
   &lt;td&gt;Poor&lt;/td&gt; 
   &lt;td&gt;Skips prerequisites and uses jargon or notation the learner might not recognize&lt;/td&gt; 
   &lt;td&gt;“Apply the power rule: d/dx[x^n] = nx^(n-1). So for f(x) = x³, f’(x) = 3x².”&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;1&lt;/td&gt; 
   &lt;td&gt;Harmful&lt;/td&gt; 
   &lt;td&gt;Introduces concepts out of order or gives answers that actively confuse the learner&lt;/td&gt; 
   &lt;td&gt;“Use L’Hôpital’s rule here. Take the derivative of the numerator and denominator separately…” (student has not learned derivatives yet)&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
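&lt;p&gt;A rubric like this can also be encoded directly, so the same score definitions drive both the SME labeling interface and any automated judge built later. The following sketch is illustrative only; the structure and function names are our own, not part of the rubric itself:&lt;/p&gt;

```python
# Encode the scaffolding-appropriateness rubric as data, so SME labeling
# tools and an LLM-as-judge prompt share one set of definitions.
RUBRIC = {
    5: ("Excellent", "Builds on confirmed prior knowledge, introduces concepts "
                     "in logical sequence, checks understanding before advancing"),
    4: ("Good", "Follows a logical sequence and references prerequisites, "
                "but doesn't actively verify understanding"),
    3: ("Adequate", "Correct and organized but assumes prerequisite knowledge "
                    "without acknowledging it"),
    2: ("Poor", "Skips prerequisites and uses jargon or notation the learner "
                "might not recognize"),
    1: ("Harmful", "Introduces concepts out of order or gives answers that "
                   "actively confuse the learner"),
}

def judge_prompt(output_text: str) -> str:
    """Render the rubric into an evaluation prompt for an automated judge."""
    lines = ["Score this tutoring response for scaffolding appropriateness (1-5):"]
    for score in sorted(RUBRIC, reverse=True):
        label, definition = RUBRIC[score]
        lines.append(f"{score} ({label}): {definition}")
    lines.append("Response to evaluate:")
    lines.append(output_text)
    lines.append("Reply with a single integer score.")
    return "\n".join(lines)
```

&lt;p&gt;Keeping the rubric in one place means a wording change agreed with SMEs propagates to every consumer of the rubric at once.&lt;/p&gt;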
&lt;h3&gt;Phase 2. Improve ground truth and build automated metrics&lt;/h3&gt; 
&lt;p&gt;Then you can begin building your evaluation dataset. &lt;a href="https://aws.amazon.com/sagemaker/data-labeling/" target="_blank" rel="noopener"&gt;Amazon SageMaker Ground Truth&lt;/a&gt; provides managed labeling workflows where SMEs can score model outputs against your rubric at scale. These human-scored examples become the ground truth that all automated metrics are validated against. Try to cover the full range of quality levels and edge cases your system will encounter.&lt;/p&gt; 
&lt;p&gt;Simultaneously, develop automated evaluation pipelines based on SME feedback. You can use &lt;a href="https://aws.amazon.com/bedrock/evaluations/" target="_blank" rel="noopener"&gt;Amazon Bedrock Evaluations&lt;/a&gt; to run &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/evaluation-judge.html" target="_blank" rel="noopener"&gt;large language model (LLM)-as-judge&lt;/a&gt; assessments with custom metrics. Keep in mind that automated metrics earn trust by agreeing with expert judgment, not the other way around. Measure the correlation between automated and human scores. When they diverge, investigate. Sometimes the automated metric needs refinement. Sometimes it reveals inconsistency in human scoring. Both outcomes improve the system.&lt;/p&gt; 
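&lt;p&gt;Measuring that agreement can be as simple as correlating paired scores on the same outputs. Here is a minimal sketch in standard-library Python with made-up scores; the numbers and the two-point divergence threshold are illustrative, not figures from our engagements:&lt;/p&gt;

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two paired lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# SME ground-truth scores vs. automated judge scores on the same outputs.
sme   = [5, 4, 4, 2, 1, 3, 5, 2]
judge = [5, 4, 3, 2, 4, 3, 4, 2]

r = pearson(sme, judge)  # low agreement blocks trusting the judge at scale

# Items where the judge and the SME disagree by 2+ points get investigated.
divergent = [i for i, (s, j) in enumerate(zip(sme, judge)) if abs(s - j) >= 2]
```

&lt;p&gt;In this toy batch, item 4 (SME score 1, judge score 4) is exactly the kind of disagreement worth a closer look: either the metric needs refinement or the human scoring was inconsistent.&lt;/p&gt;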
&lt;h3&gt;Phase 3. Shift from manual to automated evaluation&lt;/h3&gt; 
&lt;p&gt;As confidence in automated metrics grows, the SME role shifts from scoring every output to validating the scoring system itself. You can now automatically evaluate at production scale with limited human involvement. SMEs can first conduct periodic calibration checks between human and automated scores, then check cases the automated system flags as uncertain.&lt;/p&gt; 
&lt;p&gt;For the education company example, automated metrics might handle 90% of evaluation volume, covering factual accuracy, response format, and reading level. SMEs review a rotating sample of outputs during each sprint plus any outputs where the automated scorer’s confidence falls below a threshold, particularly for scaffolding and pedagogical quality. The SMEs are no longer bottlenecking every release but are calibrating the system that evaluates at scale.&lt;/p&gt; 
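&lt;p&gt;The routing logic in this phase is straightforward to sketch. The threshold and sample rate below are hypothetical values, not figures from a real deployment:&lt;/p&gt;

```python
import random

CONFIDENCE_THRESHOLD = 0.75  # escalate when the automated judge is uncertain
SAMPLE_RATE = 0.05           # rotating calibration sample reviewed each sprint

def needs_sme_review(judge_confidence: float, rng: random.Random) -> bool:
    """Decide whether an output goes to a human SME."""
    if judge_confidence < CONFIDENCE_THRESHOLD:
        return True                    # uncertain output: always escalate
    return rng.random() < SAMPLE_RATE  # confident output: small random sample

rng = random.Random(42)
batch_confidences = [0.95, 0.60, 0.88, 0.71, 0.99]
routed = [needs_sme_review(c, rng) for c in batch_confidences]
# Low-confidence items (0.60, 0.71) are always routed; the rest only
# occasionally, so SMEs calibrate the system instead of gating every release.
```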
&lt;p&gt;The following graphic illustrates how SME effort evolves from one phase to the next. Total effort decreases, but the nature of the work shifts from defining criteria to calibrating automated systems.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/12/sme-effort-shift-hq.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30668 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/12/sme-effort-shift-hq.png" alt="Stacked area chart showing how SME effort shifts from defining rubrics to scoring outputs to calibrating automated systems across the three phases" width="2340" height="1380"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 2: How SME effort evolves across phases&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Phase 4. Review the metrics&lt;/h3&gt; 
&lt;p&gt;As your product evolves, so should your evaluation framework. Phase 4 brings SMEs back to assess whether the rubric, ground truth, and automated metrics still reflect what “good” looks like today.&lt;/p&gt; 
&lt;p&gt;Start by having SMEs review the current rubric against recent model outputs. Are the dimensions still relevant? For the education company, if the tutoring assistant now supports multi-turn conversations, the original rubric might miss how well the assistant builds on prior exchanges.&lt;/p&gt; 
&lt;p&gt;SMEs should also flag where automated scores are diverging from expert judgment. Consistent disagreements from phase 3 calibration checks signal that metrics need updating.&lt;/p&gt; 
&lt;p&gt;After gaps and outdated criteria are identified, cycle back to phase 1 with fresh structured conversations, current outputs, and an updated rubric. Then add to the ground truth, revalidate automated metrics, and return to phase 4. This is the flywheel: teams that treat evaluation as a continuous loop maintain alignment with what “good” means as it changes over time.&lt;/p&gt; 
&lt;h3&gt;Getting started&lt;/h3&gt; 
&lt;p&gt;After working on hundreds of generative AI projects, we’ve consistently seen that teams who build effective feedback loops into their development process can evaluate at scale with confidence, ship faster, and close the prototype-to-production gap that stalls so many generative AI initiatives.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Are you ready to learn more or get started?&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/government-education/contact/" target="_blank" rel="noopener"&gt;Contact your AWS account team or the AWS Public Sector team.&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/generative-ai/innovation-center/" target="_blank" rel="noopener"&gt;Learn more about the AWS Generative AI Innovation Center.&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://builder.aws.com/" target="_blank" rel="noopener"&gt;Join the AWS Builder community.&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Sovereign Intelligence: How AWS Enables Global Health Security Without Compromising Data Privacy</title>
		<link>https://aws.amazon.com/blogs/publicsector/sovereign-intelligence-how-aws-enables-global-health-security-without-compromising-data-privacy/</link>
		
		<dc:creator><![CDATA[Dr. Dawn Heisey-Grove]]></dc:creator>
		<pubDate>Wed, 06 May 2026 00:33:26 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Amazon EC2]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">de1343cb2465f68e4cf80576df99d032f527b2d4</guid>

					<description>In this post, we explore how AWS enables global pathogen surveillance and outbreak intelligence through sovereign-by-design platforms like PathGen, which allow countries to collaborate on infectious disease tracking while maintaining control over their sensitive health data within national borders.</description>
										<content:encoded>&lt;p&gt;&lt;em&gt;*Part 2 of 3: Democratizing Access to Genomic Data and Analytics*&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;When infectious diseases emerge, rapid pathogen identification and tracking saves lives. Yet for decades, this critical work has been hampered by a fundamental tension: the need to share genomic data across borders against the imperative to protect national data sovereignty and patient privacy.&lt;/p&gt; 
&lt;p&gt;Amazon Web Services (AWS) is helping resolve this tension through innovative platforms that enable global collaboration while keeping sensitive data secure. This is the next frontier in democratizing genomic data: making outbreak intelligence accessible to researchers and health practitioners across all countries, regardless of available technical infrastructure or economic resources.&lt;/p&gt; 
&lt;p&gt;In this post, we explore how AWS enables global pathogen surveillance and outbreak intelligence through sovereign-by-design platforms like PathGen, which allow countries to collaborate on infectious disease tracking while maintaining control over their sensitive health data within national borders.&lt;/p&gt; 
&lt;h3&gt;The Global Challenge: Outbreak Detection in a Connected World&lt;/h3&gt; 
&lt;p&gt;The COVID-19 pandemic starkly illustrated the power and limitations of global genomic surveillance. While rapid sequencing and data sharing enabled unprecedented scientific collaboration, many countries—particularly in low- and middle-income regions—lacked the infrastructure to participate fully. Others hesitated to share data due to sovereignty concerns or fear of travel restrictions.&lt;/p&gt; 
&lt;p&gt;The challenge is clear: how do we enable real-time global outbreak intelligence while respecting each nation’s right to control its own health data?&lt;/p&gt; 
&lt;h3&gt;PathGen: AI-Powered Outbreak Intelligence with Data Sovereignty&lt;/h3&gt; 
&lt;p&gt;The &lt;a href="https://www.duke-nus.edu.sg/cop/asia-pathogen-genomics-initiative" target="_blank" rel="noopener"&gt;Asia Pathogen Genomics Initiative&lt;/a&gt; unveiled &lt;a href="https://pathgen.ai/" target="_blank" rel="noopener"&gt;PathGen&lt;/a&gt; in December 2025—an AI-enabled integrated surveillance platform designed to support public health decision-making across 14 Asian countries. PathGen represents a breakthrough in sovereign-by-design health technology, combining pathogen surveillance data with contextual information needed to provide countries with timely, secure, and actionable decision support. Critically, all analysis occurs in-country, without raw data leaving national borders.&lt;/p&gt; 
&lt;h3&gt;Technical Architecture: Sovereignty Meets Collaboration&lt;/h3&gt; 
&lt;p&gt;PathGen’s architecture leverages &lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt; for secure access to large language models for AI-generated summaries and insights, ensuring sensitive health data remains encrypted and under the control of each country’s health ministry; and &lt;a href="https://aws.amazon.com/ec2/" target="_blank" rel="noopener"&gt;Amazon Elastic Compute Cloud&lt;/a&gt; (Amazon EC2) with GPU instances (&lt;a href="https://aws.amazon.com/ec2/instance-types/p5/" target="_blank" rel="noopener"&gt;P5&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ec2/instance-types/g5/" target="_blank" rel="noopener"&gt;G5&lt;/a&gt;) and &lt;a href="https://www.aboutamazon.com/news/aws/graviton4-aws-cloud-computing-chip" target="_blank" rel="noopener"&gt;Graviton 4 chips&lt;/a&gt; for real-time genomic analysis and AI inference.&lt;/p&gt;
&lt;blockquote&gt;
 &lt;p&gt;“This isn’t just about technology, it’s about trust,” said Professor Paul Pronyk, Director of the Duke-NUS &lt;a href="https://www.duke-nus.edu.sg/cop" target="_blank" rel="noopener"&gt;Centre for Outbreak Preparedness&lt;/a&gt;. “When countries know their raw genomic data never leaves their borders, they’re more willing to participate in the collaborative surveillance that keeps us all safe. PathGen proves that data sovereignty and global health security aren’t competing values. They’re complementary goals that, with the right architecture, can strengthen each other.”&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;h3&gt;Technical Innovation: Making Genomic Surveillance Accessible&lt;/h3&gt; 
&lt;p&gt;The success of platforms like PathGen depends on democratizing access to key biomedical data resources, and making complex genomic analysis infrastructure and solutions accessible to public health officials who may not be bioinformatics experts.&lt;/p&gt; 
&lt;p&gt;The Wisconsin State Laboratory of Hygiene (WSLH) leveraged AWS resources to develop two solutions that address this problem:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Open-Source Solutions for Public Health Labs.&lt;/strong&gt; &lt;a href="https://www.easygenomics.org/" target="_blank" rel="noopener"&gt;Easy Genomics&lt;/a&gt; is an open-source solution &lt;a href="https://aws.amazon.com/blogs/industries/easy-genomics-solution-for-public-health-labs/" target="_blank" rel="noopener"&gt;designed specifically for public health laboratories&lt;/a&gt;, developed for WSLH using AWS Partner support. By providing pre-configured pipelines that run on AWS, the platform enables labs with limited bioinformatics expertise to conduct sophisticated analyses.&lt;/p&gt; 
&lt;p&gt;This is particularly important in resource-limited settings where hiring bioinformaticians may not be feasible. With Easy Genomics, someone without bioinformatics expertise can upload raw sequencing data and receive actionable results without needing to understand the underlying computational complexity.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Generative AI for Data Standardization.&lt;/strong&gt; One of the biggest challenges in genomic surveillance is data standardization—different labs use different protocols, instruments, and file formats. WSLH wanted to leverage generative AI to accelerate that process. Students working with the &lt;a href="https://dxhub.calpoly.edu/" target="_blank" rel="noopener"&gt;Digital Transformation Hub&lt;/a&gt; (DxHub) at &lt;a href="https://www.calpoly.edu/" target="_blank" rel="noopener"&gt;California Polytechnic State University&lt;/a&gt; (Cal Poly) built the open source &lt;a href="https://aws.amazon.com/blogs/publicsector/leveraging-generative-ai-to-accelerate-public-health-genomics-data-standardization/" target="_blank" rel="noopener"&gt;AI Genomic Schema Harmonizer&lt;/a&gt; in partnership with &lt;a href="https://aws.amazon.com/government-education/cloud-innovation-centers/" target="_blank" rel="noopener"&gt;AWS Cloud Innovation Centers&lt;/a&gt; (CIC).&lt;/p&gt; 
&lt;p&gt;By automatically detecting and correcting format inconsistencies, extracting relevant metadata, and flagging potential quality issues, the Harmonizer reduces the technical barriers to participation in global surveillance networks.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“The success of democratizing pathogen genomics depends on making complex data analysis and workflows accessible to public health officials who may not have a scientific computing background,” said Dr. Kelsey Florek of WSLH. “It’s about removing technical barriers so that every public health laboratory, regardless of resources, can participate in protecting their communities.”&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;h3&gt;Technology in Service of Humanity&lt;/h3&gt; 
&lt;p&gt;The democratization of genomic data and analytics represents one of the most significant advances in improving health outcomes worldwide in recent decades. By making powerful computational tools accessible to researchers and public health officials worldwide, regardless of their location or resources, AWS helps level the playing field in the fight against disease.&lt;/p&gt; 
&lt;p&gt;At AWS, we believe our cloud and AI services are powerful tools to address the world’s urgent and complex challenges in health. Through continued innovation, strategic partnerships, and unwavering commitment to our customers’ missions, we’re working to build a future where genomic insights benefit all of humanity—not just the privileged few.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Read the first blog in the series, &lt;a href="//aws.amazon.com/blogs/publicsector/breaking-down-barriers-how-aws-democratizes-genomic-data-for-the-world"&gt;Breaking Down Barriers: How AWS Democratizes Genomic Data for the World&lt;/a&gt;, which highlights how AWS services and support are empowering researchers to move faster, at scale, to achieve groundbreaking discoveries and transform healthcare delivery for everyone.&lt;/li&gt; 
 &lt;li&gt;Follow along for the third blog in the series, which highlights how biobanks are reshaping medical research.&lt;/li&gt; 
 &lt;li&gt;Learn more about &lt;a href="https://aws.amazon.com/about-aws/our-impact/" target="_blank" rel="noopener"&gt;AWS Skilling and Social Impact&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;To learn more about AWS for Healthcare and Life Sciences, visit &lt;a href="http://aws.amazon.com/health/" target="_blank" rel="noopener"&gt;aws.amazon.com/health&lt;/a&gt;&lt;/li&gt; 
&lt;li&gt;To explore how AWS can support your organization’s mission, contact your AWS account team or visit &lt;a href="https://aws.amazon.com/contact-us/" target="_blank" rel="noopener"&gt;aws.amazon.com/contact-us&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>How Kofile modernizes county records with AI on AWS</title>
		<link>https://aws.amazon.com/blogs/publicsector/how-kofile-modernizes-county-records-with-ai-on-aws/</link>
		
		<dc:creator><![CDATA[Lekan Ojo]]></dc:creator>
		<pubDate>Tue, 05 May 2026 21:10:23 +0000</pubDate>
				<category><![CDATA[Education]]></category>
		<category><![CDATA[Nonprofit]]></category>
		<category><![CDATA[Public Safety]]></category>
		<category><![CDATA[Public Sector]]></category>
		<guid isPermaLink="false">aec01dfafc105d43a038c0f45cc055ebcd87af6b</guid>

					<description>In this blog post, learn how Kofile Technologies has transformed public records management for more than 3,000 county governments using AI-powered document intelligence built on Amazon Web Services (AWS).</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30915 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/30/How-Kofile-modernizes-county-records-with-AI-on-AWS.png" alt="How Kofile modernizes county records with AI on AWS" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;County governments across the United States manage millions of public records—property deeds, marriage certificates, court documents, and business licenses—that serve citizens, title companies, legal professionals, and researchers daily. Yet these records often exist in fragmented formats spanning paper documents, microfiche, PDFs, and various digital files, making search and retrieval time consuming and labor intensive for both government staff and the public.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://kofile.com/" target="_blank" rel="noopener"&gt;Kofile Technologies&lt;/a&gt; has transformed public records management for more than 3,000 county governments using &lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt; powered document intelligence built on &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;. Their &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; platform reduces citizen search times from hours to seconds by processing millions of historical documents with automated classification and intelligent search capabilities so that users can summarize, translate, and engage in conversational, multilingual interactions that make information more accessible than ever before. At the same time, they maintain the stringent security and compliance standards government organizations require.&lt;/p&gt; 
&lt;h3&gt;Millions of records, limited access&lt;/h3&gt; 
&lt;p&gt;County governments serve as custodians of vast collections of public records, often spanning decades or even centuries. These records exist in many formats—paper, microfilm, digital scans, maps, and plats—and are stored across multiple systems with limited interoperability.&lt;/p&gt; 
&lt;p&gt;For county clerks, recorders, and their staff, the daily reality involves manually searching through indexes, cross-referencing physical files, and responding to public records requests with processes that haven’t fundamentally changed in decades. Citizens seeking property records, attorneys researching title histories, and government staff conducting audits all face the same bottleneck: finding the right document takes too long.&lt;/p&gt; 
&lt;p&gt;According to Kofile leadership, county clerks were overwhelmed by paper-based processes, while citizens experienced frustration with weeks-long wait times for simple records requests. The company recognized that AI could transform this experience, but the solution had to be secure, scalable, and built specifically for government requirements.&lt;/p&gt; 
&lt;p&gt;Kofile recognized that the challenge wasn’t just digitization—counties had been scanning documents for years. The real gap was intelligence. How do you take millions of digitized pages and make them truly useful? How do you allow a county clerk to find a specific deed from 1987 using a natural language question instead of navigating a complex index system?&lt;/p&gt; 
&lt;p&gt;The manual nature of traditional records management creates bottlenecks that affect entire communities. Property transactions stall waiting for deed searches. Legal proceedings are delayed while attorneys request court documents. Economic development slows when businesses can’t quickly access the records they need. County IT teams struggle to maintain aging infrastructure while managing security and compliance requirements.&lt;/p&gt; 
&lt;h3&gt;Document intelligence for civic assets&lt;/h3&gt; 
&lt;p&gt;Kofile’s vision for &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; went beyond building a better search engine. To truly modernize public records management, the AWS team helped them address six critical challenges for their county customers:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Multimodal document ingestion&lt;/strong&gt; to process everything from freshly scanned pages to legacy microfilm conversions at scale&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Automated metadata extraction&lt;/strong&gt; and classification using AI to reduce the manual effort required to catalog records by identifying document types, dates, names, parcel numbers, and other key information&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Semantic search capabilities&lt;/strong&gt; so that county staff and citizens could find records by describing what they need in plain English, rather than knowing exact index terms&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Document translation&lt;/strong&gt; so counties serving multilingual communities can interact with records in more than 90 languages&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Analytics dashboards&lt;/strong&gt; for county administrators to understand usage patterns and operational metrics&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Enterprise-grade security&lt;/strong&gt; with encryption, role-based access controls, full audit trails, and multi-tenant isolation—requirements that are nonnegotiable for government records&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Collaborating with AWS to accelerate innovation&lt;/h3&gt; 
&lt;p&gt;Kofile’s work with AWS has been instrumental in bringing &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; to market quickly while maintaining the security and compliance standards government organizations require. The breadth of &lt;a href="https://aws.amazon.com/ai/generative-ai/" target="_blank" rel="noopener"&gt;generative AI&lt;/a&gt;, &lt;a href="https://aws.amazon.com/what-is/artificial-intelligence/" target="_blank" rel="noopener"&gt;AI&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/ai/machine-learning/" target="_blank" rel="noopener"&gt;machine learning (ML)&lt;/a&gt; services offered by AWS means that Kofile can focus on solving government-specific challenges rather than building infrastructure from scratch.&lt;/p&gt; 
&lt;p&gt;According to Kofile’s technical leadership, purpose-built AI services from AWS provided enterprise-grade capabilities out of the box, dramatically accelerating their time to market. This allowed the team to focus on the unique needs of county governments rather than reinventing document processing and search infrastructure.&lt;/p&gt; 
&lt;p&gt;This collaboration also provides Kofile’s government customers with confidence in the platform’s long-term viability and continuous innovation, backed by the AWS commitment to the public sector.&lt;/p&gt; 
&lt;h3&gt;Building on a platform designed for government on AWS&lt;/h3&gt; 
&lt;p&gt;Kofile architected &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; as a cloud-centered, multi-tenant &lt;a href="https://aws.amazon.com/what-is/saas/" target="_blank" rel="noopener"&gt;software-as-a-service (SaaS)&lt;/a&gt; platform using Amazon Bedrock to deliver comprehensive document intelligence capabilities. Building on AWS gave Kofile the foundation to deliver these capabilities at scale while meeting the stringent security and compliance requirements of government customers.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; uses a range of AWS services to deliver its document intelligence capabilities. The platform is architected as a multi-tenant SaaS solution, with each county’s data isolated and secured.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Document ingestion and storage&lt;br&gt; &lt;/strong&gt;&lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; ingests documents from multiple sources—scanners, existing digital repositories, and bulk uploads—storing them securely in durable, scalable storage that makes them immediately available for the automated processing pipeline. The platform handles high-volume batch uploads—critical for counties that might be onboarding decades of historical records—as well as ongoing day-to-day document additions.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;AI-powered processing&lt;br&gt; &lt;/strong&gt;After they’re ingested, documents flow through an automated AI processing pipeline that extracts intelligence and structure from unstructured content. The platform automatically extracts metadata and entities—names, dates, parcel numbers, document types—reducing the manual indexing burden on county staff.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; then classifies documents by type—property deeds, mortgage documents, liens, marriage certificates, death certificates, and dozens of other categories. This classification happens in seconds, compared to the manual review that previously took minutes per document. This automated classification proves particularly valuable for large-scale digitization projects where manually categorizing hundreds of thousands of documents would be impractical.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Intelligent search&lt;br&gt; &lt;/strong&gt;&lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt;’s search capabilities go beyond traditional keyword matching. The platform vectorizes and indexes documents, providing semantic search that understands the intent behind a query. A county clerk searching for “property transfers in the downtown district last year” receives relevant results even when documents use different terminology like “conveyance,” “deed,” or “real estate transaction.”&lt;/p&gt; 
&lt;p&gt;This semantic search capability transforms the user experience, making decades of records accessible through intuitive, conversational queries rather than requiring specialized knowledge of document terminology and filing systems.&lt;/p&gt; 
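&lt;p&gt;Under the hood, semantic search of this kind typically rests on vector embeddings: queries and documents are mapped into a shared vector space and ranked by similarity rather than exact keyword overlap. The following minimal sketch illustrates only the ranking idea; the synonym table stands in for a learned embedding model, and every name in it is an illustrative assumption rather than Kleio’s implementation.&lt;/p&gt;

```python
import math
from collections import Counter

# Toy synonym table standing in for a learned embedding model: a real semantic
# search system would embed text with a trained model, not a hand-written map.
SYNONYMS = {
    "conveyance": "transfer",
    "deed": "transfer",
    "transfers": "transfer",
    "property": "realty",
    "estate": "realty",
}

def embed(text):
    """Map text to a sparse term-count vector, collapsing synonyms."""
    tokens = [SYNONYMS.get(t, t) for t in text.lower().replace(",", "").split()]
    return Counter(tokens)

def cosine(a, b):
    """Cosine similarity between two sparse vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "deed of conveyance for downtown parcel",
    "marriage certificate application form",
]
query = "property transfers last year"
# Rank documents by similarity to the query, best match first.
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
```

&lt;p&gt;Even though the query never uses the words “deed” or “conveyance,” the deed document ranks first because its terms map to the same underlying concept as “transfers.”&lt;/p&gt;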
&lt;p&gt;&lt;strong&gt;Document translation and export&lt;br&gt; &lt;/strong&gt;For counties serving diverse communities, &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; offers optional document translation capabilities. Users can select documents, translate them into a target language, and export bundled document packages—streamlining workflows for public records requests. The platform also supports flexible export options, allowing users to download documents in various formats while maintaining proper formatting and metadata.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Analytics and operational dashboards&lt;br&gt; &lt;/strong&gt;County administrators gain visibility into records operations through analytics dashboards. The platform provides real-time monitoring of system performance, processing volumes, and user activity, supporting proactive management and capacity planning. These dashboards reveal patterns in records requests, processing bottlenecks, and usage trends, helping counties optimize their operations and allocate resources effectively.&lt;/p&gt; 
&lt;h3&gt;Security and compliance for government&lt;/h3&gt; 
&lt;p&gt;Kofile architected &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; to meet the stringent security and compliance requirements of government organizations, taking advantage of the comprehensive security services and infrastructure of AWS:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Data protection&lt;/strong&gt; – &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; encrypts all data at rest using customer managed keys, giving counties control over their encryption keys. TLS 1.2 or higher protects data in transit. Data residency controls verify that records remain in specified &lt;a href="https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region" target="_blank" rel="noopener"&gt;AWS Regions&lt;/a&gt;, meeting state and local data sovereignty requirements.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Access control and identity&lt;/strong&gt; – Role-based access control (RBAC) integrated with &lt;a href="https://aws.amazon.com/identity/" target="_blank" rel="noopener"&gt;AWS identity services&lt;/a&gt; verifies that users access only the records and functions appropriate to their roles. Federation with government identity providers and &lt;a href="https://aws.amazon.com/what-is/mfa/" target="_blank" rel="noopener"&gt;multi-factor authentication (MFA)&lt;/a&gt; enforcement provide an additional security layer.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Audit and compliance&lt;/strong&gt; – &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; captures comprehensive audit trails that log every access and action, providing the detailed audit history government organizations require. The platform’s architecture aligns with the &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html" target="_blank" rel="noopener"&gt;AWS Well-Architected Framework&lt;/a&gt;, particularly the Security and Reliability pillars, maintaining best practices in cloud architecture.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Resilience and recovery&lt;/strong&gt; – Multi-AZ deployment provides high availability, and automated backups with cross-Region replication offer disaster recovery capabilities. This architecture delivers the resilience government organizations need to maintain continuous access to critical public records. The platform positions counties for compliance with frameworks such as &lt;a href="https://aws.amazon.com/compliance/govramp/" target="_blank" rel="noopener"&gt;Government Risk and Authorization Management Program (GovRAMP)&lt;/a&gt;, depending on their specific requirements and the records they manage.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Infrastructure as code (IaC)&lt;/strong&gt; – &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt;’s infrastructure is defined using IaC tools, supporting consistent, repeatable deployments with built-in security controls. &lt;a href="https://aws.amazon.com/what-is/ci-cd/" target="_blank" rel="noopener"&gt;Continuous integration and continuous deployment (CI/CD)&lt;/a&gt; pipelines verify that updates are tested and deployed systematically, minimizing risk while facilitating rapid innovation.&lt;/li&gt; 
&lt;/ul&gt; 
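&lt;p&gt;The audit-trail requirement above can be made concrete with a small sketch of what one append-only log record might look like: who did what, to which document, in which tenant, and when. The schema, field names, and helper function are illustrative assumptions, not Kleio’s actual logging format.&lt;/p&gt;

```python
import json
from datetime import datetime, timezone

def audit_event(tenant_id, user_id, action, document_id):
    """Build one audit-log record (illustrative schema)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tenant_id": tenant_id,    # multi-tenant isolation: every record is scoped to one county
        "user_id": user_id,
        "action": action,          # e.g. "view", "download", "export"
        "document_id": document_id,
    }

# Serialize for an append-only store (for example, object storage or a log service).
record = audit_event("county-042", "clerk-7", "view", "deed-1987-00123")
line = json.dumps(record, sort_keys=True)
```

&lt;p&gt;Because every access produces such a record, auditors can later reconstruct the complete history of any document without relying on user recollection.&lt;/p&gt;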
&lt;h3&gt;Real-world impact on government services&lt;/h3&gt; 
&lt;p&gt;The transformation &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; delivers extends across multiple stakeholder groups, including county clerks and staff, citizens and title companies, county administrators, and IT teams, fundamentally changing how counties manage and provide access to public records.&lt;/p&gt; 
&lt;p&gt;What previously took county clerks and staff hours of manual searching through filing cabinets and microfiche readers now takes seconds through intelligent search. Staff can handle significantly higher volumes of records requests without increasing headcount. Automated classification and metadata extraction eliminate tedious data entry, allowing clerks to focus on higher-value citizen services. The reduction in manual processing time translates directly to cost savings and improved employee satisfaction.&lt;/p&gt; 
&lt;p&gt;Public records that previously required in-person visits by citizens and title companies or multi-day wait times are now accessible online with immediate results. Title companies can complete property searches in minutes rather than hours, accelerating real estate transactions and economic activity. Citizens can access marriage certificates, property records, and other documents without taking time off work to visit county offices. This improved accessibility increases transparency and strengthens trust in government.&lt;/p&gt; 
&lt;p&gt;Real-time dashboards provide visibility for county administrators into operations that were previously opaque. Administrators can identify processing bottlenecks, track service levels, and make data-driven decisions about resource allocation. The analytics capabilities reveal patterns in records requests, helping counties anticipate demand and plan capacity. Quantifiable metrics on processing times and citizen satisfaction support budget justifications and demonstrate the value of modernization investments.&lt;/p&gt; 
&lt;p&gt;The cloud-based architecture eliminates the burden on IT teams of maintaining on-premises infrastructure, including servers, storage systems, and backup solutions. Automatic scaling handles peak demand without manual intervention. The CI/CD pipeline manages security updates and patches systematically. Multi-tenant isolation verifies that each county’s data remains secure and separate, and centralized management reduces operational complexity.&lt;/p&gt; 
&lt;h3&gt;Looking ahead&lt;/h3&gt; 
&lt;p&gt;Kofile’s investment in AI-powered document intelligence represents a broader shift in how government organizations think about their records. Public records aren’t merely archives to be preserved—they’re civic assets that, when made accessible and intelligent, can drive better outcomes for communities.&lt;/p&gt; 
&lt;p&gt;As AI capabilities continue to advance, the potential applications expand: automated redaction for sensitive information, proactive identification of records that need preservation attention, and deeper integration with county workflows and systems.&lt;/p&gt; 
&lt;p&gt;For counties looking to modernize their records management, one proven path forward combines decades of domain expertise in government records with the scale, security, and AI capabilities of the cloud.&lt;/p&gt; 
&lt;p&gt;To learn more about Kofile Technologies and the &lt;a href="https://kofile.com/kleio/" target="_blank" rel="noopener"&gt;Kleio&lt;/a&gt;&lt;sup&gt;SM&lt;/sup&gt; platform, visit &lt;a href="http://kofile.com/" target="_blank" rel="noopener"&gt;kofile.com&lt;/a&gt;. To explore how AWS supports public sector organizations, visit &lt;a href="http://aws.amazon.com/government-education" target="_blank" rel="noopener"&gt;AWS in the Public Sector&lt;/a&gt;.&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Accelerating federal document processing using Document AI from DMI</title>
		<link>https://aws.amazon.com/blogs/publicsector/accelerating-federal-document-processing-using-document-ai-from-dmi/</link>
		
		<dc:creator><![CDATA[Channa Basavaraja]]></dc:creator>
		<pubDate>Mon, 04 May 2026 23:47:01 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Amazon Bedrock Guardrails]]></category>
		<category><![CDATA[Amazon Bedrock Knowledge Bases]]></category>
		<category><![CDATA[Amazon Nova]]></category>
		<category><![CDATA[AWS CloudFormation]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">4e1fd32f0671b2e6c4953e76039cdb891e2b0d19</guid>

					<description>If your agency is managing millions of unstructured documents, from eligibility records to case files, this post introduces how DMI Document AI, built on Amazon Bedrock, can help you automate document processing, reduce backlogs, and redirect your team toward higher-value mission work.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30684 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/13/Accelerating-federal-document-processing-using-Document-AI-from-DMI.jpg" alt="Accelerating federal document processing using Document AI from DMI" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;If your agency is managing millions of unstructured documents, from eligibility records to case files, this post introduces how DMI &lt;a href="https://dminc.com/services/document-ai/" target="_blank" rel="noopener"&gt;Document AI&lt;/a&gt;, built on &lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt;, can help you automate document processing, reduce backlogs, and redirect your team toward higher-value mission work.&lt;/p&gt; 
&lt;h3&gt;Common digitization challenges across federal agencies and beyond&lt;/h3&gt; 
&lt;p&gt;Your agency might be navigating a massive influx of unstructured, high-sensitivity documents every day. These could include regulatory records, eligibility evidence, applications, and supporting documentation such as birth and educational certificates, citizen forms, adjudication packets, and case files.&lt;/p&gt; 
&lt;p&gt;Many agencies are dealing with backlogs of millions of cases, alongside annual intake volumes that often exceed tens of millions of forms. This creates an urgent need to automate data extraction and validation to speed up approvals, compliance, and mission-critical workflows.&lt;/p&gt; 
&lt;p&gt;Agencies are noticing encouraging early outcomes from &lt;a href="https://aws.amazon.com/ai/generative-ai/use-cases/document-processing/" target="_blank" rel="noopener"&gt;intelligent document processing (IDP)&lt;/a&gt; pilots and partial automation. By integrating workflow automation with &lt;a href="https://aws.amazon.com/what-is/ocr/" target="_blank" rel="noopener"&gt;optical character recognition (OCR)&lt;/a&gt; or intelligent character recognition (ICR), some have achieved impressive milestones, such as 50% faster cycle times, based on DMI field experience. However, these pilots also reveal the challenges of scaling enterprise production across various document types, noisy documents, and unstandardized formats.&lt;/p&gt; 
&lt;p&gt;Many agencies hold strategic mandates to digitize hundreds of millions of pages under federal digitization requirements.&lt;/p&gt; 
&lt;p&gt;The challenges are essentially of scale and complexity:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Billions of paper documents&lt;/li&gt; 
 &lt;li&gt;Scanned legacy forms with inconsistent layouts&lt;/li&gt; 
 &lt;li&gt;Aged or degraded paper records, such as legacy handwriting notes, signatures, and historical service records&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;These document collections require capabilities well beyond traditional OCR. Agencies increasingly need advanced generative AI–driven IDP solutions such as Document AI to:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Interpret handwritten and inconsistent text&lt;/li&gt; 
 &lt;li&gt;Classify documents automatically (document understanding)&lt;/li&gt; 
 &lt;li&gt;Extract key entities such as names, dates, addresses, occupations, and identifiers&lt;/li&gt; 
 &lt;li&gt;Normalize and validate extracted data against business rules&lt;/li&gt; 
 &lt;li&gt;Make extracted artifacts searchable, discoverable, and usable for downstream mission operations&lt;/li&gt; 
&lt;/ul&gt; 
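&lt;p&gt;As a concrete illustration of the extraction, normalization, and validation capabilities listed above, the sketch below pulls dates and identifier-like tokens from free text, normalizes dates to ISO 8601, and checks a simple business rule. The regex patterns, the identifier format, and the rule itself are illustrative assumptions; a production IDP pipeline would use LLM-driven extraction rather than hand-written regexes.&lt;/p&gt;

```python
import re
from datetime import date

DATE_RE = re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b")  # e.g. 3/14/1987 (US month/day/year)
ID_RE = re.compile(r"\b[A-Z]{2}-\d{6}\b")                 # hypothetical case-ID format, e.g. AB-123456

def extract_entities(text):
    """Extract dates (normalized to ISO 8601) and identifiers from free text."""
    dates = [date(int(y), int(m), int(d)).isoformat() for m, d, y in DATE_RE.findall(text)]
    ids = ID_RE.findall(text)
    return {"dates": dates, "identifiers": ids}

def validate(entities, today=date(2026, 5, 4)):
    """Business rule: recorded dates may not be in the future."""
    return all(date.fromisoformat(d) <= today for d in entities["dates"])

entities = extract_entities("Recorded 3/14/1987 under case AB-123456.")
```

&lt;p&gt;Normalizing to a single canonical date format is what makes the extracted data usable downstream: validation rules and search indexes only need to handle one representation.&lt;/p&gt;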
&lt;p&gt;Agencies are seeking secure, scalable, and high-accuracy modern IDP solutions that can modernize intake-heavy workflows while maintaining stringent standards for privacy, auditability, and data integrity. To meet these rigorous demands, Document AI uses &lt;a href="https://aws.amazon.com/compliance/fedramp/" target="_blank" rel="noopener"&gt;Federal Risk and Authorization Management Program (FedRAMP)&lt;/a&gt; accredited &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt; services, providing a foundation of robust encryption, data residency, and comprehensive audit logging. By building on these pre-authorized components, agencies can support secure, responsible adoption and significantly fast-track FedRAMP certification for their document solutions while maintaining data protection within their AWS environment.&lt;/p&gt; 
&lt;h3&gt;Introducing DMI Document AI solution powered by Amazon Bedrock&lt;/h3&gt; 
&lt;p&gt;DMI Document AI is a robust solution for IDP. Organizations can use it to digitize, comprehend, and operationalize documents. The solution delivers precise document capture and data extraction, paired with AI-driven document type classification, analytics, workflow automation, and automated document generation. In essence, it converts static documents into searchable, actionable business intelligence. It includes three integrated modules: DocuChew, DocuRun, and DocuGen:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;DocuChew&lt;/strong&gt; – Moving beyond traditional rule-based OCR, DocuChew takes a generative AI–enabled approach to document classification and metadata extraction, using multimodal &lt;a href="https://aws.amazon.com/what-is/large-language-model/" target="_blank" rel="noopener"&gt;large language models&lt;/a&gt; (LLMs) to replace rigid, rule-based workflows. The solution currently supports English-language processing for common document formats, including Word, PDF, images, and Excel. Although traditional OCR systems often struggle with context, handwriting, and layout changes, DocuChew maintains extraction accuracy of over 90 percent, &lt;strong&gt;based on DMI’s benchmarking across several hundred pages of data&lt;/strong&gt;. This provides agencies with a more flexible and sustainable foundation for managing new and evolving document types without the reactive cycle of manual workarounds required by legacy systems.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DocuRun&lt;/strong&gt; – Uses Amazon Bedrock &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/bda.html" target="_blank" rel="noopener"&gt;Data Automation&lt;/a&gt; to transform how organizations handle unstructured multimodal content using pre-built smart document templates known as&amp;nbsp;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/bda-blueprint-info.html" target="_blank" rel="noopener"&gt;blueprints&lt;/a&gt;. These blueprints apply natural language context to intelligently identify, normalize, and extract data, decoupling extraction logic from underlying code for rapid scaling. To support data integrity, DocuRun incorporates robust validation and error handling through human-in-the-loop workflows. By using &lt;a href="https://aws.amazon.com/augmented-ai/" target="_blank" rel="noopener"&gt;Amazon Augmented AI (A2I)&lt;/a&gt;, the system allows for manual reviews and corrections, providing high accuracy and compliance.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DocuGen&lt;/strong&gt; – Is an advanced &lt;a href="https://aws.amazon.com/what-is/retrieval-augmented-generation/" target="_blank" rel="noopener"&gt;Retrieval Augmented Generation (RAG)&lt;/a&gt; system (patent pending) specifically designed for long document generation (LDG). Traditional RAG pipelines often act as &lt;a href="https://en.wikipedia.org/wiki/Black_box" target="_blank" rel="noopener"&gt;black boxes&lt;/a&gt; that collate all available source data, which can lead to wasted LLM processing tokens and outputs that ignore user priorities. DMI’s DocuGen solves this by: 
  &lt;ul&gt; 
   &lt;li&gt;&lt;strong&gt;Prioritizing relevance&lt;/strong&gt; – Uses a weighted mechanism to make the generated content, such as policies and standard operating procedures (SOPs), align with the specific importance of sources in the document.&lt;/li&gt; 
   &lt;li&gt;&lt;strong&gt;Massive efficiency gains&lt;/strong&gt; – The module can autonomously produce a Word document of more than 200 pages in approximately 45 minutes, a task that typically takes days to complete. This solution securely stores generated documents in &lt;a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener"&gt;Amazon Simple Storage Service (Amazon S3)&lt;/a&gt; for further processing.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The following graphic illustrates these three modules.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/13/Document-AI-modules.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30681 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/13/Document-AI-modules.png" alt="Document AI modules" width="1430" height="780"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 1: Document AI modules&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Solution architecture and capabilities&lt;/h3&gt; 
&lt;p&gt;To deliver a secure, scalable, and fully customizable solution, DMI Document AI integrates with core AWS services, using Amazon Bedrock for leading LLMs, including the &lt;a href="https://nova.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Nova&lt;/a&gt; family of models, &lt;a href="https://aws.amazon.com/bedrock/knowledge-bases/" target="_blank" rel="noopener"&gt;Amazon Bedrock Knowledge Bases&lt;/a&gt; for durable vectorization and embeddings, and Amazon Bedrock Data Automation for intelligent document extraction and smart templates. The solution further supports mission integrity through AWS tenant security isolation, &lt;a href="https://aws.amazon.com/bedrock/guardrails/" target="_blank" rel="noopener"&gt;Amazon Bedrock Guardrails&lt;/a&gt;, and personally identifiable information (PII) protection, while using A2I for human-in-the-loop workflows and &lt;a href="https://aws.amazon.com/cloudformation/" target="_blank" rel="noopener"&gt;AWS CloudFormation&lt;/a&gt; for rapid, standardized deployment using &lt;a href="https://aws.amazon.com/what-is/iac/" target="_blank" rel="noopener"&gt;infrastructure as code (IaC)&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;The following diagram illustrates the solution architecture.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/15/Document_AI_Arch-v2.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30706 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/15/Document_AI_Arch-v2.png" alt="Diagram of Document AI solution architecture. User ingests source documents via web portal, which routes through custom event-based processing workflow with configurable guardrails and validations to generate business ready insights at scale." width="1024" height="791"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 2: Document AI architecture&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;The solution is deployed through IaC using AWS CloudFormation templates, enabling quicker deployments. Document AI is deployed within the customer’s AWS account, allowing customers to maintain control over their data and avoid vendor lock-in or black-box architectures.&lt;/p&gt; 
&lt;h3&gt;Key solution capabilities&lt;/h3&gt; 
&lt;p&gt;The DMI Document AI solution is a modular, AWS-based solution designed to modernize the entire document lifecycle through three core modules: DocuChew (ingestion and extraction), DocuRun (classification and workflow), and DocuGen (advanced document generation). The solution’s end-to-end workflow follows a multistep process that transforms raw data into actionable intelligence and business assets. Here are the steps:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;OCR-free ingestion&lt;/strong&gt; – Documents are ingested through automated data pipelines using generative AI, avoiding the limitations of rigid, rule-based OCR.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Multimodal analysis&lt;/strong&gt; – The system handles multipage processing for complex, unstructured data, including PDFs, images, and handwritten records.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Intelligent classification&lt;/strong&gt; – DocuChew uses multimodal LLMs to automatically identify document types and extract metadata.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Smart template application&lt;/strong&gt; – DocuRun uses Amazon Bedrock Data Automation to apply blueprints that use natural language context to identify content structures.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Contextual extraction&lt;/strong&gt; – Data is extracted and normalized from various file types, including tables and images, without custom-coded logic.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Entity recognition&lt;/strong&gt; – Key entities such as names, dates, and identifiers are extracted and validated against business rules.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Durable vectorization&lt;/strong&gt; – Extracted artifacts are vectorized using Amazon Bedrock Knowledge Bases backed by OpenSearch Serverless, creating a long-term memory for LLM-aware processing.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Search and discovery&lt;/strong&gt; – Documents are transformed into searchable artifacts for downstream mission operations.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;User priority definition&lt;/strong&gt; – For new long document generation, users begin by assigning weights to identify the relative importance of each data source.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Weight validation&lt;/strong&gt; – The system performs a check to validate that user-defined priority weights sum to 100 percent.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Strategic source selection&lt;/strong&gt; – Based on weights, the system selects content from diverse sources such as agency security docs, past audits, and SOPs.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Package aggregation&lt;/strong&gt; – Relevant content is combined into a single working package.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Constraint management&lt;/strong&gt; – The system automatically confirms the package stays within allowed size and token limits.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Autonomous generation&lt;/strong&gt; – The package is sent to high-performance models to generate targeted, multipage documents.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Human-in-the-loop review&lt;/strong&gt; – The output is reviewed by business users through A2I to support accuracy and compliance.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Iterative refinement&lt;/strong&gt; – User feedback is fed back into the system to refine future results, creating a repeatable loop for continuous improvement.&lt;/li&gt; 
&lt;/ol&gt; 
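&lt;p&gt;The weight-validation, source-selection, and constraint-management steps (10 through 13) amount to a small piece of orchestration logic: check that the weights sum to 100 percent, split a token budget across sources in proportion to those weights, and confirm the assembled package fits the model’s context limit. The function names and budget figures below are illustrative assumptions, not DMI’s implementation.&lt;/p&gt;

```python
def validate_weights(weights):
    """Step 10: user-defined priority weights must sum to 100 percent."""
    total = sum(weights.values())
    if abs(total - 100) > 1e-9:
        raise ValueError(f"weights sum to {total}, expected 100")

def allocate_budget(weights, token_budget):
    """Steps 11-12: split the token budget across sources by weight."""
    return {src: int(token_budget * w / 100) for src, w in weights.items()}

def within_limits(allocation, max_tokens):
    """Step 13: confirm the combined package fits the allowed token limit."""
    return sum(allocation.values()) <= max_tokens

# Hypothetical priorities for a generated policy document.
weights = {"security_docs": 50, "past_audits": 30, "sops": 20}
validate_weights(weights)
allocation = allocate_budget(weights, token_budget=8000)
```

&lt;p&gt;Failing fast on an invalid weight set, before any retrieval or generation runs, is what keeps the later LLM calls from wasting tokens on a misconfigured request.&lt;/p&gt;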
&lt;h3&gt;Security and integrity&lt;/h3&gt; 
&lt;p&gt;DMI Document AI solution is engineered with a &lt;a href="https://aws.amazon.com/what-is/security-architecture/" target="_blank" rel="noopener"&gt;defense-in-depth architecture&lt;/a&gt;, so that every stage of the multistep process adheres to stringent government mandates for privacy, auditability, and data integrity.&lt;/p&gt; 
&lt;p&gt;For infrastructure and data sovereignty, the solution is deployed through IaC using AWS CloudFormation templates, allowing for rapid, standardized deployment directly within the agency’s own AWS account. This supports data sovereignty and Amazon Virtual Private Cloud (&lt;a href="https://aws.amazon.com/vpc/" target="_blank" rel="noopener"&gt;Amazon VPC&lt;/a&gt;) isolation, preventing data from leaving the agency’s secure perimeter or being used to train third-party models. The solution can be augmented with &lt;a href="https://aws.amazon.com/privatelink/" target="_blank" rel="noopener"&gt;AWS PrivateLink&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html#concepts-vpc-endpoints" target="_blank" rel="noopener"&gt;VPC endpoints&lt;/a&gt; for interservice communication to further strengthen the security posture.&lt;/p&gt; 
&lt;p&gt;To maintain a zero trust posture, the solution uses &lt;a href="https://aws.amazon.com/iam/" target="_blank" rel="noopener"&gt;AWS Identity and Access Management (IAM)&lt;/a&gt; controls for access and encryption and API security to enforce the principle of least privilege. Data is protected by industry-standard encryption both at rest and in transit, so that sensitive materials such as regulatory records and personnel files remain secure throughout the ingestion and extraction lifecycle.&lt;/p&gt; 
&lt;p&gt;The system uses Amazon Bedrock Guardrails specifically configured for high-sensitivity federal use cases. Examples of these include:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;PII protection&lt;/strong&gt; – Automatically detects and masks PII such as Social Security numbers found in eligibility evidence and case files.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Prompt injection defense&lt;/strong&gt; – Neutralizes malicious attempts to bypass model constraints or exploit vulnerabilities, referred to as jailbreaking, in AI systems, keeping the system focused only on its mission-critical extraction or generation tasks.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Content filtering&lt;/strong&gt; – Prevents the generation of harmful or off-topic content during the creation of documents.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Governance and compliance&lt;/strong&gt; – Provides comprehensive audit logging, tracking interactions for full transparency. Because the solution uses FedRAMP-accredited AWS services, agencies can inherit established security controls, significantly fast-tracking the path to final Authority to Operate (ATO) for their implementations. Refer to the &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture-generative-ai" target="_blank" rel="noopener"&gt;Security Reference Architecture for Generative AI&lt;/a&gt; for details on securing AI solutions.&lt;/li&gt; 
&lt;/ul&gt; 
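&lt;p&gt;A guardrail along these lines can be expressed as configuration for the Amazon Bedrock CreateGuardrail API. The sketch below builds such a configuration as plain data; the guardrail name, blocked-message text, and specific PII entities are illustrative assumptions, not the actual DMI setup.&lt;/p&gt;

```python
# Sketch: an Amazon Bedrock Guardrails configuration covering the protections
# listed above. Names and messages are illustrative assumptions.

def build_guardrail_config(name):
    """Build kwargs for bedrock.create_guardrail(**config)."""
    return {
        "name": name,
        # PII protection: mask Social Security numbers and names rather than
        # returning them in model output.
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "ANONYMIZE"},
                {"type": "NAME", "action": "ANONYMIZE"},
            ]
        },
        # Prompt injection defense and content filtering; the PROMPT_ATTACK
        # filter applies to inputs only, so its output strength is NONE.
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
                {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            ]
        },
        "blockedInputMessaging": "This request cannot be processed.",
        "blockedOutputsMessaging": "The generated response was blocked.",
    }

config = build_guardrail_config("federal-idp-guardrail")
# import boto3
# boto3.client("bedrock").create_guardrail(**config)  # would create the guardrail
```

&lt;p&gt;Keeping the policy as data, rather than scattering it through application code, also simplifies the audit trail that the governance requirements above call for.&lt;/p&gt;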
&lt;h3&gt;An AI-enabled approach for digitization&lt;/h3&gt; 
&lt;p&gt;Traditional unstructured data extraction pipelines are often rule based, struggling to recognize new document types or adjust dynamically to changing layouts.&lt;/p&gt; 
&lt;p&gt;Although traditional template-based OCR has been the standard for decades, it’s insufficient for the scale and complexity of current federal datasets. DMI Document AI provides a flexible, &lt;strong&gt;OCR-free approach&lt;/strong&gt; that addresses these limitations.&lt;/p&gt; 
&lt;p&gt;The following table compares the abilities of traditional OCR and DMI Document AI across a range of features.&lt;/p&gt; 
&lt;table border="2"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;Traditional OCR&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;DMI Document AI&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Handwriting support&lt;/td&gt; 
   &lt;td&gt;Struggles with handwritten content and degraded images.&lt;/td&gt; 
   &lt;td&gt;Designed to interpret handwritten content, signatures, and aged paper records, such as archives, with high precision.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Context understanding&lt;/td&gt; 
   &lt;td&gt;Often loses context when layouts change or when dealing with complex structures such as table headers and chart captions.&lt;/td&gt; 
   &lt;td&gt;Uses generative AI and multimodal LLMs to apply natural language context, allowing it to intelligently classify documents and extract data based on meaning rather than rigid coordinates.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Deployment time&lt;/td&gt; 
   &lt;td&gt;Takes weeks or more to configure for new document types.&lt;/td&gt; 
   &lt;td&gt;Uses IaC through CloudFormation templates. The entire solution can be configured and deployed in days, providing rapid, measurable value.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Maintenance&lt;/td&gt; 
   &lt;td&gt;Forces teams into a reactive cycle of tweaking extraction rules each time form layout changes.&lt;/td&gt; 
   &lt;td&gt;Decouples extraction logic from underlying code using smart templates (blueprints), which allow for seamless updates through straightforward configuration, significantly reducing operational burden.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
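&lt;p&gt;To illustrate the smart-template (blueprint) idea from the table, the sketch below keeps extraction targets in configuration rather than code, so a form layout change means editing a schema instead of a parser. The document class, field names, and prompt wording are hypothetical examples, not DMI’s actual blueprint format.&lt;/p&gt;

```python
# Sketch: config-driven extraction. The "blueprint" declares what to extract;
# the code that renders it into LLM instructions never changes per layout.
# Field names and descriptions are hypothetical examples.

INVOICE_BLUEPRINT = {
    "class": "invoice",
    "fields": {
        "invoice_number": "The unique identifier printed on the invoice.",
        "vendor_name": "The legal name of the vendor issuing the invoice.",
        "total_amount": "The final amount due, including taxes.",
    },
}

def blueprint_to_prompt(blueprint):
    """Render a blueprint as extraction instructions for a multimodal LLM."""
    lines = [f"Extract the following fields from this {blueprint['class']} document:"]
    for field, description in blueprint["fields"].items():
        lines.append(f"- {field}: {description}")
    lines.append("Return the result as JSON with exactly these field names.")
    return "\n".join(lines)

prompt = blueprint_to_prompt(INVOICE_BLUEPRINT)
```

&lt;p&gt;Supporting a new document type then reduces to adding a new blueprint dictionary, which is the maintenance advantage the table describes.&lt;/p&gt;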
&lt;h3&gt;DMI’s proprietary advanced RAG: An innovative approach for LDG&lt;/h3&gt; 
&lt;p&gt;Enterprise RAG has been used to create on-demand, high-quality documents for agencies. However, current RAG pipelines often function as generic systems that ingest all data elements wholesale, including attributes that aren’t business essential. The result is inefficient use of LLM resources and a pipeline that ignores the priorities business users care about most.&lt;/p&gt; 
&lt;p&gt;As an advanced RAG-based solution, DocuGen can create long documents. For example, DocuGen has autonomously generated 200-page documents in only 45 minutes in a representative testing environment. This drastically cuts down time and reduces manual labor and errors.&lt;/p&gt; 
&lt;p&gt;DocuGen helps the AI workflow produce better content that is contextually relevant by incorporating a proprietary, user-steered prioritization model. Users start by assigning percentage weights (called user-defined weights) to reflect what matters most to them. The system confirms that the percentages add up to 100%. It then uses these weights to decide how to prioritize information from each document source, such as agency security documentation, past audit reports, and existing SOPs.&lt;/p&gt; 
&lt;p&gt;It selects an optimal portion of content from each source, combines the portions into a single working package, and keeps that package within the LLM’s context window limits. The system then sends the package, along with the prompt, to the AI model to generate the targeted content.&lt;/p&gt; 
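&lt;p&gt;The weighted packaging step can be sketched in a few lines of Python: the user-defined weights must sum to 100%, and each source then receives a proportional share of the model’s context budget. Whitespace tokenization and truncation are simplifications for illustration; DMI’s actual selection logic is proprietary.&lt;/p&gt;

```python
# Sketch: user-steered source prioritization. Weights must sum to 100%;
# each source gets a proportional slice of the context budget.
# Token counting by whitespace split is a simplification.

def allocate_context(sources, weights, context_limit_tokens):
    """Select a weighted slice of each source so the combined package fits."""
    if sum(weights.values()) != 100:
        raise ValueError("User-defined weights must add up to 100%")
    package = []
    for name, text in sources.items():
        budget = context_limit_tokens * weights[name] // 100
        tokens = text.split()
        package.append(" ".join(tokens[:budget]))  # keep the leading slice
    return "\n\n".join(package)

sources = {
    "security_docs": "word " * 500,
    "audit_reports": "word " * 500,
    "existing_sops": "word " * 500,
}
weights = {"security_docs": 50, "audit_reports": 30, "existing_sops": 20}
package = allocate_context(sources, weights, context_limit_tokens=200)
```

&lt;p&gt;Here a 200-token budget yields 100 tokens from security documentation, 60 from audit reports, and 40 from SOPs, mirroring the priorities the user assigned.&lt;/p&gt;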
&lt;p&gt;Finally, the output is reviewed by the user, using A2I, and the feedback is used to refine future results. This creates a repeatable loop that improves accuracy, relevance, and alignment with business objectives.&lt;/p&gt; 
&lt;h3&gt;Designed for broad application&lt;/h3&gt; 
&lt;p&gt;Agencies are at different stages of technical maturity when it comes to handling unstructured data, and they’re exploring a range of AI-powered IDP solutions. Some have experimented with proofs of concept (POCs) and generic LLMs such as ChatGPT, while others have limited AI skills and resources. Document AI offers a way to automate tasks more efficiently and manage workloads effectively. It supports modular deployment, allowing each module (DocuChew, DocuRun, and DocuGen) to be deployed independently. Here are the key benefits:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Deployment&lt;/strong&gt; – The plug-and-play design integrates seamlessly with existing enterprise systems.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Customization&lt;/strong&gt; – Each component can be tailored to meet the specific needs of the agencies.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Integration&lt;/strong&gt; – Components can be deployed and integrated within existing environments independently.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Document AI use case examples&lt;/h3&gt; 
&lt;p&gt;Organizations across large federal, defense, civilian, and health sectors can streamline their document operations and expand their capabilities, as demonstrated in DMI’s work with a U.S. Department of Defense organization and multiple other federal customers. For example, DMI supported a &lt;strong&gt;U.S. Department of Defense organization&lt;/strong&gt; in modernizing its records management system, enabling scalable processing and secure storage of millions of personnel records. This effort enhanced document handling, accessibility, and organization, significantly reducing total cost of ownership while improving the speed and accuracy of working with high-volume document sets.&lt;/p&gt; 
&lt;p&gt;Some of the most common use cases include:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Automate digital document intake, processing, and extraction&lt;/strong&gt; of content such as invoices and case files, reducing data engineering time for business users.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Use AI to automatically index incoming personnel documents&lt;/strong&gt;, enabling the reassignment of personnel.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Generate new documents from existing knowledge bases&lt;/strong&gt; (such as internal policies and manuals).&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Automate structured and unstructured processing and progress review of grants&lt;/strong&gt;, reducing review timelines for grant approvers.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;&lt;a href="https://dminc.com/" target="_blank" rel="noopener"&gt;DMI&lt;/a&gt;, an &lt;a href="https://aws.amazon.com/partners/services-tiers/" target="_blank" rel="noopener"&gt;AWS Advanced Tier Services Partner&lt;/a&gt;, helps government agencies cut through document processing burdens by eliminating manual, repetitive tasks and embracing secure, responsible AI. Powered by Amazon Bedrock and Amazon Bedrock Data Automation—and tailored for mission operations—it accelerates the shift to a modern, outcome-focused organization.&lt;/p&gt; 
&lt;p&gt;With Document AI, agencies can handle different document types, streamline document compliance, and automate document workflows with human-in-the-loop review without the need to overhaul their systems.&lt;/p&gt; 
&lt;p&gt;DMI brings together an integrated set of mission-driven transformation services and solutions to help US defense, intelligence, and federal civilian agencies evolve through the next wave of digital transformation and beyond. Combining the best of both public and private sector expertise, DMI provides managed services, application development, digital strategy and consulting, cloud transformation, cybersecurity, and AI services to deliver human-centric outcomes at scale while maintaining the highest standards for reliability, performance, and security.&lt;/p&gt; 
&lt;p&gt;To learn more about DMI’s Document AI capabilities and how agencies are applying this approach in production environments, visit the &lt;a href="https://partners.amazonaws.com/partners/001E000000U0VKUIA3/" target="_blank" rel="noopener"&gt;DMI AWS Partner Profile&lt;/a&gt; or contact our team at &lt;a href="mailto:engage@dminc.com" target="_blank" rel="noopener"&gt;engage@dminc.com&lt;/a&gt;. You can also connect with &lt;a href="https://dminc.com/contact/" target="_blank" rel="noopener"&gt;DMI&lt;/a&gt; to explore use cases aligned to your agency’s document processing and modernization priorities.&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>University of Maryland Athletics Transforms Fan Experience with AWS: A Data-Driven Success Story</title>
		<link>https://aws.amazon.com/blogs/publicsector/university-of-maryland-athletics-transforms-fan-experience-with-aws-a-data-driven-success-story/</link>
		
		<dc:creator><![CDATA[Kaitlin Darby]]></dc:creator>
		<pubDate>Mon, 04 May 2026 22:01:45 +0000</pubDate>
				<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">f45b8489e74e76243527b063dcdee191c5553508</guid>

					<description>Learn how the University of Maryland Athletics Department has achieved a remarkable transformation in how it understands and serves its fans, partnering with Amazon Web Services (AWS) to build a unified data platform that's delivering measurable results across operations, insights, and revenue generation.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30556 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/01/University-of-Maryland-Athletics-Transforms-Fan-Experience-with-AWS-1.png" alt="University of Maryland Athletics Transforms Fan Experience with AWS: A Data-Driven Success Story" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;The University of Maryland Athletics Department has achieved a remarkable transformation in how it understands and serves its fans, partnering with Amazon Web Services (AWS) to build a unified data platform that’s delivering measurable results across operations, insights, and revenue generation.&lt;/p&gt; 
&lt;h3&gt;The Challenge Every Athletics Department Knows&lt;/h3&gt; 
&lt;p&gt;Like many collegiate athletics programs, UMD Athletics faced fragmented data systems, time-consuming manual processes, and delayed insights that made it difficult to respond quickly to fan needs and market opportunities. Survey analysis took hours each week, pricing decisions relied on manual weekly checks, and delivering actionable insights to stakeholders could take 1-2 weeks—far too slow in today’s fast-paced sports environment.&lt;/p&gt; 
&lt;h3&gt;Creating Actionable Insights At Scale&lt;/h3&gt; 
&lt;p&gt;UMD Athletics partnered with AWS to create a comprehensive platform that brings together customer profiles, fan segmentation, and self-service analytics across the entire organization. The journey began following an enablement session on modern data architecture led by UMD’s AWS account team. UMD Athletics took the first step by loading CSV ticket purchase data from Paciolan into Amazon S3, then leveraging AWS Glue to clean and transform that raw information into structured data products segmented by sport. Using Amazon Athena, the UMD analyst created views to query the data directly from S3, and integrated Amazon QuickSight to rapidly build dashboards surfacing revenue trends by sport, game, ticket type, and more — delivering immediate, actionable visibility without the need for a traditional data warehouse.&lt;/p&gt; 
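&lt;p&gt;The kind of Athena view described above can be sketched as follows. The database, table, column, and bucket names are hypothetical assumptions, not UMD’s actual schema.&lt;/p&gt;

```python
# Sketch: an Athena view over raw ticket-sales CSV data in S3, segmented by
# sport. All identifiers below are hypothetical examples.

def revenue_by_sport_view(database, source_table):
    """Build a CREATE VIEW statement for ticket revenue segmented by sport."""
    return f"""
    CREATE OR REPLACE VIEW {database}.revenue_by_sport AS
    SELECT sport, game_date, ticket_type,
           SUM(price_paid) AS revenue,
           COUNT(*)        AS tickets_sold
    FROM {database}.{source_table}
    GROUP BY sport, game_date, ticket_type
    """

sql = revenue_by_sport_view("umd_athletics", "paciolan_ticket_sales")
# import boto3
# boto3.client("athena").start_query_execution(
#     QueryString=sql,
#     ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
# )
```

&lt;p&gt;Because Athena queries S3 directly, a view like this gives dashboards structured, queryable data without standing up a data warehouse.&lt;/p&gt;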
&lt;p&gt;From that early success, expansion accelerated by harnessing the power of generative AI for development. Using Amazon Q Developer for code generation, analysts were able to describe their goals in natural language and receive Python code to accomplish the task — dramatically lowering the barrier to building new data pipelines. AWS Lambda functions were developed to incorporate season ticket holder data and donor information, broadening the platform’s view of each fan and supporter. Post-game survey data was then added to the mix and analyzed by Amazon Bedrock for sentiment analysis and fan segmentation, turning qualitative feedback into quantitative insight. Automated follow-up processes were built on top of this foundation, freeing up capacity for expanded survey programs and more targeted marketing efforts.&lt;/p&gt; 
&lt;p&gt;The result is a solution that combines cloud data infrastructure with AI-powered automation to transform raw data into actionable insights at scale.&lt;/p&gt; 
&lt;p&gt;The platform delivers 360-degree customer profiles and enables every team member — from ticketing to marketing to development — to access the insights they need through intuitive dashboards, without waiting for IT or analytics teams. What began as a single analyst loading a CSV file has grown into an organization-wide self-service analytics capability, with generative AI accelerating every stage of the data lifecycle.&lt;/p&gt; 
&lt;h3&gt;Results That Transform Operations&lt;/h3&gt; 
&lt;p&gt;The platform has delivered substantial cost reductions and time savings, with $75-80K in annual operating expense savings, 90+ hours saved annually from automated pricing processes alone, and hundreds of additional hours recovered from eliminating manual report processing.&lt;/p&gt; 
&lt;p&gt;Perhaps most impressive is how the platform has accelerated decision-making. Survey analysis has been reduced from hours to minutes using AI-powered categorization. Insights delivery has accelerated from 1-2 weeks to near real-time, enabling mid-season action on fan feedback rather than waiting for end-of-year reviews. Survey capacity has doubled from 10 to 20+ annually with no additional headcount required. And pricing decisions are now 7x faster, moving from weekly manual checks to daily automated alerts.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“AWS has allowed us to build tooling that moves at our speed and scale, fully customized to our business. Being able to aggregate and analyze data holistically while piecing together the entire fan journey, allows us to react proactively, and that’s completely changed how we make decisions.”&lt;/p&gt; 
 &lt;p&gt;John Tieso, Lead Data &amp;amp; Marketing Analyst, University of Maryland Athletics&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;h3&gt;Revenue Impact: From Weekly Reviews to Daily Action&lt;/h3&gt; 
&lt;p&gt;The platform’s ability to integrate primary and secondary market data, velocity metrics, and attendance patterns has transformed pricing strategy: daily automated pricing alerts replace weekly manual checks, same-day pricing decisions respond to market conditions in real time, and the department projects $200K-$300K+ in incremental revenue for 2026.&lt;/p&gt; 
&lt;p&gt;The transformation goes beyond numbers. UMD Athletics has shifted from a reactive posture—analyzing last season’s data to inform next season’s decisions—to a proactive approach where fan feedback and market signals drive immediate action. The athletics department can now identify and respond to fan sentiment trends during the season, adjust pricing strategies daily based on comprehensive market intelligence, expand survey programs to gather more feedback without adding staff, and empower staff across departments with self-service access to insights.&lt;/p&gt; 
&lt;h3&gt;A Blueprint for Collegiate Athletics&lt;/h3&gt; 
&lt;p&gt;As programs nationwide face pressure to deliver exceptional fan experiences while managing costs and maximizing revenue, UMD’s transformation offers a proven roadmap. The key ingredients: unified data infrastructure that breaks down silos, AI-powered automation that accelerates insights, and self-service analytics that democratize access to intelligence across the organization.&lt;/p&gt; 
&lt;p&gt;The University of Maryland Athletics Department’s partnership with AWS demonstrates how modern cloud technologies and AI can transform operations in collegiate athletics. By building a unified data platform that breaks down silos and accelerates insights, UMD has created a foundation for continued innovation in fan engagement and revenue optimization.&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;The University of Maryland Athletics Department continues to expand its use of AWS services to enhance fan experiences and operational excellence. For more information about AWS solutions for sports and entertainment organizations, visit aws.amazon.com.&lt;/em&gt;&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>This content has been blocked by our content filters on Amazon Bedrock—Now what?</title>
		<link>https://aws.amazon.com/blogs/publicsector/this-content-has-been-blocked-by-our-content-filters-on-amazon-bedrock-now-what/</link>
		
		<dc:creator><![CDATA[Josh Famestad]]></dc:creator>
		<pubDate>Mon, 04 May 2026 13:51:50 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Public Sector]]></category>
		<guid isPermaLink="false">9caab57e4d4dc38522beb5f649b02b35b663ec0b</guid>

					<description>If you’ve received the message “This content has been blocked by our content filters” while using a foundation model (FM) on Amazon Bedrock, or a model has otherwise refused to perform a task, you’re not alone. This post walks you through a practical framework for understanding why this happens and what you can do about it.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30613 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/08/This-content-has-been-blocked-by-our-content-filters-on-Amazon-Bedrock.jpg" alt="This content has been blocked by our content filters on Amazon Bedrock—Now what?" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;If you’ve received the message “This content has been blocked by our content filters” while using a &lt;a href="https://aws.amazon.com/what-is/foundation-models/" target="_blank" rel="noopener"&gt;foundation model (FM)&lt;/a&gt; on &lt;a href="https://aws.amazon.com/bedrock" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt;, or a model has otherwise refused to perform a task, you’re not alone. This is a question customers sometimes face when building applications to process sensitive or complex content on &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt; when that content triggers safety mechanisms. This post walks you through a practical framework for understanding why this happens and what you can do about it.&lt;/p&gt; 
&lt;h3&gt;Why do models refuse requests?&lt;/h3&gt; 
&lt;p&gt;Model providers train FMs with safety mechanisms that cause them to decline certain types of requests. Providers implement these mechanisms to prevent misuse and protect end users. Content filtering and refusal behavior come from multiple layers:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Model-level safety training&lt;/strong&gt; – Each FM is trained by its provider with built-in refusal behaviors. These vary between model families and even between versions within the same family.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Provider acceptable use policies (AUPs)&lt;/strong&gt; – Each model provider on Amazon Bedrock (Amazon, Anthropic, Meta, Mistral, Cohere, AI21 Labs, and others) maintains their own terms of service and AUP that define what the model can and can’t be used for. You can find these on the &lt;a href="https://aws.amazon.com/ai/responsible-ai/resources/" target="_blank" rel="noopener"&gt;AI Service Cards&lt;/a&gt; for Amazon models and on the &lt;a href="https://aws.amazon.com/legal/bedrock/third-party-models/" target="_blank" rel="noopener"&gt;Serverless Third-Party Models on Amazon Bedrock End User License Agreement (EULA) and Terms of Service (TOS).&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Amazon Bedrock platform-level protections&lt;/strong&gt; – Amazon Bedrock includes abuse detection features to prevent misuse that violates the AWS Acceptable Use Policy.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The challenge is that these safety mechanisms can’t distinguish between malicious intent and legitimate business need. Law enforcement analysts reviewing threat communications, healthcare providers processing crisis intervention notes, and content moderation platforms analyzing user reports might all encounter refusals.&lt;/p&gt; 
&lt;h3&gt;How to troubleshoot content filtering challenges&lt;/h3&gt; 
&lt;p&gt;We recommend two steps:&lt;/p&gt; 
&lt;p&gt;1. Verify the provider’s policies permit your use case.&lt;br&gt; 2. Evaluate the model’s technical performance with representative data and tasks.&lt;/p&gt; 
&lt;p&gt;These steps are explained in the following sections.&lt;/p&gt; 
&lt;h3&gt;Verify your use case is permitted&lt;/h3&gt; 
&lt;p&gt;Consult your legal and compliance teams when working with sensitive content. Confirm that your intended use case is allowed under the model provider’s AUP. Restrictions vary across models including limitations on:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Criminal justice and law enforcement applications&lt;/li&gt; 
 &lt;li&gt;Surveillance and monitoring&lt;/li&gt; 
 &lt;li&gt;Biometric identification and facial recognition&lt;/li&gt; 
 &lt;li&gt;Predictive policing&lt;/li&gt; 
 &lt;li&gt;Automated decision-making and profiling&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Some restrictions are absolute. Others can be addressed through an approval or exception process with the provider. Review the relevant policies for models and services in scope:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;AI Service Cards – Transparency resources covering intended use cases, limitations, and design choices for AWS AI services&lt;/li&gt; 
 &lt;li&gt;Serverless Third-Party Models on Amazon Bedrock&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/nova/latest/userguide/responsible-use.html" target="_blank" rel="noopener"&gt;Responsible use&lt;/a&gt; in the Amazon Nova User Guide&lt;/li&gt; 
 &lt;li&gt;AWS Acceptable Use Policy&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/abuse-detection.html" target="_blank" rel="noopener"&gt;Amazon Bedrock abuse detection&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; Don’t invest engineering effort into a model whose provider doesn’t permit your use case.&lt;/p&gt; 
&lt;h3&gt;Evaluate models with your own data&lt;/h3&gt; 
&lt;p&gt;Published benchmarks and general guidance can help you narrow the field, but the only reliable way to know if a model works for your use case is to test it with data that’s representative of your actual workload.&lt;/p&gt; 
&lt;p&gt;Amazon Bedrock Evaluations provides built-in &lt;a href="https://aws.amazon.com/bedrock/evaluations/" target="_blank" rel="noopener"&gt;model evaluation&lt;/a&gt; capabilities that you can use to compare models using your own datasets and evaluation criteria. You can assess models on task accuracy, robustness, toxicity detection, and run evaluations across multiple models simultaneously.&lt;/p&gt; 
&lt;p&gt;With open source evaluation frameworks such as &lt;a href="https://www.promptfoo.dev/" target="_blank" rel="noopener"&gt;PromptFoo&lt;/a&gt;, you can define test cases with assertions to systematically measure how models handle your content. You can create evaluation suites that test whether a model refuses or completes tasks across your specific content categories, using both deterministic checks (for example, detecting refusal patterns in output) and model-graded assessments (for example, scoring output quality against a rubric).&lt;/p&gt; 
&lt;p&gt;What to measure:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Task completion rate&lt;/strong&gt; – How often does the model successfully complete the task and how often does it refuse?&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Refusal patterns&lt;/strong&gt; – Which content categories trigger refusals? Are they relevant to your workload?&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Output quality&lt;/strong&gt; – When the model does respond, is the output accurate and useful?&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Consistency&lt;/strong&gt; – Does the model behave predictably, or do minor prompt variations cause different refusal behavior?&lt;/li&gt; 
&lt;/ul&gt; 
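&lt;p&gt;A minimal sketch of the deterministic side of this measurement follows: a pass over model outputs that computes task completion rate and which content categories trigger refusals. The refusal marker phrases are illustrative assumptions; tune them per model family, and pair them with model-graded quality checks.&lt;/p&gt;

```python
# Sketch: deterministic refusal detection and the metrics listed above.
# Marker phrases are illustrative; different model families phrase refusals
# differently.
from collections import Counter

REFUSAL_MARKERS = (
    "blocked by our content filters",
    "i can't help with",
    "i cannot assist",
)

def is_refusal(output):
    return any(marker in output.lower() for marker in REFUSAL_MARKERS)

def score_outputs(results):
    """results: list of (content_category, model_output) pairs."""
    refusals_by_category = Counter(
        category for category, output in results if is_refusal(output)
    )
    refused = sum(refusals_by_category.values())
    return {
        "task_completion_rate": 1 - refused / len(results),
        "refusals_by_category": dict(refusals_by_category),
    }

results = [
    ("threat-analysis", "This content has been blocked by our content filters"),
    ("threat-analysis", "Summary: the communication references ..."),
    ("case-notes", "Here is the extracted summary ..."),
    ("case-notes", "I can't help with that request."),
]
metrics = score_outputs(results)
```

&lt;p&gt;Running the same scorer over outputs from several candidate models, and over minor prompt variations, also surfaces the consistency signal described above.&lt;/p&gt;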
&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; Narrow the field with the policy check first, then invest evaluation effort in the short list, iterating on prompts across the remaining candidate models.&lt;/p&gt; 
&lt;h3&gt;Putting it all together&lt;/h3&gt; 
&lt;p&gt;In summary, here’s the decision flow:&lt;/p&gt; 
&lt;p&gt;1. Check the policies – Identify models that permit your use case.&lt;/p&gt; 
&lt;p&gt;2. Evaluate with your data – Test your short-listed models with representative samples from your actual use case.&lt;/p&gt; 
&lt;p&gt;This isn’t a one-time exercise. As new models become available on Amazon Bedrock and providers update their policies and safety training, it’s worth periodically reevaluating your model selection.&lt;/p&gt; 
&lt;h3&gt;Resources&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html" target="_blank" rel="noopener"&gt;Amazon Bedrock User Guide&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/considerations-for-addressing-the-core-dimensions-of-responsible-ai-for-amazon-bedrock-applications/" target="_blank" rel="noopener"&gt;Considerations for addressing the core dimensions of responsible AI for Amazon Bedrock applications&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>CMMC implementation begins: A new era for defense contractors</title>
		<link>https://aws.amazon.com/blogs/publicsector/cmmc-implementation-begins-a-new-era-for-defense-contractors/</link>
		
		<dc:creator><![CDATA[Paul Keastead]]></dc:creator>
		<pubDate>Mon, 04 May 2026 13:20:58 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[CMMC]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[defense]]></category>
		<category><![CDATA[DFARS]]></category>
		<category><![CDATA[nist]]></category>
		<guid isPermaLink="false">f63dc3d2e43fe810ded681615c960c48fa0c5e87</guid>

					<description>This post explores the implications of these developments and what they mean for businesses in the defense sector. This includes organizations in aerospace, defense satellite, healthcare, manufacturing, and higher education that conduct business with the Department of War (DoW).</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30932 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/03/CMMC-implementation-begins-A-new-era-for-defense-contractors.png" alt="" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;The long-awaited &lt;a href="https://dodcio.defense.gov/CMMC/" target="_blank" rel="noopener"&gt;Cybersecurity Maturity Model Certification&lt;/a&gt; (CMMC) 2.0 is now a reality for the &lt;a href="https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors/defense-industrial-base-sector" target="_blank" rel="noopener"&gt;Defense Industrial Base (DIB)&lt;/a&gt;. With the finalization of both the &lt;a href="https://www.ecfr.gov/current/title-32" target="_blank" rel="noopener"&gt;Code of Federal Regulations (CFR) Title 32&lt;/a&gt; and &lt;a href="https://www.ecfr.gov/current/title-48" target="_blank" rel="noopener"&gt;CFR Title 48&lt;/a&gt; rules, we’ve entered a new era of cybersecurity requirements for defense contractors. This post explores the implications of these developments and what they mean for businesses in the defense sector. This includes organizations in aerospace, defense satellite, healthcare, manufacturing, and higher education that conduct business with the &lt;a href="https://www.defense.gov/" target="_blank" rel="noopener"&gt;Department of War&lt;/a&gt; (DoW). AWS supports these organizations in CMMC implementation through comprehensive security services, compliance documentation, and infrastructure that aligns with CMMC requirements across all levels while providing tools and resources to help organizations achieve and maintain certification.&lt;/p&gt; 
&lt;p&gt;The road to CMMC implementation has been a carefully orchestrated process. The 32 CFR CMMC Final Rule, published on October 15, 2024, and effective as of December 16, 2024, laid the groundwork by establishing the CMMC Program, defining security controls for each CMMC level and outlining assessment and certification processes. Following this, the crucial 48 CFR rule, which integrates CMMC requirements into the &lt;a href="https://www.acquisition.gov/dfars" target="_blank" rel="noopener"&gt;Defense Federal Acquisition Regulation Supplement (DFARS)&lt;/a&gt;, has now been finalized. This means that all contracts that have Federal Contract Information (FCI) and Controlled Unclassified Information (CUI) will require an assessment of the contractor or subcontractor environment to ensure they’ve implemented the proper cybersecurity controls.&lt;/p&gt; 
&lt;p&gt;The DoW has now begun the phased rollout of CMMC requirements in contracts. This marks the start of a new era in defense contracting, where cybersecurity compliance is no longer just a contractual obligation but a prerequisite for doing business with the DoW.&lt;/p&gt; 
&lt;h3&gt;What this means for contractors&lt;/h3&gt; 
&lt;p&gt;On November 10, 2025, CMMC requirements began appearing in select new contracts, with full implementation expected by fiscal year 2028. This gives contractors time to adapt, but it also means that early adopters will have a competitive advantage in the market. Contractors and subcontractors face several significant challenges as they pursue CMMC certification. The requirement for pre-award certification has fundamentally changed the contracting landscape, as organizations must now achieve certification before they can be awarded DoW contracts. Additionally, prime contractors bear the responsibility of ensuring their subcontractors meet appropriate CMMC levels, creating cascading compliance requirements throughout the supply chain. The new framework’s restrictions on Plans of Action and Milestones (POA&amp;amp;Ms) further complicate matters, as organizations must demonstrate proactive compliance rather than relying on reactive planning approaches. Finally, CMMC 2.0 demands ongoing maintenance of cybersecurity practices through continuous monitoring, moving beyond the traditional point-in-time certification model to ensure sustained security posture.&lt;/p&gt; 
&lt;p&gt;When contractors and subcontractors are ready to move forward, they can follow this five-step plan:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Assess current posture&lt;/strong&gt; – Conduct a thorough gap analysis or self-assessment against CMMC requirements for your targeted level.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Develop compliance strategy&lt;/strong&gt; – Create a comprehensive roadmap for achieving and maintaining CMMC compliance.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Initiate certification process&lt;/strong&gt; – Begin working with a certified third-party assessment organization (C3PAO) to schedule your assessment for CMMC level 2.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Manage the supply chain&lt;/strong&gt; – Review and update agreements with subcontractors to ensure they meet the necessary CMMC levels.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Invest in training and documentation&lt;/strong&gt; – Implement robust training programs and documentation processes to support ongoing compliance.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;The implementation of CMMC represents a significant shift in how the DoW approaches cybersecurity in its supply chain. Although it presents challenges, it also offers opportunities for contractors who can effectively navigate the new landscape. Those who embrace these changes and demonstrate their commitment to robust cybersecurity practices will be best positioned for success in future defense contracting.&lt;/p&gt; 
&lt;p&gt;Expect to see increased scrutiny of cybersecurity practices, not only during the certification process, but throughout the lifecycle of contracts. The DoW’s commitment to enhancing the security of the DIB is clear, and contractors must align with this vision to remain competitive. Organizations that can adapt and comply with these new regulations are more likely to thrive in this new cybersecurity-focused environment. For more information on how to accelerate CMMC with AWS, visit &lt;a href="https://aws.amazon.com/compliance/cmmc/" target="_blank" rel="noopener"&gt;https://aws.amazon.com/compliance/cmmc/&lt;/a&gt; or contact &lt;a href="mailto:CMMConAWS@amazon.com" target="_blank" rel="noopener"&gt;CMMConAWS@amazon.com&lt;/a&gt;.&lt;/p&gt; 
</content:encoded>
					
		
		
			</item>
		<item>
		<title>How nonprofits can explore, document, and enhance applications using Kiro</title>
		<link>https://aws.amazon.com/blogs/publicsector/how-nonprofits-can-explore-document-and-enhance-applications-using-kiro/</link>
		
		<dc:creator><![CDATA[Ben Turnbull]]></dc:creator>
		<pubDate>Wed, 29 Apr 2026 23:03:05 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AWS Transform]]></category>
		<category><![CDATA[Kiro]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">0217792acc7a8a44d8db97fc48b90fa9025c0dd1</guid>

					<description>In this blog post, you'll see how Kiro, an AI-assisted development environment powered by Amazon Bedrock, helps nonprofit technical teams close that knowledge gap from both sides.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30888 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/How-nonprofits-can-explore-document-and-enhance-applications-using-Kiro.png" alt="How nonprofits can explore, document, and enhance applications using Kiro" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;Nonprofit technical teams are caught in a knowledge squeeze. On one side, staff departures walk institutional knowledge out the door. According to the 2026 &lt;a href="https://www.thenonprofiteers.com/sisr" target="_blank" rel="noopener"&gt;Social Impact Staff Retention Project&lt;/a&gt; survey, nearly 7 in 10 nonprofit employees are considering new jobs, taking undocumented knowledge with them. On the other, new applications arrive from contractor handoffs, open source adoptions, and rapid prototyping sprints, each one adding complexity that no one has time to document. The gap between what your team owns and what your team knows continues to widen.&lt;/p&gt; 
&lt;p&gt;In this post, you’ll see how &lt;a href="https://kiro.dev/" target="_blank" rel="noopener"&gt;Kiro&lt;/a&gt;, an AI-assisted development environment powered by &lt;a href="https://aws.amazon.com/bedrock" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt;, helps nonprofit technical teams close that knowledge gap from both sides. First, you’ll walk through techniques for exploring unfamiliar code bases, turning inherited applications into transparent, navigable systems rather than black boxes. Then, you’ll learn how to generate living documentation that captures institutional knowledge, accelerates onboarding for incoming staff, and stays current as your workloads evolve.&lt;/p&gt; 
&lt;h3&gt;The challenge of knowledge loss and rising complexity&lt;/h3&gt; 
&lt;p&gt;At &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;, nonprofit technical leaders regularly tell us that understanding their own applications is one of their biggest operational risks. Consider these common scenarios:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;A community health organization inherits a patient intake application after its lead developer departs. They’re left with a running application, a code repository, and little to no documentation.&lt;/li&gt; 
 &lt;li&gt;A youth services nonprofit adopts an open source case management platform. The team customizes it to fit their workflows, but 6 months later, no one remembers why certain modules were modified or how they interact with the rest of the system.&lt;/li&gt; 
 &lt;li&gt;A small IT team needs to onboard a contractor to add a critical feature but spends the first 2 weeks explaining how the application works, impacting timeline and budget.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;For many nonprofits, institutional knowledge lives in the heads of a few individuals, and when those individuals leave, the organization’s ability to maintain and improve its technology leaves with them. At the same time, new workloads keep arriving. Organizations explore open source solutions, contractors hand off deliverables from completed engagements, and each new project brings an unfamiliar tech stack that can take weeks to onboard. The knowledge deficit grows in both directions at once.&lt;/p&gt; 
&lt;h3&gt;Solution overview&lt;/h3&gt; 
&lt;p&gt;Kiro is an AI-assisted development environment that analyzes your existing code base and helps you generate documentation, architecture insights, and best-practice guidance without requiring deep prior knowledge of the application. Here’s how Kiro supports the knowledge gap at a high level:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Analyze your existing code base using AI to understand application structure, dependencies, and logic flows.&lt;/li&gt; 
 &lt;li&gt;Produce steering files, which are best-practice configuration files that guide future development, so new features align with your organization’s conventions.&lt;/li&gt; 
 &lt;li&gt;Generate documentation and architecture diagrams, creating institutional knowledge that might never have existed.&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://modelcontextprotocol.io/docs/getting-started/intro" target="_blank" rel="noopener"&gt;Model Context Protocol (MCP)&lt;/a&gt; connects Kiro to external knowledge sources, such as AWS documentation or your own internal tools, so that recommendations stay current and contextually relevant.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;The key benefits for nonprofit teams include the ability to accelerate application discovery from weeks to hours through AI-assisted exploration, generate living documentation that evolves with your code automatically, and apply best practices consistently, even without deep expertise on every team. The walkthrough in this post explores these elements in Kiro.&lt;/p&gt; 
&lt;h3&gt;Prerequisites&lt;/h3&gt; 
&lt;p&gt;To follow along with this walkthrough, you need Kiro installed on your development machine. For reference, I will use the &lt;a href="https://github.com/aws-samples/bedrock-chat" target="_blank" rel="noopener"&gt;bedrock-chat repository&lt;/a&gt; from AWS samples to demonstrate a new code base exploration.&lt;/p&gt; 
&lt;h3&gt;Discovery and documentation&lt;/h3&gt; 
&lt;p&gt;As you begin exploring a new workload, use the built-in tools available in Kiro to explore the repository both at a high level and a more granular level.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Start with steering files&lt;br&gt; &lt;/strong&gt;Use Kiro’s predefined steering file generation to create an overview of your project in three distinct markdown files. These &lt;a href="https://kiro.dev/docs/steering/#foundational-steering-files" target="_blank" rel="noopener"&gt;foundational steering files&lt;/a&gt; establish core project context so Kiro can align with your goals and patterns during development and double as a useful summary as you onboard to the project.&lt;/p&gt; 
&lt;p&gt;To generate these steering files, follow these steps:&lt;/p&gt; 
&lt;p&gt;1. In the Kiro panel, choose &lt;strong&gt;AGENT STEERING &amp;amp; SKILLS&lt;/strong&gt;.&lt;br&gt; 2. Choose &lt;strong&gt;Generate Steering Docs&lt;/strong&gt;.&lt;br&gt; 3. The Kiro agent will explore the repository and produce the following three documents automatically:&lt;/p&gt; 
&lt;p&gt;a. &lt;code&gt;product.md&lt;/code&gt; – Describes what your repository does.&lt;br&gt; b. &lt;code&gt;tech.md&lt;/code&gt; – Lists your frameworks, libraries, and tools.&lt;br&gt; c. &lt;code&gt;structure.md&lt;/code&gt; – Maps out how your project is organized.&lt;/p&gt; 
&lt;p&gt;The following screenshots illustrate these steps.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/Picture1-kiro-panel.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30881 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/Picture1-kiro-panel.png" alt="Three screenshots of the Kiro console showing the steps described in the text." width="1512" height="404"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 1: Foundational steering documents in the Kiro panel&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;These foundational files help your team onboard by providing a human-readable project summary, but steering files primarily help Kiro learn your organization’s patterns and best practices for development. As you encode standards, such as preferred naming conventions, security patterns, or architectural decisions into custom steering files, Kiro uses them to guide future code suggestions and feature development. This means new developers get consistent guidance from both the documentation and the AI tool itself, reducing the risk of diverging from established patterns. You can read more about this in the &lt;strong&gt;Keeping documentation up to date&lt;/strong&gt; section later in the post.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Generate custom documentation with Kiro chat&lt;/strong&gt;&lt;br&gt; For more tailored documentation, use the Kiro built-in &lt;a href="https://kiro.dev/docs/chat/subagents/" target="_blank" rel="noopener"&gt;context gathering subagent&lt;/a&gt;. Kiro automatically indexes your code base when you open a project so the subagent can explore your repository and generate custom documentation on demand.&lt;/p&gt; 
&lt;p&gt;Open a new Kiro chat session in &lt;a href="https://kiro.dev/docs/chat/vibe/" target="_blank" rel="noopener"&gt;Vibe mode&lt;/a&gt; and prompt it to generate an onboarding document. This document will contain useful context to help someone new better understand the project. It can follow common &lt;a href="https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-readmes" target="_blank" rel="noopener"&gt;README&lt;/a&gt; conventions, or you can extend it into an onboarding narrative with service overviews and custom guidance. Consider prompting Kiro to tailor the onboarding content to your team members’ experience levels. Kiro uses its context subagent to explore the repo and build context that will be summarized in a final markdown document.&lt;/p&gt; 
&lt;p&gt;Consider submitting this sample prompt to the Kiro chat to generate custom documentation:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-json"&gt;Explore this repository and generate a guided onboarding narrative called ONBOARDING.md. The document should be welcoming, conversational, and structured like a senior teammate walking a new developer through the project on day one. Cover these sections in order:

1. Welcome &amp;amp; what this project does: What the app is, who uses it, and the problem it solves.
2. Architecture at a glance: Mermaid diagram of major components and request flow, with brief explanations.
3. Key concepts &amp;amp; domain language: Project-specific terms defined and mapped to implementing code.
4. Local environment setup: Prerequisites, step-by-step install, dev stack deployment, and common gotchas.
5. Day-to-day workflows: Running tests, adding endpoints/tools, modifying infra, and CI/CD overview.
6. Services cheat sheet: Table of every service dependency: role, abstracting module, docs link, and quota notes.
7. Where to go from here: Links to existing docs, suggested first tasks, and team contact placeholders.
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;The following screenshot shows Kiro’s response. First, it indicates it will explore the repository structure. Then, it creates and verifies a comprehensive onboarding document to orient new staff in a Kiro chat.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/Figure-2-Kiro-chat.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30882 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/Figure-2-Kiro-chat.png" alt="Screenshot of the Kiro console showing the Kiro response to the sample prompt described in the text of the post. The response is described in the text." width="1377" height="532"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 2: Kiro chat generates an onboarding document to orient new staff&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;For more granular exploration, Kiro chat is context aware. You can open a file and ask, “Summarize this file,” or reference specific files using the context mention icon (#). You can highlight a code snippet, right-click, and choose &lt;strong&gt;Kiro&lt;/strong&gt; then &lt;strong&gt;Chat&lt;/strong&gt; to ask targeted questions about that specific section of code. This makes Kiro chat a companion for ongoing code discovery as you learn more about the repository.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/context-aware-exploration.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30904 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/context-aware-exploration.png" alt="Screenshot of the Kiro console showing a file on the left and a Kiro window on the right summarizing what the file does." width="1892" height="673"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 3: Context-aware file exploration in Kiro chat&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Scale discovery with advanced tools&lt;/h3&gt; 
&lt;p&gt;Foundational steering files and built-in subagents through Kiro chat are a good starting point for exploratory analysis. However, as your projects grow in complexity, you might require more sophisticated approaches. Many enterprise applications carry additional complexity, and documentation often needs to persist in shared spaces beyond a local file system. This section addresses more advanced methods for generating onboarding documentation through multi-root workspaces, the use of MCP servers, and shareable agent skills.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Large-scale application discovery&lt;/strong&gt;&lt;br&gt; Kiro supports &lt;a href="https://kiro.dev/docs/editor/multi-root-workspaces/" target="_blank" rel="noopener"&gt;multi-root workspaces&lt;/a&gt; for applications spanning multiple repositories, which is common in microservice architectures. Add additional folders to your workspace by choosing &lt;strong&gt;File&lt;/strong&gt; and then &lt;strong&gt;Add Folder to Workspace&lt;/strong&gt;, and the same discovery methods shared previously work across all roots. This simplifies exploration across the full stack of an application by unifying disparate repositories in one shared workspace.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Model Context Protocol&lt;/strong&gt;&lt;br&gt; You can extend Kiro’s capabilities through MCP servers to develop more robust documentation and connect with downstream systems. For example, configuring the &lt;a href="https://github.com/jgraph/drawio-mcp/blob/main/mcp-tool-server/README.md" target="_blank" rel="noopener"&gt;draw.io&lt;/a&gt; MCP server enables Kiro to generate architecture diagrams and visual workflows directly from your code base context. By adding the &lt;a href="https://support.atlassian.com/atlassian-rovo-mcp-server/docs/getting-started-with-the-atlassian-remote-mcp-server/" target="_blank" rel="noopener"&gt;Atlassian MCP server&lt;/a&gt;, you can publish documentation to Confluence, creating a shared knowledge base accessible to your entire team. You might also consider the &lt;a href="https://awslabs.github.io/mcp/servers/aws-knowledge-mcp-server" target="_blank" rel="noopener"&gt;AWS Knowledge MCP&lt;/a&gt; for code bases with AWS infrastructure to reference the latest AWS documentation for service explanation.&lt;/p&gt; 
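&lt;p&gt;For reference, MCP servers are registered in a JSON configuration file (in Kiro, a workspace-level &lt;code&gt;.kiro/settings/mcp.json&lt;/code&gt;). The following sketch shows the general shape of an entry only; the server name, command, and package are illustrative placeholders, and field names follow the common MCP client configuration format, so consult each server’s documentation for its exact setup:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-json"&gt;{
  "mcpServers": {
    "aws-knowledge": {
      "command": "uvx",
      "args": ["example-aws-knowledge-mcp-server@latest"],
      "disabled": false
    }
  }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;After you add an entry and restart the server from the Kiro panel, its tools become available to the chat agent alongside Kiro’s built-in capabilities.&lt;/p&gt; 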
&lt;p&gt;The following screenshot shows a diagram generated by Kiro using the Draw.io MCP server.&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/01/bedrock-chat-diagram.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30917 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/05/01/bedrock-chat-diagram.png" alt="Screenshot of the architecture diagram generated by the draw.io MCP server for the sample repository referenced throughout the post." width="1679" height="907"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 4: Draw.io diagram generated through Kiro chat using the Draw.io MCP server&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;These MCP servers include bundles of tools that are readily available for you to use through the Kiro chat. Next, you can compile prompts and MCP servers into a packaged agentic skill for repeatable tasks such as generating initial onboarding documentation for a new project.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Agent skills for repeatable documentation&lt;/strong&gt;&lt;br&gt; &lt;a href="https://kiro.dev/docs/skills/" target="_blank" rel="noopener"&gt;Skills in Kiro&lt;/a&gt; are portable instruction packages that follow the open &lt;a href="https://agentskills.io/" target="_blank" rel="noopener"&gt;Agent Skills&lt;/a&gt; standard. They bundle instructions, scripts, and templates into reusable packages that Kiro can activate when relevant to your task. Skills live in your &lt;code&gt;.kiro/skills&lt;/code&gt; directory and can be version controlled and shared across projects, driving consistent documentation practices organization-wide.&lt;/p&gt; 
&lt;p&gt;A documentation generation skill might break down into the following steps:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Use the context-gatherer subagent to analyze the repository.&lt;/li&gt; 
 &lt;li&gt;Call MCP tools to add technical details from external knowledge sources such as AWS documentation.&lt;/li&gt; 
 &lt;li&gt;With this collected context, generate an onboarding document to orient new users and store it in a project onboarding directory.&lt;/li&gt; 
 &lt;li&gt;Use the architecture context to create a draw.io diagram as an XML file and store it in the onboarding directory.&lt;/li&gt; 
 &lt;li&gt;Create a new page in Confluence that includes the full content of the onboarding document and a section for the draw.io diagram.&lt;/li&gt; 
&lt;/ol&gt; 
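&lt;p&gt;As a rough sketch, under the Agent Skills standard a skill is a folder containing a &lt;code&gt;SKILL.md&lt;/code&gt; file: a short metadata header followed by the instructions the agent loads when the skill is relevant. The name, description, and steps below are illustrative, not a definitive implementation:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-markdown"&gt;---
name: onboarding-docs
description: Generate onboarding documentation, an architecture diagram, and a Confluence page for the current repository
---

1. Use the context-gatherer subagent to analyze the repository.
2. Call available MCP tools (for example, AWS Knowledge) to add details on external service dependencies.
3. Write an ONBOARDING.md summary to the project's onboarding/ directory.
4. Generate a draw.io architecture diagram as an XML file in the same directory.
5. Publish the document and diagram to a new Confluence page through the Atlassian MCP server.
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 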
&lt;p&gt;The following screenshot shows the documentation uploaded directly to Confluence.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/confluence-doc.png" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30906 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/confluence-doc.png" alt="Screenshot of a sample page in Confluence generated by Kiro" width="1486" height="886"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 5: Confluence documentation published from the Kiro skill through the Atlassian MCP server&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Keeping documentation up to date&lt;/h3&gt; 
&lt;p&gt;Initial discovery is only half of the challenge. As your organization continues developing these systems, documentation must stay current to avoid the same knowledge gaps from resurfacing.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Agent hooks for documentation&lt;/strong&gt;&lt;br&gt; &lt;a href="https://kiro.dev/docs/hooks/" target="_blank" rel="noopener"&gt;Agent hooks&lt;/a&gt; execute predefined actions when specific events occur in your &lt;a href="https://aws.amazon.com/what-is/ide/" target="_blank" rel="noopener"&gt;integrated development environment (IDE)&lt;/a&gt;. For documentation, consider two approaches:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;code&gt;userTriggered&lt;/code&gt; event that updates docs on demand from a button click&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;fileEdited&lt;/code&gt; event with tight path patterns that trigger updates when core files (such as API routes, components, or services) change&lt;/li&gt; 
&lt;/ol&gt; 
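&lt;p&gt;To make the second approach concrete, a &lt;code&gt;fileEdited&lt;/code&gt; hook definition might look something like the following. Treat this as a sketch: the field names and file patterns are assumptions based on common hook configurations, so check the Kiro hooks documentation for the exact schema:&lt;/p&gt; 
&lt;div class="hide-language"&gt; 
 &lt;pre&gt;&lt;code class="lang-json"&gt;{
  "enabled": true,
  "name": "Refresh onboarding docs",
  "description": "Update onboarding documentation when core files change",
  "when": {
    "type": "fileEdited",
    "patterns": ["src/api/**/*.ts", "src/services/**/*.ts"]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Core application files changed. Update ONBOARDING.md and regenerate the architecture diagram to reflect the changes."
  }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 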
&lt;p&gt;When a hook fires, Kiro can use your previously designed skill to automatically update your onboarding document, regenerate diagrams, and sync changes to Confluence, keeping your documentation current without manual effort.&lt;/p&gt; 
&lt;p&gt;Note: Agent hooks rely on event signals from the IDE, so you’ll need the Kiro IDE to take advantage of them. As of this writing, hooks are unavailable in the &lt;a href="https://kiro.dev/cli/" target="_blank" rel="noopener"&gt;Kiro command line interface (CLI)&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Custom steering&lt;/strong&gt;&lt;br&gt; Steering files also play a forward-looking role here. Beyond onboarding documentation, use custom steering files to encode your organization’s development standards and AWS best practices. When a new developer starts working on a feature, these files guide them toward consistent patterns and away from common pitfalls. The foundational steering files help people understand what exists today, and custom steering files help Kiro (and your team) build what comes next in alignment with your organizational standards.&lt;/p&gt; 
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;For nonprofit teams, institutional knowledge shouldn’t depend on any single person. AI-assisted development tools such as Kiro make it possible for resource-constrained teams to rapidly document, explore, and enhance the applications they depend on. Automated discovery and documentation reduce onboarding time and close knowledge gaps that put mission-critical systems at risk, and MCP integration and steering files guide new features to follow best practices.&lt;/p&gt; 
&lt;p&gt;Ready to get started? &lt;a href="https://kiro.dev/" target="_blank" rel="noopener"&gt;Install Kiro&lt;/a&gt; and try it on one of your existing applications today. For organizations that want to assess and modernize legacy applications more broadly, explore &lt;a href="https://aws.amazon.com/transform/" target="_blank" rel="noopener"&gt;AWS Transform&lt;/a&gt; to understand your full application portfolio and plan a path forward.&lt;/p&gt; 
&lt;p&gt;About the author&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>Brightpoint uses AWS to build CARA, an AI-powered chat assistant connecting families to critical resources across Illinois</title>
		<link>https://aws.amazon.com/blogs/publicsector/brightpoint-uses-aws-to-build-cara-an-ai-powered-chat-assistant-connecting-families-to-critical-resources-across-illinois/</link>
		
		<dc:creator><![CDATA[Michael Shaver]]></dc:creator>
		<pubDate>Wed, 29 Apr 2026 22:40:33 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Amazon DynamoDB]]></category>
		<category><![CDATA[AWS CloudTrail]]></category>
		<category><![CDATA[AWS Professional Services]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Public Sector]]></category>
		<guid isPermaLink="false">704804e4e5de8939e45febb28e3a0f88f19a8576</guid>

					<description>Learn more about CARA - an intelligent, multilingual, and judgment-free digital assistant that makes it easier for families to find and connect with the right support anytime, anywhere.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30749 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/19/Brightpoint-uses-AWS-to-build-CARA-an-AI-powered-chat-assistant-connecting-families-to-critical-resources-across-Illinois.png" alt="Brightpoint uses AWS to build CARA, an AI-powered chat assistant connecting families to critical resources across Illinois" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.brightpoint.org/" target="_blank" rel="noopener"&gt;Brightpoint&lt;/a&gt; serves more than 37,000 children and families throughout Illinois, putting prevention at the center of their work to help families before small problems become life-altering crises. With support from the &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt; &lt;a href="https://aws.amazon.com/government-education/nonprofits/aws-imagine-grant-program/childrens-health-innovation-award/" target="_blank" rel="noopener"&gt;Children’s Health Innovation Award program&lt;/a&gt;, which provides both cash and AWS credit funding to registered nonprofit organizations, Brightpoint built Connection Assistant &amp;amp; Referral Agent (CARA). CARA is an intelligent, multilingual, and judgment-free digital assistant that makes it easier for families to find and connect with the right support anytime, anywhere. This always-available chat-based assistant enhances how Brightpoint connects families to resources, providing an additional assist to Brightpoint staff while strengthening the overall system of care across Illinois.&lt;/p&gt; 
&lt;h3&gt;Breaking down barriers to family support&lt;/h3&gt; 
&lt;p&gt;Families navigating complex systems face significant barriers when seeking support. Finding the right resources for home visiting, mental health services, childcare, and financial stability programs requires knowledge of available services and how to access them. Families can struggle to connect with appropriate resources, especially outside of traditional business hours when their Brightpoint team isn’t available.&lt;/p&gt; 
&lt;p&gt;Brightpoint recognized that their most knowledgeable staff members possessed invaluable expertise in connecting families to resources, but this knowledge wasn’t evenly distributed or accessible to families around the clock. The organization needed a solution that could provide the expertise of their best staff members through a self-service experience that could communicate in any language. This challenge was particularly urgent given Brightpoint’s commitment to prevention and their goal of reaching families before small problems escalate into major crises.&lt;/p&gt; 
&lt;h3&gt;Building an intelligent solution with AWS&lt;/h3&gt; 
&lt;p&gt;CARA was built using AWS services in close partnership with the &lt;a href="https://aws.amazon.com/government-education/cloud-innovation-centers/" target="_blank" rel="noopener"&gt;AWS Cloud Innovation Center&lt;/a&gt; and &lt;a href="https://aws.amazon.com/professional-services/" target="_blank" rel="noopener"&gt;AWS Professional Services&lt;/a&gt; teams. The custom solution uses multiple AWS technologies to deliver a comprehensive family support system:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt; powers &lt;a href="https://aws.amazon.com/generative-ai/" target="_blank" rel="noopener"&gt;generative AI&lt;/a&gt; and natural language interactions, enabling CARA to communicate naturally with families in any language.&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/dynamodb/" target="_blank" rel="noopener"&gt;Amazon DynamoDB&lt;/a&gt; provides secure and scalable data storage for the centralized referral database.&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/cloudtrail/" target="_blank" rel="noopener"&gt;AWS CloudTrail&lt;/a&gt; provides governance and monitoring across the entire system.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Every component of CARA runs on AWS infrastructure, providing the reliability, security, and flexibility needed to scale statewide as Brightpoint expands access to more families across Illinois. Behind the scenes, CARA powers a centralized, continually updated referral database that captures real-time insights into community needs, identifies trends by geography, and gives staff the tools to rate and refine referral partners.&lt;/p&gt; 
&lt;p&gt;Resources from the Children’s Health Innovation Award, a category of the &lt;a href="https://aws.amazon.com/government-education/nonprofits/aws-imagine-grant-program/childrens-health-innovation-award/" target="_blank" rel="noopener"&gt;AWS Imagine Grant&lt;/a&gt;, helped Brightpoint access AWS experts to design and implement CARA from start to finish, so the solution could meet the complex needs of families while maintaining the highest standards for security and scalability.&lt;/p&gt; 
&lt;h3&gt;Preparing for statewide impact&lt;/h3&gt; 
&lt;p&gt;CARA is currently available to participants in Brightpoint’s Doula and Home Visiting programs in Central and Northern Illinois as part of a pilot. This pilot marks the first step in a phased expansion across the state, ultimately aiming to make CARA available to the more than 37,000 children and families that Brightpoint serves in programs throughout Illinois.&lt;/p&gt; 
&lt;p&gt;CARA’s mission-driven design addresses three critical areas. First, it makes it easier for families to find and access benefits and services such as home visiting, mental health, childcare, and financial stability programs. Second, it uses AI responsibly to deliver personalized interactions. Third, it collects structured data with consent to improve referral follow-through and provides system-level insight into unmet needs.&lt;/p&gt; 
&lt;p&gt;Brightpoint’s staff are eager to see the value this tool will bring to the families they work with, as well as the support it will provide to staff who help navigate complex systems to meet family needs. The organization plans to measure several key indicators as CARA rolls out, including tracking how many people sign up to use the tool, what kinds of questions or services they’re searching for, and whether they’re successfully getting connected to resources as a result. Over time, Brightpoint will analyze gaps and trends in the types of support being requested to help identify where additional funding and community resources are most needed.&lt;/p&gt; 
&lt;h3&gt;Lessons learned for nonprofit technology implementation&lt;/h3&gt; 
&lt;p&gt;For nonprofits exploring similar technology projects, Brightpoint’s team emphasizes the importance of staying grounded in both purpose and process while keeping an eye on the tradeoffs made at every step of design and build. Even a relatively narrow use case such as helping families find the right referral can reveal unexpected complexity.&lt;/p&gt; 
&lt;p&gt;The organization discovered that data preparation required significantly more manual work than anticipated. After AI-assisted cleanup fell short, the team spent 2 months collecting the data and manually reviewing and cleaning 1,400 records. Brightpoint also had to make nuanced design choices about how many responses to surface, in what order, and whether to show exact matches or related services families might not think to ask for.&lt;/p&gt; 
&lt;p&gt;Brightpoint’s advice centers on expecting iteration, embracing imperfection, and remembering that staff are often key users and cocreators in the process. The organization acknowledges that many more lessons are sure to come as they continue rolling CARA out to families across Illinois, but their partnership with AWS has positioned them to adapt and scale their solution as they learn from real-world implementation.&lt;/p&gt; 
&lt;h3&gt;How you can support Brightpoint&lt;/h3&gt; 
&lt;p&gt;Visit &lt;a href="https://www.brightpoint.org/" target="_blank" rel="noopener"&gt;Brightpoint&lt;/a&gt; to learn more about and support their work to advance the wellbeing of children and families. Even small donations make a big difference!&lt;/p&gt; 
&lt;p&gt;To learn more about how AWS helps public sector organizations deploy AI-driven solutions, &lt;a href="https://aws.amazon.com/government-education/contact/" target="_blank" rel="noopener"&gt;connect with the AWS Public Sector team today&lt;/a&gt;.&lt;/p&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>CMMC Level 2 compliance on AWS: Why control ownership is where organizations struggle</title>
		<link>https://aws.amazon.com/blogs/publicsector/cmmc-level-2-compliance-on-aws-why-control-ownership-is-where-organizations-struggle/</link>
		
		<dc:creator><![CDATA[Alexandria Burke]]></dc:creator>
		<pubDate>Wed, 29 Apr 2026 21:04:35 +0000</pubDate>
				<category><![CDATA[Amazon GuardDuty]]></category>
		<category><![CDATA[Amazon Simple Storage Service (S3)]]></category>
		<category><![CDATA[AWS CloudTrail]]></category>
		<category><![CDATA[AWS GovCloud (US)]]></category>
		<category><![CDATA[AWS Security Hub]]></category>
		<category><![CDATA[Public Sector]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[AWS Shared Responsibility Model]]></category>
		<guid isPermaLink="false">c872242cf424e478ffdcb543feeb307986f67c98</guid>

					<description>This post brings guidance on Customer Responsibility Matrices (CRMs), authorization boundary definitions, and multi-provider control ownership into a single actionable framework for defense contractors preparing for third-party assessment.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30899 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/CMMC-Level-2-compliance-on-AWS-Why-control-ownership-is-where-organizations-struggle.png" alt="CMMC Level 2 compliance on AWS: Why control ownership is where organizations struggle" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://dodcio.defense.gov/cmmc/About/" target="_blank" rel="noopener"&gt;Cybersecurity Maturity Model Certification (CMMC) Level 2&lt;/a&gt; compliance challenges stem not from a lack of security controls but from a fundamental misunderstanding of who owns them. This post brings guidance on Customer Responsibility Matrices (CRMs), authorization boundary definitions, and multi-provider control ownership into a single actionable framework for defense contractors preparing for third-party assessment.&lt;/p&gt; 
&lt;p&gt;With Certified Third-Party Assessment Organization (C3PAO)-assessed certifications becoming a condition of contract award and noncompliance carrying risks including disqualification and &lt;a href="https://www.justice.gov/civil/false-claims-act" target="_blank" rel="noopener"&gt;False Claims Act&lt;/a&gt; liability, the window for preparation is closing.&lt;/p&gt; 
&lt;p&gt;If your organization handles Controlled Unclassified Information (CUI) and operates on &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;, you’re well positioned to address CMMC Level 2 requirements, but only if you understand the shared responsibility model and how control ownership works in practice.&lt;/p&gt; 
&lt;p&gt;The critical takeaway is that hosting in a &lt;a href="https://www.fedramp.gov/" target="_blank" rel="noopener"&gt;Federal Risk and Authorization Management Program (FedRAMP)&lt;/a&gt;-authorized environment doesn’t automatically satisfy CMMC requirements. Every control must answer three questions: who implements the control, where it operates, and what evidence proves it.&lt;/p&gt; 
&lt;p&gt;Organizations that can’t clearly answer all three of these questions for each of the 110 &lt;a href="https://www.nist.gov/" target="_blank" rel="noopener"&gt;National Institute of Standards and Technology (NIST) Special Publication (SP) 800-171 Rev 2&lt;/a&gt; controls could struggle during assessment. This is especially consequential in multi-provider environments where managed service providers (MSPs) build enclaves on top of cloud service provider (CSP) infrastructure. These enclaves fall outside the CSP’s FedRAMP authorization boundary and must independently demonstrate compliance with all &lt;a href="https://dodcio.defense.gov/Portals/0/Documents/Library/FEDRAMP-EquivalencyCloudServiceProviders.pdf" target="_blank" rel="noopener"&gt;323 FedRAMP Moderate Baseline controls&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;This post also addresses boundary drift, which is the unmanaged expansion of an assessment scope as new applications and services are deployed. Boundary drift can silently erode compliance posture over time. It’s recommended that organizations embed continuous boundary validation in change management workflows and request explicit, independently maintained CRMs from every provider in their chain. The absence of a provider CRM is treated as a risk factor: Under CMMC, if responsibility isn’t explicitly defined and provable, then the control isn’t met.&lt;/p&gt; 
&lt;h3&gt;CMMC Level 2 timeline: Where we stand&lt;/h3&gt; 
&lt;p&gt;The regulatory framework behind CMMC 2.0 is now operational. The &lt;a href="https://www.ecfr.gov/current/title-32/subtitle-A/chapter-I/subchapter-G/part-170" target="_blank" rel="noopener"&gt;CMMC Program Rule (32 CFR Part 170)&lt;/a&gt; became effective on December 16, 2024, and CMMC assessments formally began on January 2, 2025. &lt;a href="https://www.acquisition.gov/dfars" target="_blank" rel="noopener"&gt;The Acquisition Rule (48 CFR)&lt;/a&gt; followed: published on September 10, 2025, it took effect on November 10, 2025. It was codified in Defense Federal Acquisition Regulation Supplement (DFARS) clauses &lt;a href="https://www.acquisition.gov/dfars/252.204-7021-contractor-compliance-cybersecurity-maturity-model-certification-level-requirements." target="_blank" rel="noopener"&gt;252.204-7021&lt;/a&gt; and &lt;a href="https://www.acquisition.gov/dfars/252.204-7025-notice-cybersecurity-maturity-model-certification-level-requirements." target="_blank" rel="noopener"&gt;252.204-7025&lt;/a&gt;. This means CMMC compliance is a mandatory, enforceable element of applicable contracts.&lt;/p&gt; 
&lt;p&gt;The rollout follows a deliberate four-phase schedule, as illustrated in Figure 1:&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/Figure-1-CMMC-Level-2-phased-rollout-timeline.jpg" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30894 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/Figure-1-CMMC-Level-2-phased-rollout-timeline.jpg" alt="A timeline diagram showing four phases of CMMC rollout from November 2025 through November 2028 and beyond, with each phase expanding the scope of Level 2 certification requirements." width="576" height="158"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 1: CMMC Level 2 phased rollout timeline&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;The &lt;a href="https://www.ecfr.gov/current/title-32/subtitle-A/chapter-I/subchapter-G/part-170" target="_blank" rel="noopener"&gt;phased rollout&lt;/a&gt; progresses as follows:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Phase 1, November 2025 to November 2026&lt;/strong&gt; – Requires Level 1 and Level 2 self-assessments with scores uploaded to the Supplier Performance Risk System (SPRS) in most new solicitations.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Phase 2, November 2026 to November 2027&lt;/strong&gt; – The critical inflection point where C3PAO-assessed Level 2 certifications become a condition of award. Only companies authorized by &lt;a href="https://cyberab.org/" target="_blank" rel="noopener"&gt;The Cyber AB&lt;/a&gt; and listed in the &lt;a href="https://cmmcmarketplace.org/" target="_blank" rel="noopener"&gt;CMMC Marketplace&lt;/a&gt; can issue certifications. Contractors are advised to coordinate with a C3PAO at least 6 to 12 months in advance.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Phase 3, November 2027 to November 2028&lt;/strong&gt; – Extends Level 2 certification requirements to options on existing contracts.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Phase 4, November 2028 onward&lt;/strong&gt; – Achieves full implementation across all applicable contracts except commercial off-the-shelf (COTS) contracts.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;One point that organizations often overlook is that &lt;a href="https://www.acquisition.gov/dfars/252.204-7012-safeguarding-covered-defense-information-and-cyber-incident-reporting." target="_blank" rel="noopener"&gt;DFARS 252.204-7012&lt;/a&gt; remains independently enforceable. CMMC Level 2 validates the same 110 NIST SP 800-171 Rev 2 controls that DFARS 252.204-7012 has required since 2017. CMMC adds a third-party verification layer to existing obligations. Noncompliance carries risks including False Claims Act liability, making it important that organizations treat compliance preparation as an urgent operational priority rather than a future concern.&lt;/p&gt; 
&lt;h3&gt;Why this matters for organizations using AWS&lt;/h3&gt; 
&lt;p&gt;CMMC Level 2 doesn’t mandate a specific AWS Region. It requires that you implement NIST SP 800-171 controls, and that any CSP you use meet the FedRAMP Moderate Baseline or an equivalent, as required by DFARS 252.204-7012. Both &lt;a href="https://aws.amazon.com/govcloud-us/" target="_blank" rel="noopener"&gt;AWS GovCloud (US)&lt;/a&gt; and the commercial US East/West Regions meet this requirement. The right deployment target depends on your specific contract requirements:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;If your contract involves &lt;a href="https://www.pmddtc.state.gov/ddtc_public?id=ddtc_kb_article_page&amp;amp;sys_id=24d528fddbfc930044f9ff621f961987" target="_blank" rel="noopener"&gt;International Traffic in Arms Regulations (ITAR)&lt;/a&gt; or &lt;a href="https://www.bis.gov/regulations/ear" target="_blank" rel="noopener"&gt;Export Administration Regulations (EAR)&lt;/a&gt;-controlled data, deploy in AWS GovCloud (US). ITAR workloads require the jurisdictional isolation that AWS GovCloud (US) provides.&lt;/li&gt; 
 &lt;li&gt;If your contract requires Impact Level 4 or 5, deploy in AWS GovCloud (US). Commercial regions only support Impact Level 2.&lt;/li&gt; 
 &lt;li&gt;If your workloads involve CUI without ITAR/EAR restrictions and without Impact Level 4/5 requirements, you can deploy in commercial US East/West Regions using &lt;a href="https://www.nist.gov/standardsgov/compliance-faqs-federal-information-processing-standards-fips" target="_blank" rel="noopener"&gt;Federal Information Processing Standards (FIPS)&lt;/a&gt;-validated endpoints and still meet CMMC Level 2 requirements.&lt;/li&gt; 
 &lt;li&gt;If you have mixed workloads with different regulatory overlays, consider a hybrid approach: AWS GovCloud (US) for ITAR programs and commercial regions for standard CUI workloads. Use separate &lt;a href="https://aws.amazon.com/organizations/" target="_blank" rel="noopener"&gt;AWS Organizations&lt;/a&gt; to maintain clear boundaries.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Regardless of which region you choose, services such as &lt;a href="https://aws.amazon.com/cloudtrail/" target="_blank" rel="noopener"&gt;AWS CloudTrail&lt;/a&gt;, &lt;a href="https://aws.amazon.com/config/" target="_blank" rel="noopener"&gt;AWS Config&lt;/a&gt;, &lt;a href="https://aws.amazon.com/security-hub/" target="_blank" rel="noopener"&gt;AWS Security Hub&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/guardduty/" target="_blank" rel="noopener"&gt;Amazon GuardDuty&lt;/a&gt; are designed to support the monitoring, logging, and configuration management that CMMC controls require. However, the availability of these services doesn’t mean the controls are met. Your organization must configure, operate, and evidence these controls, and clearly document who owns each one.&lt;/p&gt; 
&lt;p&gt;The sections that follow explain the shared responsibility model in the context of CMMC, how to build and maintain a CRM, how to scope your assessment boundary, and how to manage compliance across multiple providers.&lt;/p&gt; 
&lt;h3&gt;The shared responsibility model in practice: CSPs, MSPs, and your organization&lt;/h3&gt; 
&lt;p&gt;Perhaps the most common misconception in cloud-based CMMC compliance is that hosting in a FedRAMP-authorized environment automatically satisfies security requirements. It doesn’t. Control inheritance is determined by boundary placement, not by the cloud provider. Deploying in any FedRAMP-authorized AWS Region doesn’t automatically mean controls are inherited. The Organization Seeking Assessment (OSA) must define where its boundary begins and ends.&lt;/p&gt; 
&lt;p&gt;The &lt;a href="https://aws.amazon.com/compliance/shared-responsibility-model/" target="_blank" rel="noopener"&gt;AWS Shared Responsibility Model&lt;/a&gt; operates on a layered principle. Inherited controls exist only at the AWS infrastructure layer, covering physical security, hypervisor management, and core network infrastructure. All higher layers introduce shared or customer-owned responsibility.&lt;/p&gt; 
&lt;p&gt;This means that solution configuration, access controls, monitoring, and operational controls all carry obligations that fall partially or entirely on the customer and their service providers. As illustrated in Figure 2, control responsibility is distributed across four layers in a typical multi-provider CMMC environment:&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/Figure-2-Multi-provider-control-ownership-model.jpg" target="_blank" rel="noopener"&gt;&lt;img loading="lazy" class="size-full wp-image-30895 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/29/Figure-2-Multi-provider-control-ownership-model.jpg" alt="A layered diagram showing how control responsibility is distributed across four layers: CSP (AWS) at the foundation, MSP enclave, Managed Security Service Provider (MSSP) security services, and the customer organization at the top. Each layer introduces new shared or customer-owned responsibilities." width="576" height="380"&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Figure 2: Multi-provider control ownership model&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;The CSP (AWS) provides the foundational infrastructure with inherited controls, but each additional layer, including the MSP enclave, the Managed Security Service Provider (MSSP) security services, and the organization itself, introduces new shared or customer-owned responsibilities that must be independently documented and evidenced.&lt;/p&gt; 
&lt;h3&gt;The MSP enclave challenge&lt;/h3&gt; 
&lt;p&gt;One of the most consequential misunderstandings in the defense industrial base involves MSP enclaves. FedRAMP authorization applies to a specific system boundary, not to everything built on top of it. An MSP isn’t a CSP. When an MSP builds an enclave on AWS, they introduce new configurations, new access controls, new monitoring layers, and new administrative functions. These aren’t covered by the CSP’s FedRAMP authorization.&lt;/p&gt; 
&lt;p&gt;The MSP effectively creates a new authorization boundary, and that boundary must independently meet FedRAMP Moderate or an equivalent with full evidence. However, some MSPs market “CMMC-compliant enclaves” with “fully inherited controls,” a claim that might not withstand assessment scrutiny.&lt;/p&gt; 
&lt;h3&gt;FedRAMP equivalency requirements&lt;/h3&gt; 
&lt;p&gt;The Department of Defense’s December 2023 memorandum tightened FedRAMP equivalency requirements. Under the updated requirements, a cloud service offering is a FedRAMP Moderate equivalent only if it achieves 100% compliance with all 323 FedRAMP Moderate Baseline security controls, which are validated through an assessment by a FedRAMP-recognized Third-Party Assessment Organization (3PAO).&lt;/p&gt; 
&lt;p&gt;CSPs must either be FedRAMP Moderate/High-authorized or secure a third-party assessment confirming compliance with all FedRAMP Moderate Baseline controls. Defense contractors bear the verification responsibility: DFARS 252.204-7012 mandates that contractors “require and ensure” their CSP’s compliance.&lt;/p&gt; 
&lt;h3&gt;The CRM is a nonnegotiable compliance artifact&lt;/h3&gt; 
&lt;p&gt;If there’s one document that separates organizations that pass CMMC assessments from those that don’t, it’s the CRM. A CRM defines which controls are inherited from the CSP, which are shared between the CSP and the customer, and which are fully implemented by the Organization Seeking Certification (OSC) or MSP, along with the implementation location and the evidence source. Under CMMC, the shared responsibility matrix formally documents accountability for each of the 110 NIST SP 800-171 Rev 2 requirements and their 320 assessment objectives.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Three questions every control must answer&lt;/strong&gt;&lt;br&gt; As mentioned at the beginning of this post, a CRM must define three things for every control:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Who implements it&lt;/li&gt; 
 &lt;li&gt;Where it is implemented&lt;/li&gt; 
 &lt;li&gt;What evidence supports it&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;This per-control traceability is what assessors look for. From an assessor’s perspective, the following questions are nonnegotiable: Where is each control implemented? Who owns it? What evidence proves it? Is this control inherited, shared, or customer-managed? If the answer is unclear, the control isn’t met.&lt;/p&gt; 
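&lt;p&gt;The per-control traceability described here can be captured in a simple machine-readable structure. The following sketch uses illustrative field names and example controls, not an official CRM schema; it records owner, implementation location, and evidence source for each control, and flags any entry that would fail the three-question test:&lt;/p&gt; 

```python
from dataclasses import dataclass

# Illustrative CRM entry: field names and example values are assumptions,
# not an official CRM schema.
@dataclass
class CrmEntry:
    control_id: str          # NIST SP 800-171 requirement number
    ownership: str           # "inherited", "shared", or "customer"
    implemented_by: str      # who implements the control
    implemented_where: str   # where the control operates
    evidence: str            # what artifact proves it

    def is_assessment_ready(self) -> bool:
        # Under CMMC, a control with unclear ownership, location,
        # or evidence is treated as not met.
        return (self.ownership in {"inherited", "shared", "customer"}
                and bool(self.implemented_by)
                and bool(self.implemented_where)
                and bool(self.evidence))

crm = [
    CrmEntry("3.1.1", "shared", "Customer IAM team", "AWS IAM policies",
             "IAM policy export + quarterly access review log"),
    CrmEntry("3.14.6", "customer", "", "", ""),  # ownership details undefined
]

# Controls that would fail the three-question test.
gaps = [entry.control_id for entry in crm if not entry.is_assessment_ready()]
print(gaps)  # ['3.14.6']
```

&lt;p&gt;A check like this can run as part of documentation review so that every control in the CRM answers who, where, and with what evidence before an assessor asks.&lt;/p&gt; 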
&lt;p&gt;&lt;strong&gt;Layered ownership across multiple providers&lt;/strong&gt;&lt;br&gt; When multiple providers are involved, which is a common scenario in defense contracting, the CRM must reflect a layered ownership model. Inherited controls exist only at the CSP infrastructure layer. All higher layers (MSP enclave, MSSP monitoring, customer application) introduce shared or customer-owned responsibility.&lt;/p&gt; 
&lt;p&gt;MSPs operating enclaves must provide their own CRM with control-by-control mapping and clear ownership between CSP, MSP, and OSA. When MSPs don’t provide their own independent CRM, the gap compounds significantly. Inherited controls require documentation in the System Security Plan (SSP), and shared controls require proof of active configuration.&lt;/p&gt; 
&lt;p&gt;The fundamental rule is unambiguous: If responsibility isn’t explicitly defined, it’s not inherited. And in CMMC, if it’s not provable, it’s not compliant.&lt;/p&gt; 
&lt;h3&gt;Boundary scoping and the five asset categories&lt;/h3&gt; 
&lt;p&gt;Proper boundary scoping is the structural foundation upon which CRM documentation and control ownership rest. The CMMC Level 2 Scoping Guidance, &lt;a href="https://www.ecfr.gov/current/title-32/subtitle-A/chapter-I/subchapter-D/part-170/subpart-B/section-170.19" target="_blank" rel="noopener"&gt;32 CFR §170.19&lt;/a&gt;, defines five asset categories that organizations must use to classify every component within their environment:&lt;/p&gt; 
&lt;table border="2"&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Asset category&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;Assessment implications&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;CUI assets&lt;/td&gt; 
   &lt;td&gt;Process, store, or transmit CUI&lt;/td&gt; 
   &lt;td&gt;Assessed against all 110 Level 2 security requirements&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Security Protection Assets (SPAs)&lt;/td&gt; 
   &lt;td&gt;Provide security functions: security information and event management (SIEM), firewalls, virtual private network (VPN), identity providers&lt;/td&gt; 
   &lt;td&gt;Assessed against relevant Level 2 requirements&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Contractor Risk Managed Assets (CRMAs)&lt;/td&gt; 
   &lt;td&gt;Can but aren’t intended to handle CUI because of policies in place&lt;/td&gt; 
   &lt;td&gt;Documented in SSP; subject to limited assessor checks&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Specialized assets&lt;/td&gt; 
   &lt;td&gt;Can handle CUI but can’t be fully secured: Internet of Things (IoT), operational technology (OT), government-furnished equipment (GFE), test equipment&lt;/td&gt; 
   &lt;td&gt;Managed using risk-based policies&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Out-of-scope assets&lt;/td&gt; 
   &lt;td&gt;Can’t process, store, or transmit CUI&lt;/td&gt; 
   &lt;td&gt;Physically or logically separated from CUI environment&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p style="text-align: center"&gt;&lt;em&gt;Table 1: CMMC Level 2 asset categories per 32 CFR §170.19&lt;/em&gt;&lt;/p&gt; 
&lt;p&gt;All assets must be documented in an asset inventory, addressed in the SSP, and included in the network diagram of the CMMC assessment scope. The classification of each asset directly determines which controls apply and how they must be evidenced. Getting this wrong (for example, not identifying a SPA that provides logging functions) can cascade into multiple control findings during assessment.&lt;/p&gt; 
&lt;p&gt;AWS services can play a role across several of these categories. For example, AWS CloudTrail and AWS Security Hub are likely SPAs in your environment, while &lt;a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener"&gt;Amazon Simple Storage Service (Amazon S3)&lt;/a&gt; buckets storing CUI would be classified as CUI assets. Proper classification of each AWS service in your environment is a prerequisite for accurate boundary scoping.&lt;/p&gt; 
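&lt;p&gt;As a concrete illustration of this classification exercise, the mapping below uses a hypothetical asset inventory with assumed category assignments; the asset names are examples, not a recommendation, and the scrutiny column restates Table 1:&lt;/p&gt; 

```python
# Hypothetical asset inventory mapped to the five 32 CFR 170.19 categories.
# Asset names and classifications are illustrative assumptions.
inventory = {
    "s3-cui-bucket": "CUI",               # stores CUI
    "cloudtrail-trail": "SPA",            # provides a security function
    "security-hub": "SPA",
    "dev-sandbox-account": "CRMA",        # could handle CUI, but policy forbids it
    "factory-iot-sensor": "Specialized",  # can't be fully secured
    "public-website": "Out-of-scope",     # cannot touch CUI
}

# Assessment implications per category (per Table 1).
scrutiny = {
    "CUI": "assessed against all 110 Level 2 requirements",
    "SPA": "assessed against relevant Level 2 requirements",
    "CRMA": "documented in SSP; limited assessor checks",
    "Specialized": "managed via risk-based policies",
    "Out-of-scope": "separated from the CUI environment",
}

for asset, category in sorted(inventory.items()):
    print(f"{asset}: {category} -> {scrutiny[category]}")
```

&lt;p&gt;Keeping the classification in a structured form like this makes it straightforward to generate the asset inventory and SSP scope tables from one source of truth.&lt;/p&gt; 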
&lt;h3&gt;The silent compliance risk of boundary drift&lt;/h3&gt; 
&lt;p&gt;Boundaries can initially be well-defined only to shift over time. Boundary drift is a risk that occurs when an OSA deploys new applications or services into a managed enclave, changing the boundary without formal reassessment. Those new components must be reassessed and incorporated into the CRM, meaning boundaries aren’t static and must be continuously validated.&lt;/p&gt; 
&lt;p&gt;This risk is particularly relevant in dynamic cloud environments where new workloads can be provisioned rapidly. When an OSA deploys applications into an MSP-managed enclave, the boundary changes. Those new components introduce potential new CUI touchpoints, new access paths, and new control requirements. If the CRM and SSP aren’t updated to reflect these changes, the organization’s documented compliance posture diverges from its actual environment, which is a gap that assessors are likely to identify.&lt;/p&gt; 
&lt;p&gt;Common patterns related to boundary drift include treating MSP enclaves as inside the CSP’s FedRAMP boundary, making blanket 100% inherited controls claims, and not distinguishing between infrastructure and managed services. Each of these represents a boundary definition issue that can result in assessment findings across multiple control families.&lt;/p&gt; 
&lt;p&gt;The mitigation strategy requires:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Maintaining explicit CRM and supply chain risk management (SCRM) documentation that maps control ownership across all parties&lt;/li&gt; 
 &lt;li&gt;Continuous boundary validation when changes occur&lt;/li&gt; 
 &lt;li&gt;Verifying that every external service provider that builds an enclave independently meets FedRAMP Moderate or an equivalent&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;AWS services such as AWS Config and AWS Security Hub are designed to support continuous monitoring of your environment’s configuration state, which can help your organization detect when boundary changes occur and trigger the appropriate review processes.&lt;/p&gt; 
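&lt;p&gt;One way to operationalize boundary validation is to diff the assets documented in the SSP against what actually exists in the environment. This is a minimal sketch; in practice the discovered set would come from an inventory source such as AWS Config, and the resource names here are hypothetical:&lt;/p&gt; 

```python
def detect_boundary_drift(documented: set[str], discovered: set[str]):
    """Compare SSP-documented assets against discovered assets.

    Returns (undocumented, missing): resources deployed but never scoped,
    and documented resources that no longer exist.
    """
    undocumented = discovered - documented  # drift: triggers CRM/SSP update
    missing = documented - discovered       # stale documentation
    return undocumented, missing

# Hypothetical inventories for illustration.
ssp_scope = {"s3-cui-bucket", "ecs-app-cluster", "rds-cui-db"}
live_env = {"s3-cui-bucket", "ecs-app-cluster", "lambda-new-intake"}

drift, stale = detect_boundary_drift(ssp_scope, live_env)
print(sorted(drift))  # ['lambda-new-intake'] -> new component needs reassessment
print(sorted(stale))  # ['rds-cui-db'] -> SSP entry no longer matches reality
```

&lt;p&gt;Running a comparison like this on every change window turns boundary validation from a periodic audit into a routine step in the change management workflow.&lt;/p&gt; 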
&lt;h3&gt;Actionable steps for multi-provider compliance&lt;/h3&gt; 
&lt;p&gt;Preparing for a CMMC Level 2 assessment in a multi-provider cloud environment demands disciplined, proactive effort. The following steps represent the most important recommended actions for organizations:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 1: Request explicit CRMs from every provider&lt;/strong&gt;&lt;br&gt; Each CRM must define who implements each control, where it’s implemented, and what evidence supports it. This applies not only to the CSP but independently to every MSP and MSSP in the environment. If an MSP is offering a managed enclave handling CUI, they must provide their own CRM with control-by-control mapping and clear ownership between CSP, MSP, and OSC. It’s recommended that organizations treat the absence of a provider CRM as a risk factor. If your MSP or MSSP isn’t prepared, that risk falls on you.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 2: Contractually require DFARS flow-down and assessment-ready artifacts&lt;/strong&gt;&lt;br&gt; MSP contracts must include cyber incident reporting requirements (72-hour notification), media preservation obligations, and access provisions for damage assessment. Beyond contractual language, MSPs must be obligated to produce evidence artifacts (configuration baselines, access review logs, vulnerability scans) that the C3PAO requires during assessment. Practices must match documentation: If an SSP states that vulnerability assessments are conducted weekly but the actual cadence is quarterly, assessors could identify the discrepancy and the control won’t pass.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 3: Conduct mock assessments before engaging a C3PAO&lt;/strong&gt;&lt;br&gt; Without a mock assessment led by someone experienced in CMMC, gaps in documentation, missing evidence, and scope confusion often go unnoticed until it’s too late. A practice run surfaces issues that can be remediated before the stakes are real. We recommend that organizations maintain version-controlled SSPs, traceable evidence logs, and tailored plans of action and milestones (POA&amp;amp;Ms) as part of their continuous compliance posture.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Step 4: Implement continuous boundary management&lt;/strong&gt;&lt;br&gt; Given the risks of boundary drift, organizations must establish processes that trigger CRM and SSP updates whenever new applications, services, or configurations are deployed into the assessment scope. This isn’t a one-time exercise. It’s an ongoing operational discipline that must be embedded into change management workflows.&lt;/p&gt; 
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;The transition from self-assessed to third-party verified CMMC compliance represents a fundamental shift in how the protection of CUI is validated. With C3PAO assessments becoming a condition of award and contractors facing disqualification for noncompliance, the window for preparation is narrowing.&lt;/p&gt; 
&lt;p&gt;The core lesson is that compliance isn’t about where your data is hosted; it’s about who owns each control, where that control operates, and what evidence proves it. Organizations that assume inheritance without documentation, treat MSP enclaves as extensions of CSP authorization boundaries, or allow boundary drift to go unmanaged are building their compliance posture on a foundation that might not withstand assessment.&lt;/p&gt; 
&lt;p&gt;The path forward requires:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Explicit control ownership through thorough CRMs&lt;/li&gt; 
 &lt;li&gt;Rigorous boundary definition and continuous monitoring&lt;/li&gt; 
 &lt;li&gt;Contractual accountability across every provider in the chain&lt;/li&gt; 
 &lt;li&gt;Evidence-based verification that aligns documentation with operational reality&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;AWS provides a broad set of services and solutions designed to support organizations on this path, from AWS GovCloud (US) and commercial US Regions for hosting CUI workloads, to &lt;a href="https://aws.amazon.com/artifact/" target="_blank" rel="noopener"&gt;AWS Artifact&lt;/a&gt; for accessing AWS compliance reports, to services such as AWS Security Hub, AWS Config, AWS CloudTrail, and Amazon GuardDuty that are designed to help support continuous monitoring and evidence collection. CMMC compliance is achievable, but only for organizations willing to own every layer of their security posture.&lt;/p&gt; 
&lt;p&gt;For more information on CMMC compliance on AWS, contact &lt;a href="https://aws.amazon.com/compliance/cmmc/" target="_blank" rel="noopener"&gt;AWS compliance support&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Next steps and resources&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/security-assurance-services/" target="_blank" rel="noopener"&gt;AWS Security Assurance Services&lt;/a&gt; – Reach out to AWS Security Assurance Services to speak with a trusted advisor to support your CMMC certification journey&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://dodcio.defense.gov/CMMC/about/" target="_blank" rel="noopener"&gt;CMMC program overview&lt;/a&gt; – Official CMMC program page with the latest guidance, rules, and assessment information&lt;/li&gt; 
 &lt;li&gt;&lt;a href="https://aws.amazon.com/compliance/shared-responsibility-model" target="_blank" rel="noopener"&gt;AWS Shared Responsibility Model&lt;/a&gt; – Understand the division of security responsibilities between AWS and the customer&lt;/li&gt; 
&lt;/ul&gt;</content:encoded>
					
		
		
			</item>
		<item>
		<title>How Leidos enhanced intelligent document processing using agentic AI on AWS</title>
		<link>https://aws.amazon.com/blogs/publicsector/how-leidos-enhanced-intelligent-document-processing-using-agentic-ai-on-aws/</link>
		
		<dc:creator><![CDATA[Gabriel Fuentes]]></dc:creator>
		<pubDate>Wed, 29 Apr 2026 20:27:48 +0000</pubDate>
				<category><![CDATA[Amazon Bedrock]]></category>
		<category><![CDATA[Amazon Bedrock Guardrails]]></category>
		<category><![CDATA[AWS Fargate]]></category>
		<category><![CDATA[AWS GovCloud (US)]]></category>
		<category><![CDATA[Public Sector]]></category>
		<guid isPermaLink="false">73544c1c22136ae59e59c0bde3c24f7f5ddf1834</guid>

					<description>This post shares how Leidos enhanced ManagedX by adopting multi-agent workflow patterns using the open source Strands Agents SDK, and how AWS Enterprise Support helped guide that evolution.</description>
										<content:encoded>&lt;p&gt;&lt;img loading="lazy" class="size-full wp-image-30824 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/9e6a55b6b4563e652a23be9d623ca5055c356940/2026/04/26/How-Leidos-enhanced-intelligent-document-processing-using-agentic-AI-on-AWS.png" alt="How Leidos enhanced intelligent document processing using agentic AI on AWS" width="1152" height="576"&gt;&lt;/p&gt; 
&lt;p&gt;When government agencies process millions of documents monthly—each with unique compliance requirements—traditional intelligent document processing (IDP) pipelines hit a wall. &lt;a href="https://www.leidos.com/" target="_blank" rel="noopener"&gt;Leidos&lt;/a&gt; faced exactly this challenge with their &lt;a href="https://leidos.widen.net/s/flxrgctsrl" target="_blank" rel="noopener"&gt;ManagedX&lt;/a&gt; platform, an AI-powered, cloud-based IDP solution built on &lt;a href="https://aws.amazon.com/" target="_blank" rel="noopener"&gt;Amazon Web Services (AWS)&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;This post shares how Leidos enhanced ManagedX by adopting multi-agent workflow patterns using the open source &lt;a href="https://github.com/strands-agents/sdk-python" target="_blank" rel="noopener"&gt;Strands Agents SDK&lt;/a&gt;, and how &lt;a href="https://aws.amazon.com/premiumsupport/plans/enterprise/" target="_blank" rel="noopener"&gt;AWS Enterprise Support&lt;/a&gt; helped guide that evolution.&lt;/p&gt; 
&lt;h3&gt;The challenge: Scaling document processing for diverse government needs&lt;/h3&gt; 
&lt;p&gt;ManagedX was already automating the extraction and processing of structured, semi-structured, and unstructured documents using AI and machine learning (ML). But as Leidos expanded across government agencies—each with distinct document types and compliance requirements—the team identified limitations in the existing architecture.&lt;/p&gt; 
&lt;p&gt;The core challenge was flexibility. Some agencies process thousands of short forms daily; others analyze single documents spanning hundreds of pages. The existing pipeline made it difficult to scale individual capabilities independently or offer a modular, pay-per-component model.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“Our government customers don’t just process one type of document—a single case file might include medical records, legal briefs, financial forms, and handwritten notes—all requiring different extraction strategies and compliance rules,” said Bill Zhou, Cloud DevOps Engineer at Leidos.&lt;/p&gt; 
 &lt;p&gt;“In our previous setup, introducing a new document type or agency-specific workflow meant modifying the core pipeline, which slowed our ability to onboard new missions. We needed a flexible system where each processing capability—optical character recognition (OCR), classification, extraction, validation—could evolve independently and be composed differently for each agency’s needs,” said Justin Miles, Systems Engineer at Leidos.&lt;/p&gt; 
 &lt;p&gt;“We were spending more time adapting the pipeline for each new agency than actually improving our AI capabilities. The multi-agent approach lets us compose processing workflows like building blocks—each agent is specialized, testable, and reusable across missions,” said Zhou.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;Leidos needed a modular architecture that could adapt to each agency’s requirements without rebuilding the pipeline for every new use case.&lt;/p&gt; 
&lt;h3&gt;Why Strands Agents SDK and the workflow pattern&lt;/h3&gt; 
&lt;p&gt;Rather than iterating on the existing monolithic pipeline, Leidos adopted a multi-agent architecture—decomposing the document processing workflow into specialized agents, each responsible for a distinct task: OCR, text extraction, document classification, field extraction (structured data pull), validation, and graph/search indexing. This architecture powered a variety of use cases, from healthcare claims processing and legal e-discovery to financial documentation and insurance underwriting. In every one of these regulated use cases, auditability and deterministic ordering mattered, which helped Leidos establish efficient workflow patterns.&lt;/p&gt; 
&lt;p&gt;The &lt;a href="https://github.com/strands-agents/sdk-python" target="_blank" rel="noopener"&gt;Strands Agents SDK&lt;/a&gt;—an open source, Apache 2.0-licensed framework already powering production features in AWS services—proved to be a natural fit for several reasons.&lt;/p&gt; 
&lt;p&gt;First, Strands supports a &lt;a href="https://aws.amazon.com/blogs/machine-learning/multi-agent-collaboration-patterns-with-strands-agents-and-amazon-nova/" target="_blank" rel="noopener"&gt;workflow agent pattern&lt;/a&gt; designed for sequential, deterministic processing with explicit dependency management. For government document processing, where the sequence matters—classify, then extract, then validate, then store—this determinism provides the auditability and consistency that regulated environments require. Leidos evaluated collaborative and swarm patterns, where agents negotiate dynamically, but chose the workflow pattern because predictable execution was non-negotiable.&lt;/p&gt; 
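&lt;p&gt;To make the fixed ordering concrete, here is a minimal plain-Python sketch of a deterministic workflow with an audit trail. The function names and pipeline structure are illustrative stand-ins, not the Strands Agents SDK API:&lt;/p&gt;

```python
# Illustrative sketch of a deterministic document-processing workflow:
# each "agent" is a plain function, run in a fixed, explicit order,
# with an audit trail of every step. These names are stand-ins, not
# the Strands Agents SDK API.

def classify(doc):
    doc["type"] = "form" if "Form" in doc["text"] else "letter"
    return doc

def extract(doc):
    doc["fields"] = {"pages": len(doc["text"].splitlines())}
    return doc

def validate(doc):
    doc["valid"] = bool(doc["fields"])
    return doc

PIPELINE = [classify, extract, validate]  # classify, then extract, then validate

def run(doc):
    audit = []                       # deterministic, reviewable execution log
    for step in PIPELINE:
        doc = step(doc)
        audit.append(step.__name__)  # record each step for auditability
    return doc, audit

result, audit = run({"text": "Form 1040\nPage two"})
```

&lt;p&gt;In production each step would be a full agent with its own model and tools, but the fixed ordering and per-step logging are the properties that matter in regulated environments.&lt;/p&gt;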
&lt;p&gt;Second, Strands’s Python-native tool definitions made it possible for Leidos to wrap existing capabilities as tools using simple decorators, without rewriting them. Each agent received its own tailored toolset, keeping responsibilities cleanly separated.&lt;/p&gt; 
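&lt;p&gt;As a hedged illustration of that idea, the sketch below wraps a pre-existing extraction function as a registered tool with a simple decorator. The decorator and registry here are hypothetical stand-ins for illustration, not the Strands SDK itself:&lt;/p&gt;

```python
import functools

TOOL_REGISTRY = {}  # hypothetical registry; a real SDK would manage this

def tool(func):
    """Hypothetical stand-in for a decorator that registers an
    existing function as an agent tool without rewriting it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    TOOL_REGISTRY[func.__name__] = wrapper
    return wrapper

# Pre-existing extraction logic, wrapped as-is
@tool
def extract_fields(text: str) -> dict:
    """Pull simple key-value pairs like 'Name: Jane' from OCR output."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields
```

&lt;p&gt;The working function is untouched; only the decorator line changes, which is what keeps each agent’s toolset cleanly separated from the logic it wraps.&lt;/p&gt;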
&lt;p&gt;Third, Strands’s built-in support for &lt;a href="https://github.com/strands-agents/mcp-server" target="_blank" rel="noopener"&gt;Model Context Protocol (MCP)&lt;/a&gt; servers simplified how agents connected to AWS services. Rather than building custom integrations for each service, MCP provided pre-built tool interfaces—reducing development effort and making it simpler to add new service connections as the platform evolves.&lt;/p&gt; 
&lt;p&gt;Finally, Strands is model-agnostic, so Leidos can assign different &lt;a href="https://aws.amazon.com/bedrock/" target="_blank" rel="noopener"&gt;Amazon Bedrock&lt;/a&gt; foundation models to different agents based on task complexity. Lighter models handle classification; more capable models handle validation with personally identifiable information (PII) guardrails powered by &lt;a href="https://aws.amazon.com/bedrock/guardrails/" target="_blank" rel="noopener"&gt;Amazon Bedrock Guardrails&lt;/a&gt;. Each model operates with separate throughput quotas, improving both cost-efficiency and performance isolation. Strands also provides built-in observability, logging every tool execution with its parameters—a critical capability for the auditability that government document processing demands.&lt;/p&gt; 
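&lt;p&gt;One lightweight way to express per-agent model assignment is a simple mapping from agent role to model identifier. The identifiers below are placeholders, not real Amazon Bedrock model IDs:&lt;/p&gt;

```python
# Sketch of per-agent model assignment: lighter models for high-volume,
# simple tasks; more capable models where validation and PII handling
# demand it. The model identifiers are placeholders, not real IDs.
AGENT_MODELS = {
    "classifier": "light-model-id",
    "validator": "capable-model-id",
}

def model_for(agent_name, default="light-model-id"):
    """Look up which foundation model a given agent should use."""
    return AGENT_MODELS.get(agent_name, default)
```

&lt;p&gt;Because each model carries its own throughput quota, a mapping like this also isolates a spike in one agent’s traffic from the others.&lt;/p&gt;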
&lt;p&gt;With the architecture pattern selected, the next challenge was deployment in a regulated environment.&lt;/p&gt; 
&lt;h3&gt;Deploying multi-agent AI in AWS GovCloud with containers&lt;/h3&gt; 
&lt;p&gt;Deploying agentic AI in &lt;a href="https://aws.amazon.com/govcloud-us/" target="_blank" rel="noopener"&gt;AWS GovCloud (US)&lt;/a&gt; presented its own constraints. At the time of implementation, managed agent orchestration services were not yet available in AWS GovCloud (US) Regions, so Leidos needed an alternative approach to run and scale their multi-agent system.&lt;/p&gt; 
&lt;p&gt;Because Strands is lightweight and runs anywhere Python runs, the team chose a container-based deployment using &lt;a href="https://aws.amazon.com/fargate/" target="_blank" rel="noopener"&gt;AWS Fargate&lt;/a&gt;. This container-based pattern proved to be a practical blueprint for running multi-agent workloads in regulated environments where managed services might not yet be available.&lt;/p&gt; 
&lt;h3&gt;The role of AWS Enterprise Support&lt;/h3&gt; 
&lt;p&gt;Throughout this evolution, Leidos worked with their AWS Technical Account Manager (TAM) through &lt;a href="https://aws.amazon.com/premiumsupport/plans/enterprise/" target="_blank" rel="noopener"&gt;AWS Enterprise Support&lt;/a&gt;. Rather than prescribing a solution, the TAM served as a strategic advisor—helping the team evaluate agent patterns, navigate service constraints in GovCloud, and think through trade-offs between deployment approaches.&lt;/p&gt; 
&lt;p&gt;This type of engagement reflects &lt;a href="https://aws.amazon.com/blogs/aws/new-and-enhanced-aws-support-plans-add-ai-capabilities-to-expert-guidance/" target="_blank" rel="noopener"&gt;a broader shift within AWS Enterprise Support&lt;/a&gt; toward proactive, strategic guidance—helping customers connect emerging capabilities like agentic AI with specific mission requirements.&lt;/p&gt; 
&lt;p&gt;The engagement paid off for Leidos, which chose to build on AWS based on a combination of customer alignment, platform maturity, and developer velocity. With many of its government customers heavily invested in AWS—particularly within GovCloud—Leidos could work within those customers’ existing AWS footprint while meeting compliance and security requirements.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“Ultimately, it was AWS’s flexibility, scalability, and partnership support that helped to bring ManagedX to life and allowed us to deliver solutions quickly and effectively for our customers,” said Zhou.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;h3&gt;Lessons learned&lt;/h3&gt; 
&lt;p&gt;For organizations considering a similar path, Leidos’s experience offers several takeaways:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Match the agent pattern to the processing sequence&lt;/strong&gt; – If your document workflow follows a predictable path, the workflow pattern in Strands provides the determinism and auditability that government environments require.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Use existing code as tools&lt;/strong&gt; – Strands’s decorator-based tool definitions and MCP support mean you don’t have to rewrite working logic—wrap it and assign it to the right agent.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Plan for Regional constraints&lt;/strong&gt; – In regulated environments like GovCloud, not every managed service is available on day one. Strands’s lightweight footprint makes container-based orchestration with Fargate and Amazon SQS a viable alternative.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Engage your support relationship early&lt;/strong&gt; – AWS Enterprise Support TAMs can help evaluate emerging patterns and navigate constraints—engaging them early in the architectural decision process can save significant time and rework.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Conclusion&lt;/h3&gt; 
&lt;p&gt;Leidos’s enhancement of ManagedX demonstrates how an established IDP platform can evolve to meet growing government demands through multi-agent AI patterns. By selecting the Strands Agents SDK for its workflow capabilities, built-in tooling support, and model flexibility—and deploying on containers in GovCloud—Leidos built a modular, scalable solution that adapts to diverse agency needs.&lt;/p&gt; 
&lt;blockquote&gt;
 &lt;p&gt;“The modular architecture significantly improved ManagedX’s onboarding speed and processing efficiency, allowing us to establish new agency workflows much faster,” said Miles.&lt;/p&gt; 
 &lt;p&gt;“ManagedX started as a document processing platform, but the agent architecture unlocked something bigger—it’s quickly becoming an intelligent case management system where AI plays an important role from intake to final recommendations. And with our foundation in place, we’re excited to introduce advanced capabilities like multi-level case intelligence, automated artifact generation, and a conversational AI interface that will help us boost insight and decision-making for case workers,” said Zhou.&lt;/p&gt;
&lt;/blockquote&gt; 
&lt;p&gt;For other public sector organizations exploring agentic AI, the path doesn’t require starting from scratch. It starts with understanding your processing requirements, choosing the right agent pattern, and engaging strategic partners like AWS Enterprise Support to help guide the journey.&lt;/p&gt; 
&lt;p&gt;To learn more about how AWS Enterprise Support can help your organization accelerate innovation, contact your &lt;a href="https://aws.amazon.com/government-education/contact/" target="_blank" rel="noopener"&gt;AWS account team&lt;/a&gt; or visit the &lt;a href="https://aws.amazon.com/premiumsupport/plans/enterprise/" target="_blank" rel="noopener"&gt;Enterprise Support homepage&lt;/a&gt;. To learn more about building with the Strands Agents SDK, visit the &lt;a href="https://strandsagents.com/" target="_blank" rel="noopener"&gt;Strands Agents documentation&lt;/a&gt;. Learn more about &lt;a href="https://leidos.widen.net/s/flxrgctsrl" target="_blank" rel="noopener"&gt;ManagedX&lt;/a&gt; and speak with a member of the &lt;a href="mailto:InformationAdvantage@leidos.com" target="_blank" rel="noopener"&gt;ManagedX team&lt;/a&gt;.&lt;/p&gt;</content:encoded>
					
		
		
			</item>
	</channel>
</rss>