<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Centrilogic</title>
	<atom:link href="http://www.centrilogic.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.centrilogic.com/</link>
	<description></description>
	<lastBuildDate>Wed, 08 Apr 2026 13:57:41 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>Centrilogic Achieves the Data Analytics on Microsoft Azure Specialization</title>
		<link>https://www.centrilogic.com/centrilogic-achieves-the-data-analytics-on-microsoft-azure-specialization/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 13:57:41 +0000</pubDate>
				<category><![CDATA[General]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1960</guid>

					<description><![CDATA[<p>Centrilogic has earned the Data Analytics on Microsoft Azure Specialization. This designation validates Centrilogic’s expertise in planning, designing, and delivering advanced analytics on Microsoft Azure.</p>
<p>The post <a href="https://www.centrilogic.com/centrilogic-achieves-the-data-analytics-on-microsoft-azure-specialization/">Centrilogic Achieves the Data Analytics on Microsoft Azure Specialization </a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h4 style="text-align: center;"><strong>Centrilogic Achieves the Data Analytics on Microsoft Azure Specialization</strong></h4>
<p><img fetchpriority="high" decoding="async" class="wp-image-1961 size-medium alignleft" src="https://www.centrilogic.com/wp-content/uploads/2026/04/11SPEC1-300x280.png" alt="" width="300" height="280" srcset="https://www.centrilogic.com/wp-content/uploads/2026/04/11SPEC1-300x280.png 300w, https://www.centrilogic.com/wp-content/uploads/2026/04/11SPEC1-1024x955.png 1024w, https://www.centrilogic.com/wp-content/uploads/2026/04/11SPEC1-768x716.png 768w, https://www.centrilogic.com/wp-content/uploads/2026/04/11SPEC1.png 1043w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p><strong>TORONTO, ON, CANADA – April 8, 2026 –</strong> Centrilogic, a global provider of IT transformation solutions, today announced it has earned the <strong>Data Analytics on Microsoft Azure Specialization</strong>, further validating the company’s expertise in planning, designing, and delivering advanced analytics on Microsoft Azure.</p>
<p>The Data Analytics on Microsoft Azure Specialization recognizes partners that demonstrate extensive capabilities in delivering end-to-end data analytics solutions, from data strategy and architecture design through implementation, optimization, and ongoing support.</p>
<p>To earn this specialization, Centrilogic met Microsoft’s highest standards for analytics service delivery, governance, security, and support. Centrilogic also demonstrated proven customer success, strong technical capabilities, and mature delivery practices across Azure data and analytics services, including modern data platforms, advanced analytics, and AI-driven insights.</p>
<p>This recognition also highlights Centrilogic’s ability to help mid-market and enterprise customers maximize the value of their data assets and build secure, scalable, enterprise-grade analytics platforms that enable transformative insights and better business decision-making.</p>
<blockquote><p><em>“Earning the Data Analytics on Microsoft Azure Specialization underscores our focus on helping organizations build strong, trusted data foundations that enable better decision-making at scale,”</em> said <strong>Doug Tracy, CEO of Centrilogic</strong>. <em>“As data and analytics become central to AI adoption and enterprise strategy, organizations need partners who can design, govern, and operationalize analytics platforms that deliver real business insights and benefits. This recognition reinforces our ability to help customers move from data complexity to clarity with confidence.”</em></p></blockquote>
<p>Centrilogic and Microsoft have built a trusted partnership spanning more than 15 years, working together to help mid-market and enterprise organizations modernize data platforms, accelerate analytics adoption, and enable AI-driven decision-making across the Microsoft cloud ecosystem. Backed by a team of senior Microsoft-certified experts, Centrilogic delivers comprehensive solutions across application modernization, cloud engineering, data &amp; AI, security, DevOps, and managed services.</p>
<p><strong>About Centrilogic:</strong></p>
<p>Centrilogic is a global provider of IT transformation solutions that empower organizations to realize their full digital potential. Armed with capabilities that span the stack – including multi-cloud management, application innovation, data &amp; AI, and IT advisory – Centrilogic delivers resilient end-to-end digital solutions that help companies reshape the role of their technology platforms as business-driving assets. With regional headquarters in Canada, the USA, and India, Centrilogic delivers solutions to innovative companies worldwide. For more information, visit www.centrilogic.com.</p>
<p>The post <a href="https://www.centrilogic.com/centrilogic-achieves-the-data-analytics-on-microsoft-azure-specialization/">Centrilogic Achieves the Data Analytics on Microsoft Azure Specialization </a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How Do I Run AI Workloads When My Regulator Says the Data Can&#8217;t Leave the Country?</title>
		<link>https://www.centrilogic.com/how-to-run-ai-workloads-regulated-environments/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 17:11:55 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1955</guid>

					<description><![CDATA[<p>Regulatory and data‑sovereignty constraints don’t have to stop AI innovation. In this article, Dave Manning explores how organizations can run AI workloads while meeting strict regulatory and data‑residency requirements.</p>
<p>The post <a href="https://www.centrilogic.com/how-to-run-ai-workloads-regulated-environments/">How Do I Run AI Workloads When My Regulator Says the Data Can&#8217;t Leave the Country?</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This article was written by <a href="https://www.linkedin.com/in/davemanninggta/" target="_blank" rel="noopener">Dave Manning</a>, our Director of Architecture. It was first published on <a href="https://www.linkedin.com/pulse/agentic-ai-coming-enterprise-walt-smith-obcme/" target="_blank" rel="noopener">LinkedIn</a>.</p>
<h1>How Do I Run AI Workloads When My Regulator Says the Data Can&#8217;t Leave the Country?</h1>
<p id="ember284" class="ember-view reader-text-block__paragraph">I often find myself in conversations with Canadian executives who are wrestling with the same question: &#8220;We want to deploy AI. Our regulator says the data stays in Canada. Now what?&#8221;</p>
<p id="ember285" class="ember-view reader-text-block__paragraph">It&#8217;s a fair question. And it&#8217;s one that most of the public conversation around AI is failing to answer clearly.</p>
<p>The vendor keynotes talk about transformation. The compliance team talks about risk. The board wants both. The people in the middle (the architects, CIOs, and VPs of IT) are left trying to reconcile two mandates that feel like they&#8217;re on a collision course.</p>
<p id="ember287" class="ember-view reader-text-block__paragraph">I lead an architecture team that designs and delivers cloud and AI environments for regulated Canadian organizations. We&#8217;re not theorizing about what&#8217;s possible. We&#8217;re building what works today, within the constraints that actually exist. This is the ground-level view.</p>
<h3>&#8220;Data Can&#8217;t Leave the Country&#8221; Is More Complicated Than It Sounds</h3>
<p id="ember289" class="ember-view reader-text-block__paragraph">The first thing executives need to understand is that &#8220;Canadian data residency&#8221; isn&#8217;t a single rule. It&#8217;s a stack of overlapping obligations that vary by province, sector, and data classification.</p>
<p id="ember290" class="ember-view reader-text-block__paragraph">At the federal level, PIPEDA doesn&#8217;t actually mandate that data stay in Canada. It requires organizations to inform individuals about cross-border transfers and obtain meaningful consent. That&#8217;s an important distinction, but it&#8217;s also just the baseline.</p>
<p id="ember291" class="ember-view reader-text-block__paragraph">The real teeth come from the provinces and sector regulators. Quebec&#8217;s Law 25 imposes the strictest regime in the country: any transfer of personal information outside the province requires a privacy impact assessment, and the destination jurisdiction must offer protection &#8220;essentially equivalent&#8221; to Quebec law. The penalties, up to $10 million or 2% of worldwide revenue, are not theoretical. Ontario&#8217;s PHIPA is widely believed to mandate that personal health information stay in Canada, but the reality is more nuanced. PHIPA restricts unconsented <em>disclosures</em> to independent third parties outside the province, not the hosting of data by a cloud provider acting as an agent under contract. A hospital using Azure to run an AI diagnostic model isn&#8217;t &#8220;disclosing&#8221; data to Microsoft in the PHIPA sense, provided the right contractual safeguards are in place. The distinction matters because it opens architectural options that many healthcare organizations assume are off the table. Alberta&#8217;s PIPA adds yet another layer.</p>
<p id="ember292" class="ember-view reader-text-block__paragraph">Then there are the sector regulators. OSFI&#8217;s Guideline B-13 (Technology and Cyber Risk Management) is often cited as requiring in-country processing, but it&#8217;s actually a principles-based risk management framework, not a geographic embargo. It requires rigorous oversight of third-party vendors, robust incident reporting, and comprehensive risk documentation. Many financial institutions choose to localize data processing because it simplifies their risk posture, but B-13 itself doesn&#8217;t forbid foreign processing if the controls are in place. What it does mean is that if you process data outside Canada, you own the burden of proving you&#8217;ve managed every associated risk.</p>
<p id="ember293" class="ember-view reader-text-block__paragraph">Separately, OSFI finalized the updated Guideline E-23 (Model Risk Management) in late 2025, with a mandatory effective date of May 1, 2027. This is the one that changes the game for AI. E-23 now explicitly covers AI and machine learning systems, requiring enterprise-wide model identification, risk assessment, deployment controls, continuous monitoring, and formal decommissioning processes. If you&#8217;re in financial services, your AI models will need the same governance rigor as your credit risk models. This isn&#8217;t guidance. It&#8217;s regulatory expectation with a hard deadline.</p>
<p id="ember294" class="ember-view reader-text-block__paragraph">The point is: when a regulator says &#8220;the data stays here,&#8221; the specific meaning depends entirely on who your regulator is, what province you&#8217;re in, and what kind of data you&#8217;re handling. Most organizations underestimate this complexity until they&#8217;re already deep into an AI initiative.</p>
<h3>PBMM: The Standard That Gates the Public Sector, and Increasingly Everything Else</h3>
<p id="ember296" class="ember-view reader-text-block__paragraph">If you work with the Canadian federal government, you already know Protected B, Medium Integrity, Medium Availability (PBMM). It&#8217;s the cloud security standard defined in ITSG-33 and evolved into the CCCS Medium Cloud Profile. It&#8217;s the bar your cloud environment has to clear to handle non-classified government information.</p>
<p id="ember297" class="ember-view reader-text-block__paragraph">What&#8217;s changed is that PBMM is no longer just a public sector concern. We see it referenced in provincial procurement, in RFPs from regulated private sector organizations, and as a proxy standard for &#8220;serious about security&#8221; in industries that don&#8217;t have their own cloud certification framework.</p>
<p id="ember298" class="ember-view reader-text-block__paragraph">The major cloud providers have all invested in PBMM assessments. Azure was one of the first to qualify. AWS reports 162 services assessed against CCCS Medium requirements as of late 2025. Google Cloud is approved for Protected B Medium and High Value Asset workloads. IBM maintains PBMM in their Toronto and Montreal facilities.</p>
<p>But here&#8217;s the gap that matters: PBMM certification for AI-specific services (Azure OpenAI, Cognitive Services, the inference endpoints you actually need) is not published as explicitly as it is for general IaaS and PaaS services. General cloud PBMM is not the same as AI service PBMM. This distinction is where deals stall and projects get delayed. If you&#8217;re building an AI workload for a regulated environment, you need to understand exactly which services have been assessed and which are running on assumption.</p>
<h3>What You Can Actually Run Today</h3>
<p id="ember301" class="ember-view reader-text-block__paragraph">Let me be direct about the current state, because I think the market needs more honesty here and less aspiration.</p>
<p id="ember302" class="ember-view reader-text-block__paragraph"><strong>Azure OpenAI</strong> offers models in Canada Central and Canada East, but model availability varies significantly by deployment type. Established models like GPT-4 and GPT-4o support standard regional deployments with strict data residency: both inference and data processing happen exclusively within Canadian data centres. However, newer frontier models (the GPT-4.1 series, advanced reasoning models like o1 and o3) are currently unavailable for standard regional deployment in Canada Central. To access these, architects are pushed toward &#8220;Global Standard&#8221; or &#8220;Data Zone&#8221; deployment types, which dynamically route inference across Azure&#8217;s global infrastructure, fundamentally breaking strict data residency. This is a critical distinction: you can run production AI in-region today, but the most capable models may require you to choose between cognitive performance and geographic isolation.</p>
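<p>To make that deployment-type distinction concrete, the following Azure CLI sketch shows where the choice is actually made. It is illustrative only: the resource names are placeholders, and the model version and SKU values should be verified against current Azure OpenAI documentation for Canada Central.</p>
<pre><code># Illustrative sketch; names (rg-ai-cacn, aoai-cacn, gpt4o-prod) are placeholders.

# Create the Azure OpenAI resource in Canada Central.
az cognitiveservices account create \
  --name aoai-cacn \
  --resource-group rg-ai-cacn \
  --location canadacentral \
  --kind OpenAI \
  --sku S0

# Deploy a model with the regional "Standard" SKU, which keeps inference in-region.
# Choosing "GlobalStandard" or "DataZoneStandard" here is what opts the deployment
# into cross-region routing and breaks strict residency.
az cognitiveservices account deployment create \
  --name aoai-cacn \
  --resource-group rg-ai-cacn \
  --deployment-name gpt4o-prod \
  --model-name gpt-4o \
  --model-version "2024-08-06" \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 10
</code></pre>
<p>The point of the sketch: residency comes down to a single SKU parameter on the deployment, which is exactly the kind of setting that should be enforced deliberately rather than left to a default.</p>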
<p id="ember303" class="ember-view reader-text-block__paragraph"><strong>AWS Bedrock</strong> offers foundation model access from ca-central-1 (Toronto) and ca-west-1 (Calgary), including Anthropic&#8217;s Claude models. But there&#8217;s a critical nuance: AWS achieves this through cross-region inference. Your data at rest (logs, knowledge bases, stored configurations) stays in Canada, but the inference processing itself may route to US regions. For workloads where the regulator cares about where compute happens, not just where data sits, this distinction matters. You need to understand whether your compliance framework draws the line at data residency or processing residency.</p>
<p id="ember304" class="ember-view reader-text-block__paragraph"><strong>Anthropic&#8217;s Claude API</strong> (direct, not through a cloud provider) currently processes all data in the United States. There is no Canadian data residency option. Anthropic has introduced EU data residency for API customers, which suggests regional expansion is on the roadmap, but nothing is announced for Canada. For organizations that need Claude&#8217;s capabilities within Canadian borders today, the path runs through Bedrock, with the cross-region inference caveat above.</p>
<p id="ember305" class="ember-view reader-text-block__paragraph"><strong>Google Vertex AI</strong> offers Claude and Gemini models in the Toronto region, with regional endpoints that provide guaranteed data routing through specific geographic regions. This adds another option, though at a 10% pricing premium over global endpoints.</p>
<p id="ember306" class="ember-view reader-text-block__paragraph">The honest summary: you can run production AI inference in Canada today, with real data residency guarantees, if you choose the right platform, the right models, and the right deployment configuration. But the most capable frontier models often require global routing that breaks strict residency, and the managed orchestration services are either deprecated or unavailable in-country. The gap is narrowing fast, but if you&#8217;re making architecture decisions right now, make them based on what&#8217;s certified and available, not what&#8217;s on a roadmap.</p>
<h3>The Architecture Decisions That Actually Matter</h3>
<p id="ember308" class="ember-view reader-text-block__paragraph">These are the conversations that separate a successful sovereign AI deployment from one that stalls in procurement review.</p>
<p id="ember309" class="ember-view reader-text-block__paragraph"><strong>Region selection is a governance decision, not a performance decision.</strong> Lock Canada Central as your AI compute region. Document the rationale. Make the decision auditable. This sounds obvious, but I&#8217;ve seen organizations leave region selection to a developer&#8217;s default configuration and then scramble to explain it to an auditor six months later.</p>
<p id="ember310" class="ember-view reader-text-block__paragraph"><strong>Classify your data before you select your models.</strong> Know what&#8217;s Protected B, what&#8217;s sensitive personal information, what&#8217;s proprietary but not regulated, and what&#8217;s public. Then map each classification to the services that are certified to handle it. Most organizations want to start with the model and work backward to the data. That&#8217;s exactly the wrong order for regulated environments.</p>
<p id="ember311" class="ember-view reader-text-block__paragraph"><strong>Understand that &#8220;available in Canada&#8221; means different things on different platforms.</strong> Azure OpenAI in Canada Central runs inference in-region for supported models, but frontier models may require Global routing that breaks residency guarantees. Bedrock cross-region inference from ca-central-1 keeps data at rest in Canada but may process inference in the US. Anthropic&#8217;s direct API runs entirely in the US. These are not equivalent from a compliance perspective, even though all three give you access to powerful models. Your architecture needs to reflect the actual data flow, not the marketing summary.</p>
<p id="ember312" class="ember-view reader-text-block__paragraph"><strong>Stay current on platform deprecations.</strong> If you&#8217;ve been planning agent-style orchestration around Azure OpenAI&#8217;s Assistants API, stop. Microsoft has officially deprecated the Assistants API, with complete retirement scheduled for August 26, 2026. It has been replaced by the Microsoft Foundry Agents Service, a modernized framework with stronger observability, version control, and enterprise governance. Any organization building custom middleware while waiting for Assistants to land in Canada is investing in a dead end. Architects should evaluate Foundry Agents now and plan migration paths for any existing Assistants-based work. More broadly, this is a reminder that the AI platform landscape is moving fast enough to invalidate architectural assumptions within months. Build modularly, stay close to the deprecation notices, and don&#8217;t bet your architecture on a single service remaining stable.</p>
<p id="ember313" class="ember-view reader-text-block__paragraph"><strong>Contractual data residency is not the same as technical data residency.</strong> Azure&#8217;s contractual commitments around Canadian data processing are strong. But your architecture has to enforce them. Private Endpoints, Azure Policy definitions that deny non-Canadian resource creation, network security groups, diagnostic settings that keep logs in-region: these are the controls your auditor will actually ask about. A contract tells your regulator you intend to keep data in Canada. Your technical controls prove you did.</p>
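<p>As one illustration of enforcing residency technically rather than contractually, here is a minimal Azure Policy sketch that denies creation of resources outside the Canadian regions. It is a simplified example, not a production policy: Azure&#8217;s built-in &#8220;Allowed locations&#8221; policy covers the same ground and also handles resources that report a &#8220;global&#8221; location, so check the details against current Azure Policy documentation before relying on it.</p>
<pre><code># Illustrative sketch; the definition name and scope are placeholders.
az policy definition create \
  --name deny-non-canadian-locations \
  --display-name "Deny resources outside Canadian regions" \
  --rules '{
    "if": {
      "not": {
        "field": "location",
        "in": ["canadacentral", "canadaeast"]
      }
    },
    "then": { "effect": "deny" }
  }'

# Assign it at subscription scope so the region lock is enforced, not just documented.
az policy assignment create \
  --name deny-non-canadian-locations \
  --policy deny-non-canadian-locations
</code></pre>
<p>A deny effect at creation time gives an auditor something concrete: the control exists, it is assigned, and its effect shows up in Azure Policy compliance reporting alongside the contractual commitments.</p>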
<p id="ember314" class="ember-view reader-text-block__paragraph"><strong>Treat model risk management as a first-class governance discipline.</strong> This one is specifically for financial services today, but it&#8217;s heading everywhere. OSFI E-23 takes full effect May 1, 2027, and it covers AI/ML explicitly. Model identification, risk assessment, deployment controls, performance monitoring, bias testing, explainability, and formal decommissioning are all in scope. Build these into your AI platform from day one. Retrofitting governance to satisfy an OSFI audit after the fact will be far more expensive and disruptive than doing it right up front.</p>
<p id="ember315" class="ember-view reader-text-block__paragraph">The underlying principle: sovereign AI is not a product you buy. It&#8217;s an architecture you design. The decisions that matter most are the unglamorous ones: region locks, data classification, policy enforcement, model governance.</p>
<h3>The CLOUD Act: The Elephant in the Room</h3>
<p id="ember317" class="ember-view reader-text-block__paragraph">I&#8217;d be doing a disservice if I didn&#8217;t address this directly, because every informed client raises it and too many advisors dodge it.</p>
<p id="ember318" class="ember-view reader-text-block__paragraph">The U.S. CLOUD Act gives American authorities the legal right to compel U.S.-headquartered companies to produce data in their custody, regardless of where that data is physically stored. Microsoft, AWS, Google, and Anthropic are all U.S.-headquartered. Your data can be sitting in a Toronto data centre, running on Azure, with a Canadian data residency contract, and the U.S. government can still, in principle, issue a lawful demand for it.</p>
<p id="ember319" class="ember-view reader-text-block__paragraph">Microsoft has responded with a five-point Digital Sovereignty Plan: a Threat Intelligence Hub in Ottawa, expanded confidential computing capabilities, enhanced data residency commitments, sovereign landing zone architectures, and contractual protections designed to keep Canadian data under Canadian legal authority. These are meaningful steps. AWS and Google have made similar commitments around data sovereignty, though with different implementation approaches.</p>
<p id="ember320" class="ember-view reader-text-block__paragraph">Here&#8217;s what honest advisors tell their clients: the combination of Canadian data residency, encryption at rest and in transit, customer-managed keys, access controls, and contractual commitments significantly reduces the practical risk. For the vast majority of regulated workloads, this combination meets the bar that Canadian regulators are setting. For a narrow set of the highest-sensitivity workloads (think national security, certain government classified environments), it may not, and those organizations may need sovereign cloud options or on-premises deployments.</p>
<p id="ember321" class="ember-view reader-text-block__paragraph">The pragmatic position: don&#8217;t pretend the CLOUD Act doesn&#8217;t exist. Don&#8217;t pretend it blocks everything, either. Understand where your data sits on the sensitivity spectrum and architect to the actual risk, not the headline.</p>
<h3>What the Next 12 Months Look Like</h3>
<p id="ember323" class="ember-view reader-text-block__paragraph">The investment wave is real. Microsoft&#8217;s $7.5 billion. The federal government&#8217;s $925 million. The AI Compute Challenge calling for 100+ megawatt sovereign data centres. The University of Toronto receiving $42.5 million for AI compute infrastructure. This isn&#8217;t speculative. It&#8217;s funded and in motion.</p>
<p id="ember324" class="ember-view reader-text-block__paragraph">But there are things the money doesn&#8217;t solve yet. Canada still has no comprehensive AI legislation. Bill C-27 and AIDA died when Parliament dissolved in 2025, and while a successor is expected, the timing is uncertain. In the meantime, sector regulators (OSFI, Health Canada, Transport Canada) are filling the gap with guidance that has practical regulatory weight even without a new law.</p>
<p id="ember325" class="ember-view reader-text-block__paragraph">The AI platform landscape is also converging on Canada. Anthropic is expanding its regional data residency options (EU is live, other regions are expected to follow). AWS is broadening Bedrock&#8217;s in-region capabilities. Microsoft is pouring infrastructure dollars into Canadian capacity that comes online in H2 2026. Cohere, a Canadian-founded AI company, has models available on Azure, adding a sovereign-origin option to the platform.</p>
<p id="ember326" class="ember-view reader-text-block__paragraph">What this means for executives making decisions today: the infrastructure is coming, the regulatory framework is converging, and the service gaps are closing. But you should not architect for next year&#8217;s availability. Build on what&#8217;s certified and available now. Design your governance framework so it can absorb new services and new regulations as they arrive. The organizations moving now, with proper controls, will have 12 to 18 months of operational learning and institutional knowledge that their competitors won&#8217;t.</p>
<p id="ember327" class="ember-view reader-text-block__paragraph">Waiting for perfect regulatory clarity before deploying AI in a regulated Canadian environment means waiting indefinitely. The organizations that will lead didn&#8217;t wait for perfection. They moved with governance.</p>
<h3>Reframing the Question</h3>
<p id="ember329" class="ember-view reader-text-block__paragraph">The question I started with (&#8220;How do I run AI workloads when my regulator says the data can&#8217;t leave the country?&#8221;) is actually the wrong question, or at least an incomplete one.</p>
<p id="ember330" class="ember-view reader-text-block__paragraph">The better question is: <strong>How do I architect AI workloads that satisfy my regulator, my board, and my users, simultaneously?</strong></p>
<p id="ember331" class="ember-view reader-text-block__paragraph">The answer isn&#8217;t a single product or a single certification. It&#8217;s an architecture designed with intention: the right region, the right data classifications, the right technical controls, the right governance model, and an honest assessment of what&#8217;s certified today versus what&#8217;s on someone&#8217;s roadmap.</p>
<p id="ember332" class="ember-view reader-text-block__paragraph">The infrastructure investment is happening. The regulatory frameworks are converging. The AI services are landing in Canadian regions. The window for competitive advantage is open right now, for the organizations willing to build thoughtfully rather than wait for someone else to make it easy.</p>
<p id="ember333" class="ember-view reader-text-block__paragraph">I&#8217;d like to hear what you&#8217;re seeing in your own environments. If you&#8217;re navigating sovereign AI in a regulated Canadian industry, what&#8217;s working? Where are you stuck? The more practitioners share what they&#8217;re learning, the faster this market matures for everyone.</p>
<p>The post <a href="https://www.centrilogic.com/how-to-run-ai-workloads-regulated-environments/">How Do I Run AI Workloads When My Regulator Says the Data Can&#8217;t Leave the Country?</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Centrilogic Achieves Agentic DevOps with Microsoft Azure and GitHub Specialization</title>
		<link>https://www.centrilogic.com/centrilogic-achieves-agentic-devops-with-azure-and-github-specialization/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 13:54:34 +0000</pubDate>
				<category><![CDATA[General]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1953</guid>

					<description><![CDATA[<p>Centrilogic has earned the Agentic DevOps with Microsoft Azure and GitHub Specialization. This designation validates Centrilogic’s ability to help mid-market and enterprise organizations establish secure, modern software development practices by applying DevOps principles and using GitHub and Azure solutions.</p>
<p>The post <a href="https://www.centrilogic.com/centrilogic-achieves-agentic-devops-with-azure-and-github-specialization/">Centrilogic Achieves Agentic DevOps with Microsoft Azure and GitHub Specialization</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h4 style="text-align: center;"><strong>Centrilogic Achieves Agentic DevOps with Microsoft Azure and GitHub Specialization</strong></h4>
<p><img decoding="async" class="alignleft wp-image-1956" src="https://www.centrilogic.com/wp-content/uploads/2026/03/11SPEC1-1.png" alt="Centrilogic Agentic DevOps with Microsoft Azure &amp; GitHub" width="255" height="256" srcset="https://www.centrilogic.com/wp-content/uploads/2026/03/11SPEC1-1.png 1043w, https://www.centrilogic.com/wp-content/uploads/2026/03/11SPEC1-1-300x300.png 300w, https://www.centrilogic.com/wp-content/uploads/2026/03/11SPEC1-1-1019x1024.png 1019w, https://www.centrilogic.com/wp-content/uploads/2026/03/11SPEC1-1-150x150.png 150w, https://www.centrilogic.com/wp-content/uploads/2026/03/11SPEC1-1-768x772.png 768w, https://www.centrilogic.com/wp-content/uploads/2026/03/11SPEC1-1-750x750.png 750w" sizes="(max-width: 255px) 100vw, 255px" /></p>
<p><strong>TORONTO, ON, CANADA – </strong><b><span data-contrast="auto">March 31, 2026 – </span></b>Centrilogic, a global provider of IT transformation solutions, today announced it has earned the <em>Agentic DevOps with Microsoft Azure and GitHub Specialization</em>, validating the company’s ability to help mid-market and enterprise organizations establish secure, modern software development practices by applying DevOps principles and using GitHub and Azure solutions.</p>
<p>This specialization recognizes Microsoft partners that demonstrate deep expertise in delivering end‑to‑end agentic DevOps capabilities, spanning initial assessment and solution design through pilot, implementation, and post‑implementation support. Achieving this designation confirms that Centrilogic has adopted robust, repeatable processes that support customer success across all phases of agentic DevOps.</p>
<p>Earning this specialization places Centrilogic among a select group of Microsoft partners that meet Microsoft’s stringent technical, performance, and audit requirements. It reflects proven customer success, strong technical capabilities, and mature delivery practices aligned with Microsoft’s Azure DevOps and GitHub solutions.</p>
<blockquote><p><em>“Achieving the Agentic DevOps with Microsoft Azure and GitHub Specialization reflects our continued investment in modern engineering practices and our commitment to helping customers build, deploy, and operate software securely and at scale,” </em>said Doug Tracy, CEO of Centrilogic<em>. “As organizations look to adopt agentic and AI‑driven development models, DevOps discipline becomes even more critical. This recognition reinforces our ability to guide customers through that journey with confidence.”</em></p></blockquote>
<p>Centrilogic and Microsoft have built a trusted partnership spanning more than 15 years, working together to help mid‑market and enterprise organizations modernize applications, improve developer productivity, and accelerate innovation across the Microsoft cloud ecosystem. Backed by a team of senior Microsoft‑certified experts, Centrilogic delivers comprehensive solutions across application modernization, cloud engineering, DevOps, data &amp; AI, security, and managed services.</p>
<p><strong>About Centrilogic:</strong></p>
<p>Centrilogic is a global provider of IT transformation solutions that empower organizations to realize their full digital potential. Armed with capabilities that span the stack – including multi-cloud management, application innovation, data &amp; AI, and IT advisory – Centrilogic delivers resilient end-to-end digital solutions that help companies reshape the role of their technology platforms as business-driving assets. With regional headquarters in Canada, USA, and India, Centrilogic delivers solutions to innovative companies worldwide. For more information, visit www.centrilogic.com.</p>
<p>&nbsp;</p>
<p>The post <a href="https://www.centrilogic.com/centrilogic-achieves-agentic-devops-with-azure-and-github-specialization/">Centrilogic Achieves Agentic DevOps with Microsoft Azure and GitHub Specialization</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Agentic AI Is Coming to the Enterprise &#8211; How to Design and Build Trustworthy Agentic AI</title>
		<link>https://www.centrilogic.com/how-to-design-and-build-trustworthy-agentic-ai/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Thu, 26 Mar 2026 18:16:49 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1948</guid>

					<description><![CDATA[<p>This is the third article in a four‑part series exploring operations and security in the agentic AI landscape. It focuses on why security must be treated as a foundational design principle for agentic systems and how to implement it effectively from the start.</p>
<p>The post <a href="https://www.centrilogic.com/how-to-design-and-build-trustworthy-agentic-ai/">Agentic AI Is Coming to the Enterprise &#8211; How to Design and Build Trustworthy Agentic AI</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This article was written by <a href="https://www.linkedin.com/in/waltlsmith/" target="_blank" rel="noopener">Walter Smith</a>, our VP Application Development Practice. It was first published on <a href="https://www.linkedin.com/pulse/agentic-ai-coming-enterprise-walt-smith-eezre/?trackingId=aURED5%2Bl6iRZaSoKYJEWyg%3D%3D" target="_blank" rel="noopener">LinkedIn</a>.</p>
<p><span data-teams="true">This is the third post in a four-part series. To read part 1, titled </span><span data-scaffold-immersive-reader-title="">Agentic AI Is Coming to the Enterprise — Are You Ready?, visit <a href="https://www.centrilogic.com/agentic-ai-enterprise-readiness/" target="_blank" rel="noopener">https://www.centrilogic.com/agentic-ai-enterprise-readiness/</a></span></p>
<p><span data-teams="true">To read part 2, titled </span><span data-scaffold-immersive-reader-title="">Agentic AI is Coming to the Enterprise: Part 2 &#8211; Threat Assessment</span><span data-teams="true">, visit <a href="https://www.centrilogic.com/agentic-ai-enterprise-risks/">https://www.centrilogic.com/agentic-ai-enterprise-risks/</a>.</span></p>
<p id="ember53" class="ember-view reader-text-block__paragraph"><em>This is the third in a four-part series about operations and security in the agentic AI world. It is intended for informational purposes only. Please engage an AI security professional before implementing agentic AI.</em></p>
<p id="ember54" class="ember-view reader-text-block__paragraph">In the first two parts of this series, we explored why agentic AI represents a fundamentally new category of enterprise risk, and what the threat landscape actually looks like for leaders who are preparing to deploy autonomous systems. Now it&#8217;s time to get practical.</p>
<p id="ember55" class="ember-view reader-text-block__paragraph">The good news: trustworthy agentic AI is absolutely achievable. The catch: you can&#8217;t bolt security on after the fact. With agentic systems, safety must be designed in from the very beginning — before the first line of code is written.</p>
<p id="ember56" class="ember-view reader-text-block__paragraph">Here is a practical blueprint for doing exactly that.</p>
<h3 id="ember57" class="ember-view reader-text-block__paragraph"><strong>1. Start With &#8220;Security by Design&#8221;</strong></h3>
<p id="ember58" class="ember-view reader-text-block__paragraph">Most technology initiatives begin with capability. Agentic AI requires flipping that script. Security isn&#8217;t a feature you add at the end. It&#8217;s the foundation everything else is built on. Three principles should guide every agentic architecture from day one:</p>
<ul>
<li><strong>Defense-in-depth.</strong> Layer your controls so that a single failure doesn&#8217;t compromise the entire system.</li>
<li><strong>Least privilege.</strong> Give agents the minimum permissions needed to do their job — nothing more.</li>
<li><strong>Zero trust.</strong> Validate every action an agent takes, even within your own environment. Assume nothing is inherently safe.</li>
</ul>
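<p>The least-privilege and zero-trust principles can be made concrete as an explicit allow-list check that every agent action must pass before it executes. The Python sketch below is illustrative only; the <code>AgentPolicy</code> class and the action names are hypothetical, not part of any particular framework.</p>

```python
# Minimal sketch of least-privilege action validation for an agent.
# AgentPolicy and the action names are hypothetical illustrations.

class PermissionDenied(Exception):
    pass

class AgentPolicy:
    def __init__(self, allowed_actions):
        # Least privilege: the agent starts from an explicit allow-list,
        # not a default-allow posture.
        self.allowed_actions = frozenset(allowed_actions)

    def authorize(self, action, resource):
        # Zero trust: every action is validated, even inside our own
        # environment; nothing is assumed safe.
        if action not in self.allowed_actions:
            raise PermissionDenied(f"{action} on {resource} is not permitted")
        return True

policy = AgentPolicy(allowed_actions={"read_record"})
policy.authorize("read_record", "crm/contact/42")      # permitted
try:
    policy.authorize("send_email", "crm/contact/42")   # denied: not on the allow-list
except PermissionDenied:
    pass
```

<p>The point of the sketch is the posture: anything not explicitly granted is refused, so a single misconfigured capability never silently widens the agent's reach.</p>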
<p id="ember60" class="ember-view reader-text-block__paragraph">These aren&#8217;t abstract ideals. They are practical design decisions that prevent oversights from becoming liabilities.</p>
<h3 id="ember61" class="ember-view reader-text-block__paragraph"><strong>2. Threat Model the Agent in Business Language</strong></h3>
<p id="ember62" class="ember-view reader-text-block__paragraph">Threat modeling sounds like an IT exercise. It doesn&#8217;t have to be. Business leaders can and should participate in this process. The questions are straightforward:</p>
<ul>
<li>What could go wrong with this agent?</li>
<li>Who could misuse it — internally or externally?</li>
<li>What is the worst-case scenario for this workflow?</li>
<li>What data might be exposed, altered, or corrupted?</li>
</ul>
<p id="ember64" class="ember-view reader-text-block__paragraph">Once you&#8217;ve answered those questions, translate the risks into concrete design requirements:</p>
<ul>
<li>Should certain actions require human approval before the agent proceeds?</li>
<li>Should the agent be sandboxed from sensitive systems?</li>
<li>What should never happen under any circumstances?</li>
<li>What must be logged for auditing and accountability?</li>
</ul>
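<p>Those design requirements can be captured as a small, machine-readable guardrail policy that engineering enforces at runtime. A minimal Python sketch follows; the field names and actions are hypothetical, not a standard schema.</p>

```python
# Hypothetical guardrail policy derived from a threat-modeling session.
# Field and action names are illustrative, not a standard schema.

GUARDRAIL_POLICY = {
    "requires_human_approval": {"issue_refund", "change_credit_limit"},
    "forbidden_actions": {"delete_customer_record", "export_full_database"},
    "sandboxed_systems": {"payroll", "hr"},
    "audit_log_fields": ["timestamp", "agent_id", "action", "rationale"],
}

def check_action(action, policy=GUARDRAIL_POLICY):
    """Classify an action as 'forbidden', 'needs_approval', or 'allowed'."""
    if action in policy["forbidden_actions"]:
        return "forbidden"          # must never happen, under any circumstances
    if action in policy["requires_human_approval"]:
        return "needs_approval"     # a human signs off before the agent proceeds
    return "allowed"

print(check_action("issue_refund"))            # needs_approval
print(check_action("delete_customer_record"))  # forbidden
```

<p>Because the policy is data rather than prose, legal, risk, and engineering can all review and version the same artifact.</p>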
<p id="ember66" class="ember-view reader-text-block__paragraph">This conversation produces a practical security roadmap for the engineering team and keeps business intent at the center of the design.</p>
<h3 id="ember67" class="ember-view reader-text-block__paragraph"><strong>3. Define Agency vs. Autonomy</strong></h3>
<p id="ember68" class="ember-view reader-text-block__paragraph">This is one of the most powerful mental models for enterprise AI adoption, and one of the least understood.</p>
<p id="ember69" class="ember-view reader-text-block__paragraph"><strong>Agency</strong> is what the agent is <em>allowed to do</em> — reading data, writing records, sending communications, triggering workflows.</p>
<p id="ember70" class="ember-view reader-text-block__paragraph"><strong>Autonomy</strong> is <em>when</em> it&#8217;s allowed to act without human oversight — never, sometimes, always, or only within specific conditions.</p>
<p id="ember71" class="ember-view reader-text-block__paragraph">These two dimensions can be mixed and matched deliberately:</p>
<ul>
<li>High agency, low autonomy — the agent can do a lot, but a human approves every significant action.</li>
<li>Low agency, high autonomy — the agent acts freely, but within a very narrow scope.</li>
<li>High agency, high autonomy — powerful, but rare and high-risk. Reserved for mature, well-monitored systems.</li>
<li>Low agency, low autonomy — common in pilots. Safe starting point for most organizations.</li>
</ul>
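<p>Stating the envelope explicitly in code or configuration keeps the two dimensions independent and reviewable. The Python sketch below is a hypothetical illustration, not a standard taxonomy.</p>

```python
# Hypothetical sketch: agency and autonomy as independent dimensions.
from enum import Enum

class Agency(Enum):       # what the agent is allowed to do
    LOW = "low"
    HIGH = "high"

class Autonomy(Enum):     # when it may act without human oversight
    LOW = "low"           # a human approves every significant action
    HIGH = "high"         # acts freely within its defined scope

def risk_posture(agency, autonomy):
    """Map the two independent dimensions to a rough risk posture."""
    if agency is Agency.HIGH and autonomy is Autonomy.HIGH:
        return "high-risk: reserve for mature, well-monitored systems"
    if agency is Agency.LOW and autonomy is Autonomy.LOW:
        return "safe starting point: common in pilots"
    return "deliberate trade-off: document the operational envelope"
```

<p>A pilot would typically start at <code>risk_posture(Agency.LOW, Autonomy.LOW)</code> and widen one dimension at a time as monitoring matures.</p>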
<p id="ember73" class="ember-view reader-text-block__paragraph">Defining this operational envelope gives leaders precise control over what an agent can do and when. It also makes the conversation between business and technical teams dramatically more productive.</p>
<h3 id="ember74" class="ember-view reader-text-block__paragraph"><strong>4. Build in Observability and Control</strong></h3>
<p id="ember75" class="ember-view reader-text-block__paragraph">Agentic AI should never be a black box. Your architecture needs to answer these questions at any point in time:</p>
<ul>
<li>Why did the agent take that action?</li>
<li>What sequence of decisions led to this outcome?</li>
<li>Who or what authorized it?</li>
</ul>
<p id="ember77" class="ember-view reader-text-block__paragraph">That means building in:</p>
<p id="ember78" class="ember-view reader-text-block__paragraph"><strong>Rationale logging</strong> — a record of why the agent acted, not just what it did.</p>
<p id="ember79" class="ember-view reader-text-block__paragraph"><strong>Traceability</strong> — a full audit trail of the decision chain.</p>
<p id="ember80" class="ember-view reader-text-block__paragraph"><strong>Circuit breakers</strong> — automated mechanisms that halt agent activity when behavior falls outside acceptable boundaries.</p>
<p id="ember81" class="ember-view reader-text-block__paragraph"><strong>Human review workflows</strong> — escalation paths for actions with significant business, financial, or regulatory impact.</p>
<p id="ember82" class="ember-view reader-text-block__paragraph">These aren&#8217;t overhead. They are the mechanisms that make agentic AI trustworthy to executives, auditors, regulators, and the customers you serve.</p>
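<p>A circuit breaker, for instance, can be as simple as a counter on out-of-bounds behavior that halts the agent once a threshold is crossed. The Python sketch below assumes a plain failure-count trigger; production systems would also weigh time windows and severity.</p>

```python
class CircuitBreaker:
    """Halt agent activity after repeated out-of-bounds behavior.

    Simplified sketch: real implementations typically add time windows,
    severity weighting, and manual reset workflows.
    """
    def __init__(self, max_violations=3):
        self.max_violations = max_violations
        self.violations = 0
        self.tripped = False

    def record(self, within_bounds):
        if within_bounds:
            return
        self.violations += 1
        if self.violations >= self.max_violations:
            self.tripped = True   # all further agent actions are blocked

    def allow_action(self):
        return not self.tripped

breaker = CircuitBreaker(max_violations=2)
breaker.record(within_bounds=False)
breaker.record(within_bounds=False)
print(breaker.allow_action())   # False: escalate to human review
```

<p>Tripping the breaker is where the human review workflow takes over: the agent stops, and an escalation path decides whether and how it resumes.</p>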
<h3 id="ember83" class="ember-view reader-text-block__paragraph"><strong>5. Align With Industry Frameworks</strong></h3>
<p id="ember84" class="ember-view reader-text-block__paragraph">There are several established frameworks that support responsible agentic AI design. Use them as tools, not textbooks:</p>
<p id="ember85" class="ember-view reader-text-block__paragraph"><strong>NIST AI RMF</strong> &#8211; excellent for governance structure and risk context mapping.</p>
<p id="ember86" class="ember-view reader-text-block__paragraph"><strong>ISO/IEC 42001</strong> &#8211; provides an organizational framework for AI management systems.</p>
<p id="ember87" class="ember-view reader-text-block__paragraph"><strong>CSA MAESTRO </strong>&#8211; purpose-built for threat modeling AI and multi-agent systems.</p>
<p id="ember88" class="ember-view reader-text-block__paragraph"><strong>OWASP LLM Guidelines</strong> &#8211; critical for addressing prompt injection, output handling, and integration security.</p>
<p id="ember89" class="ember-view reader-text-block__paragraph">Together, these frameworks give your organization a holistic, defensible approach to AI threat mitigation that will hold up to internal and external audit and legal scrutiny.</p>
<h3 id="ember90" class="ember-view reader-text-block__paragraph"><strong>6. Produce Deliverables Your Organization Can Actually Use</strong></h3>
<p id="ember91" class="ember-view reader-text-block__paragraph">By the end of the design phase, your team should be able to hand the following to engineering, operations, legal, and risk:</p>
<ul>
<li>System architecture diagrams with security components clearly identified</li>
<li>A threat model with specific, actionable mitigations</li>
<li>Defined agency and autonomy levels for each agent</li>
<li>An accountability model that answers &#8220;who owns this agent&#8217;s actions?&#8221;</li>
<li>Logging and audit requirements</li>
<li>Guardrail policies and decision thresholds</li>
<li>Clear boundaries on what the agent is allowed and not allowed to do</li>
</ul>
<p id="ember93" class="ember-view reader-text-block__paragraph">This isn&#8217;t documentation for its own sake. It&#8217;s the difference between an agentic system that scales confidently and one that generates a crisis the moment something unexpected happens.</p>
<h3 id="ember94" class="ember-view reader-text-block__paragraph"><strong>The Takeaway</strong></h3>
<p id="ember95" class="ember-view reader-text-block__paragraph">Enterprises that design agentic AI correctly will be positioned to adopt the technology safely, quickly, and with a meaningful competitive advantage. Those who rush in without a blueprint risk failures that are operational, financial, and reputational — and increasingly public.</p>
<p id="ember96" class="ember-view reader-text-block__paragraph">The design phase is where trust is either built or forfeited. It is worth the investment.</p>
<p id="ember97" class="ember-view reader-text-block__paragraph">In the final article of this series, we will explore how to operate, monitor, and govern agentic AI in real time with practices that keep autonomous systems aligned, auditable, and effective long after they go live.</p>
<p id="ember98" class="ember-view reader-text-block__paragraph">Remember, the easy part of agentic AI is building it. The hard part is making it a trustworthy tool for your business.</p>
<p>The post <a href="https://www.centrilogic.com/how-to-design-and-build-trustworthy-agentic-ai/">Agentic AI Is Coming to the Enterprise &#8211; How to Design and Build Trustworthy Agentic AI</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>New Features in Azure DevOps &#8211; March 2026</title>
		<link>https://www.centrilogic.com/new-features-azure-devops-march-2026/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 14:57:25 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1943</guid>

					<description><![CDATA[<p>In this article, Dave Lloyd presents the most recent features and enhancements in Azure DevOps - March 2026.</p>
<p>The post <a href="https://www.centrilogic.com/new-features-azure-devops-march-2026/">New Features in Azure DevOps &#8211; March 2026</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This article was written by <a href="https://www.linkedin.com/in/daveklloyd/" target="_blank" rel="noopener">Dave Lloyd</a>. It was first published on <a href="https://medium.com/objectsharp/azure-devops-love-f7a37869bacd" target="_blank" rel="noopener">Medium</a>.</p>
<h2 id="c03a" class="pw-post-title ig ih ii bb ij ik il im in io ip iq ir is it iu iv iw ix iy iz ja jb jc jd je jf jg jh ji bg" data-testid="storyTitle">Azure DevOps Love</h2>
<p>New features and enhancements are coming to Azure DevOps all the time, and it makes me so happy.</p>
<div data-scaffold-immersive-reader="">
<article>
<div class="relative reader__grid">
<div data-scaffold-immersive-reader-content="">
<div class="reader-article-content reader-article-content--content-blocks" dir="ltr">
<div class="reader-content-blocks-container">
<p id="8fc7" class="pw-post-body-paragraph nq nr ii ns b nt nu nv nw nx ny nz oa gj ob oc od gm oe of og gp oh oi oj ok hj bg" data-selectable-paragraph="">Here are a few that just dropped this month. They are so new you may not see them in your instance of Azure DevOps yet, as they roll out gradually. In fact, some of the images below are taken from the release notes because the feature has not yet appeared in my Azure DevOps. 🙂</p>
<h3 id="f0db" class="ol om ii bb on gd oo ge gf gg op gh gi gj oq gk gl gm or gn go gp os gq gr ot bg" data-selectable-paragraph="">Azure Boards — Condensed card display.</h3>
<p id="bd58" class="pw-post-body-paragraph nq nr ii ns b nt ou nv nw nx ov nz oa gj ow oc od gm ox of og gp oy oi oj ok hj bg" data-selectable-paragraph="">I love this new feature. Adding information to the cards on your Kanban board can be very useful. However, it’s easy to get carried away, and the cards end up very big.</p>
<p id="3b81" class="pw-post-body-paragraph nq nr ii ns b nt nu nv nw nx ny nz oa gj ob oc od gm oe of og gp oh oi oj ok hj bg" data-selectable-paragraph="">A new toggle switch has been added called <strong class="ns ij">Collapse card fields</strong>. It is highlighted in the image below, shown turned off.</p>
<figure class="pc pd pe pf pg ph oz pa paragraph-image">
<div class="pi pj ej pk bd pl" tabindex="0" role="button">
<div class="oz pa pb"><picture><source srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 1400w" type="image/webp" sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" /><source srcset="https://miro.medium.com/v2/resize:fit:640/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 640w, https://miro.medium.com/v2/resize:fit:720/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 720w, https://miro.medium.com/v2/resize:fit:750/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 750w, https://miro.medium.com/v2/resize:fit:786/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 786w, https://miro.medium.com/v2/resize:fit:828/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 828w, https://miro.medium.com/v2/resize:fit:1100/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 1100w, https://miro.medium.com/v2/resize:fit:1400/1*tI85kMa-4Z8YSZ4_WEKM3Q.png 1400w" sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, 
(min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" data-testid="og" /><img decoding="async" class="bd fr pm c" role="presentation" src="https://miro.medium.com/v2/resize:fit:700/1*tI85kMa-4Z8YSZ4_WEKM3Q.png" alt="" width="808" height="330" /></picture></div>
</div><figcaption class="pn po pp oz pa pq pr bb b bc u eb" data-selectable-paragraph="">Collapse card fields off</figcaption></figure>
<p id="7766" class="pw-post-body-paragraph nq nr ii ns b nt nu nv nw nx ny nz oa gj ob oc od gm oe of og gp oh oi oj ok hj bg" data-selectable-paragraph="">Click that toggle and all the extra info is hidden; you just see the work item with its ID and description.</p>
<figure class="pc pd pe pf pg ph oz pa paragraph-image">
<div class="pi pj ej pk bd pl" tabindex="0" role="button">
<div class="oz pa ps"><picture><source srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*uCSQHTmdNxjbLEyseIthhQ.png 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*uCSQHTmdNxjbLEyseIthhQ.png 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*uCSQHTmdNxjbLEyseIthhQ.png 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*uCSQHTmdNxjbLEyseIthhQ.png 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*uCSQHTmdNxjbLEyseIthhQ.png 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*uCSQHTmdNxjbLEyseIthhQ.png 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*uCSQHTmdNxjbLEyseIthhQ.png 1400w" type="image/webp" sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" /><source srcset="https://miro.medium.com/v2/resize:fit:640/1*uCSQHTmdNxjbLEyseIthhQ.png 640w, https://miro.medium.com/v2/resize:fit:720/1*uCSQHTmdNxjbLEyseIthhQ.png 720w, https://miro.medium.com/v2/resize:fit:750/1*uCSQHTmdNxjbLEyseIthhQ.png 750w, https://miro.medium.com/v2/resize:fit:786/1*uCSQHTmdNxjbLEyseIthhQ.png 786w, https://miro.medium.com/v2/resize:fit:828/1*uCSQHTmdNxjbLEyseIthhQ.png 828w, https://miro.medium.com/v2/resize:fit:1100/1*uCSQHTmdNxjbLEyseIthhQ.png 1100w, https://miro.medium.com/v2/resize:fit:1400/1*uCSQHTmdNxjbLEyseIthhQ.png 1400w" sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, 
(min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" data-testid="og" /><img loading="lazy" decoding="async" class="bd fr pm c" role="presentation" src="https://miro.medium.com/v2/resize:fit:700/1*uCSQHTmdNxjbLEyseIthhQ.png" alt="" width="813" height="325" /></picture></div>
</div><figcaption class="pn po pp oz pa pq pr bb b bc u eb" data-selectable-paragraph="">Collapse card fields on</figcaption></figure>
<h3 id="295f" class="ol om ii bb on gd oo ge gf gg op gh gi gj oq gk gl gm or gn go gp os gq gr ot bg" data-selectable-paragraph="">Azure Repos — Auto-complete pull requests by default</h3>
<p id="8ba6" class="pw-post-body-paragraph nq nr ii ns b nt ou nv nw nx ov nz oa gj ow oc od gm ox of og gp oy oi oj ok hj bg" data-selectable-paragraph="">I have always loved the auto-complete feature in Azure Repos. For those not familiar, you can create a PR and set it to auto-complete, which means that after all the checks have passed and it has been approved by reviewers, the PR will be automatically completed and merged. There’s no need for you to come back later and merge the PR.</p>
<p id="1ee5" class="pw-post-body-paragraph nq nr ii ns b nt nv nw nx nz oa gj oc od gm of og gp oi oj pt ok hj bg" data-selectable-paragraph="">Now you can make this the default setting, either for the whole project or for each repository individually.</p>
<figure class="pc pd pe pf pg ph oz pa paragraph-image">
<div class="pi pj ej pk bd pl" tabindex="0" role="button">
<div class="oz pa pu"><picture><source srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/0*oVjgyccQM97XAcfB.png 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/0*oVjgyccQM97XAcfB.png 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/0*oVjgyccQM97XAcfB.png 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/0*oVjgyccQM97XAcfB.png 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/0*oVjgyccQM97XAcfB.png 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/0*oVjgyccQM97XAcfB.png 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/0*oVjgyccQM97XAcfB.png 1400w" type="image/webp" sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" /><source srcset="https://miro.medium.com/v2/resize:fit:640/0*oVjgyccQM97XAcfB.png 640w, https://miro.medium.com/v2/resize:fit:720/0*oVjgyccQM97XAcfB.png 720w, https://miro.medium.com/v2/resize:fit:750/0*oVjgyccQM97XAcfB.png 750w, https://miro.medium.com/v2/resize:fit:786/0*oVjgyccQM97XAcfB.png 786w, https://miro.medium.com/v2/resize:fit:828/0*oVjgyccQM97XAcfB.png 828w, https://miro.medium.com/v2/resize:fit:1100/0*oVjgyccQM97XAcfB.png 1100w, https://miro.medium.com/v2/resize:fit:1400/0*oVjgyccQM97XAcfB.png 1400w" sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) 
and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" data-testid="og" /><img loading="lazy" decoding="async" class="bd fr pm c" role="presentation" src="https://miro.medium.com/v2/resize:fit:700/0*oVjgyccQM97XAcfB.png" alt="" width="853" height="503" /></picture></div>
</div>
</figure>
<p id="ea68" class="pw-post-body-paragraph nq nr ii ns b nt nu nv nw nx ny nz oa gj ob oc od gm oe of og gp oh oi oj ok hj bg" data-selectable-paragraph="">When this is enabled, every new PR will automatically have Set auto-complete turned on. When disabled, new PRs will start with Set auto-complete turned off, and authors can choose to enable it manually.</p>
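<p>Auto-complete can also be set programmatically on an individual PR. The Python sketch below builds the request body for the Azure DevOps pull-request update REST operation (the <code>autoCompleteSetBy</code> and <code>completionOptions</code> fields); verify the exact field names and <code>api-version</code> against Microsoft’s current REST API documentation before relying on them.</p>

```python
import json

def auto_complete_payload(autocomplete_user_id, delete_source_branch=True):
    """Build the PATCH body that sets auto-complete on a pull request.

    Field names follow the Azure DevOps pull-request update REST
    operation; confirm them against Microsoft's current documentation.
    """
    return {
        "autoCompleteSetBy": {"id": autocomplete_user_id},
        "completionOptions": {
            "deleteSourceBranch": delete_source_branch,
            "mergeStrategy": "squash",
        },
    }

# The body would be sent as, roughly:
# PATCH https://dev.azure.com/{org}/{project}/_apis/git/repositories/
#       {repositoryId}/pullrequests/{pullRequestId}?api-version=7.1
body = json.dumps(auto_complete_payload("00000000-0000-0000-0000-000000000000"))
```

<p>This is handy for scripting the old per-PR behavior, but with the new setting above, new PRs simply start with auto-complete already on.</p>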
<h3 id="4500" class="ol om ii bb on gd oo ge gf gg op gh gi gj oq gk gl gm or gn go gp os gq gr ot bg" data-selectable-paragraph="">Azure Test Plans — Exploratory Testing</h3>
<p id="860c" class="pw-post-body-paragraph nq nr ii ns b nt ou nv nw nx ov nz oa gj ow oc od gm ox of og gp oy oi oj ok hj bg" data-selectable-paragraph="">As you may have read in past posts, I love Azure Test Plans. I also love the <a class="z pv" href="https://medium.com/objectsharp/exploratory-testing-6f247adb62c0" target="_blank" rel="noopener" data-discover="true">Exploratory Testing</a> tool. Microsoft recently updated the test run user interface, which was previously terrible and ignored for too long. It’s much better now; perhaps a post about that is in order. The exploratory test run experience was also terrible, as it was just tucked away in with the regular test runs.</p>
<p id="2983" class="pw-post-body-paragraph nq nr ii ns b nt nu nv nw nx ny nz oa gj ob oc od gm oe of og gp oh oi oj ok hj bg" data-selectable-paragraph="">As of this month, it has its own place of honour in the main navigation bar under Test Plans.</p>
<figure class="pc pd pe pf pg ph oz pa paragraph-image">
<div class="pi pj ej pk bd pl" tabindex="0" role="button">
<div class="oz pa pw"><picture><source srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*nip6bWlSIjzwusJiSiF6qg.png 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*nip6bWlSIjzwusJiSiF6qg.png 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*nip6bWlSIjzwusJiSiF6qg.png 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*nip6bWlSIjzwusJiSiF6qg.png 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*nip6bWlSIjzwusJiSiF6qg.png 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*nip6bWlSIjzwusJiSiF6qg.png 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*nip6bWlSIjzwusJiSiF6qg.png 1400w" type="image/webp" sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" /><source srcset="https://miro.medium.com/v2/resize:fit:640/1*nip6bWlSIjzwusJiSiF6qg.png 640w, https://miro.medium.com/v2/resize:fit:720/1*nip6bWlSIjzwusJiSiF6qg.png 720w, https://miro.medium.com/v2/resize:fit:750/1*nip6bWlSIjzwusJiSiF6qg.png 750w, https://miro.medium.com/v2/resize:fit:786/1*nip6bWlSIjzwusJiSiF6qg.png 786w, https://miro.medium.com/v2/resize:fit:828/1*nip6bWlSIjzwusJiSiF6qg.png 828w, https://miro.medium.com/v2/resize:fit:1100/1*nip6bWlSIjzwusJiSiF6qg.png 1100w, https://miro.medium.com/v2/resize:fit:1400/1*nip6bWlSIjzwusJiSiF6qg.png 1400w" sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, 
(min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" data-testid="og" /><img loading="lazy" decoding="async" class="bd fr pm c" role="presentation" src="https://miro.medium.com/v2/resize:fit:700/1*nip6bWlSIjzwusJiSiF6qg.png" alt="" width="850" height="345" /></picture></div>
</div>
</figure>
</div>
</div>
</div>
</div>
</article>
</div>
<p>&nbsp;</p>
<p>The post <a href="https://www.centrilogic.com/new-features-azure-devops-march-2026/">New Features in Azure DevOps &#8211; March 2026</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Agentic AI is Coming to the Enterprise: Part 2 &#8211; Threat Assessment</title>
		<link>https://www.centrilogic.com/agentic-ai-enterprise-risks/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Mon, 23 Feb 2026 20:30:27 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1936</guid>

					<description><![CDATA[<p>As agentic AI systems begin to act autonomously inside the enterprise, familiar security risks are amplified. This article explores the key threats, accountability gaps, and governance challenges leaders must address.</p>
<p>The post <a href="https://www.centrilogic.com/agentic-ai-enterprise-risks/">Agentic AI is Coming to the Enterprise: Part 2 &#8211; Threat Assessment</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This article was written by <a href="https://www.linkedin.com/in/waltlsmith/" target="_blank" rel="noopener">Walter Smith</a>, our VP Application Development Practice. It was first published on <a href="https://www.linkedin.com/pulse/agentic-ai-coming-enterprise-walt-smith-obcme/" target="_blank" rel="noopener">LinkedIn</a>.</p>
<p>This is the second post in a four-part series. To read part 1, visit <a href="https://www.centrilogic.com/agentic-ai-enterprise-readiness">https://www.centrilogic.com/agentic-ai-enterprise-readiness</a>.</p>
<h1 class="reader-article-header__title" dir="ltr"><span data-scaffold-immersive-reader-title="">Agentic AI is Coming to the Enterprise: Part 2 &#8211; Threat Assessment</span></h1>
<p>&nbsp;</p>
<div data-scaffold-immersive-reader="">
<article>
<div class="relative reader__grid">
<div data-scaffold-immersive-reader-content="">
<div>
<div class="reader-article-content reader-article-content--content-blocks" dir="ltr">
<div class="reader-content-blocks-container">
<p id="ember73" class="ember-view reader-text-block__paragraph"><em>This is the second in a four-part series about operations and security in the agentic AI world. It is intended for informational purposes only. Please engage an AI security professional before implementing agentic AI.</em></p>
<p id="ember74" class="ember-view reader-text-block__paragraph">You&#8217;ve heard agentic AI has the potential to transform operations, accelerate decision‑making, and free teams to focus on higher‑value work. In most cases, the upside <em>is</em> very real — and very measurable. That&#8217;s exciting stuff.</p>
<p id="ember75" class="ember-view reader-text-block__paragraph">But before deploying systems that can reason, decide, and act autonomously, leaders need to balance that excitement with one additional question: What needs to be true for agents to work safely and sustainably in our company?</p>
<p id="ember76" class="ember-view reader-text-block__paragraph">Unlike its cousin generative AI, agentic AI doesn’t just provide passive insights. Agents go much further. They can execute financially significant actions, trigger mission-critical business workflows, and interact directly with core enterprise systems of record. That capability is what makes them powerful — and what makes thoughtful governance essential.</p>
<p id="ember77" class="ember-view reader-text-block__paragraph">Below is a look at the <strong>agentic AI risk landscape</strong>, framed not as a reason to slow down innovation, but as guidance for how to scale it responsibly.</p>
<h3 id="ember78" class="ember-view reader-text-block__paragraph"><strong>Familiar Risks Amplified by Autonomy</strong></h3>
<p id="ember79" class="ember-view reader-text-block__paragraph">At their core, agentic systems still face the same foundational security risks as any digital platform: <strong>confidentiality, integrity, and availability</strong>. The difference is the scale and speed at which agents can amplify threats in these areas:</p>
<ul>
<li><strong>Confidentiality:</strong> Agents often require broad access to data and systems to be effective. When well‑designed, this enables efficiency. When poorly governed, it can expose private or confidential data if an agent is misused or compromised.</li>
<li><strong>Integrity:</strong> Agentic systems don’t just explore information — they interpret it and act on it. If their inputs or logic are manipulated, the results can range from incorrect decisions to unauthorized actions.</li>
<li><strong>Availability:</strong> Because agents can operate continuously and autonomously <em>at internet speed</em>, errors can propagate much faster than in human‑driven workflows, potentially stressing systems or triggering outages. Self-inflicted denial of service is a real risk in a poorly governed agentic system.</li>
</ul>
<p id="ember81" class="ember-view reader-text-block__paragraph">And with agentic AI systems, we add one more risk area to the threat landscape:</p>
<ul>
<li><strong>Accountability:</strong> AI agents act on behalf of the business. Just as leaders are responsible for the actions of their employees, they are responsible for the actions of their agents. Clear ownership for every action, every decision, and every outcome must be established, managed, and enforced.</li>
</ul>
<p id="ember83" class="ember-view reader-text-block__paragraph">Of course, none of these are reasons to avoid implementing agentic AI. They are reminders that traditional controls need to evolve alongside agentic autonomy.</p>
<h3 id="ember84" class="ember-view reader-text-block__paragraph"><strong>AI‑Specific Risks That Leaders Should Understand</strong></h3>
<p id="ember85" class="ember-view reader-text-block__paragraph">In addition to the traditional threats that Agentic AI amplifies, it also introduces a few risks that are new — or at least newly visible — to business leaders.</p>
<ul>
<li><strong>Data poisoning:</strong> If training or reference data is compromised, whether intentionally or not, agent behavior can subtly degrade over time, producing hallucinations that may creep into previously trusted data.</li>
<li><strong>Prompt and instruction manipulation:</strong> Agents are intrinsically trusting of their input. Poorly protected agents can be influenced by carefully crafted inputs that override intended constraints or attempt to alter agent behavior.</li>
<li><strong>Behavioral drift:</strong> Over time, agents may find “creative” ways to achieve objectives that technically meet stated goals but miss the spirit of business intent. Long-term monitoring and adjustment of agent behavior is a must-have for a production system.</li>
<li><strong>Supply chain attacks:</strong> Agents rely on plugins, APIs, models, and third‑party libraries to accomplish their goals. If any component in that chain is compromised, the results can be far-reaching. For instance, the agent can become an attacker’s beachhead into the enterprise, malicious logic can be injected directly into the agent’s toolset, and compromised agents can then silently manipulate the behavior of their workflows.</li>
</ul>
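<p>As one illustration of defending against prompt and instruction manipulation, the minimal sketch below gates every tool call an agent proposes against an explicit allowlist before executing it. All names here (<code>ToolCall</code>, <code>ALLOWED_TOOLS</code>, <code>execute</code>) are illustrative assumptions, not part of any particular agent framework.</p>

```python
# Sketch: gate agent tool calls behind an explicit allowlist.
# ToolCall, ALLOWED_TOOLS, and execute are illustrative names only.
from dataclasses import dataclass


@dataclass
class ToolCall:
    tool: str
    args: dict


# Only tools on this list may run, regardless of what the prompt asked for.
ALLOWED_TOOLS = {"search_knowledge_base", "create_draft_invoice"}


def execute(call: ToolCall) -> str:
    if call.tool not in ALLOWED_TOOLS:
        # Refuse and surface the attempt for review instead of acting on it.
        return f"BLOCKED: '{call.tool}' is not an approved tool"
    return f"EXECUTED: {call.tool}"


print(execute(ToolCall("delete_all_records", {})))  # the call is blocked
```

<p>Allowlisting does not stop every manipulation, but it bounds what a hijacked prompt can make an agent do.</p>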
<p id="ember87" class="ember-view reader-text-block__paragraph">It’s important to note that these threats often target business and financial decision processes, not just IT infrastructure and data. That’s why mitigating them depends on business owners’ attention, not just on technical controls from IT.</p>
<h3 id="ember88" class="ember-view reader-text-block__paragraph"><strong>Speed Is a Competitive Advantage — and a Responsibility</strong></h3>
<p id="ember89" class="ember-view reader-text-block__paragraph">One of the most compelling benefits of agentic AI is speed. Agents don’t wait, hesitate, or get distracted — they execute at a dizzying pace.</p>
<p id="ember90" class="ember-view reader-text-block__paragraph">But speed cuts both ways.</p>
<p id="ember91" class="ember-view reader-text-block__paragraph">An agent can:</p>
<ul>
<li>Act faster than humans can intervene</li>
<li>Chain decisions together across systems</li>
<li>Use force multiplication to transform small configuration errors into large outcomes</li>
</ul>
<p id="ember93" class="ember-view reader-text-block__paragraph">This doesn’t mean agents are inherently dangerous. It means organizations need <strong>clear boundaries, escalation paths, and mechanisms to halt improper activities</strong> — just as they would for any mission critical operational capability.</p>
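<p>One concrete form such a halt mechanism can take is a rate-based circuit breaker that trips when an agent acts faster than its budget allows and requires human escalation before continuing. This is a minimal sketch; the class name, thresholds, and window are illustrative assumptions.</p>

```python
# Sketch: a sliding-window circuit breaker that halts an agent once it
# exceeds an action budget. Thresholds here are illustrative assumptions.
import time


class CircuitBreaker:
    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = []

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Keep only the actions that fall inside the sliding window.
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) >= self.max_actions:
            return False  # tripped: halt and escalate to a human
        self.timestamps.append(now)
        return True


breaker = CircuitBreaker(max_actions=3, window_seconds=60)
# Five actions in five "seconds": the first three pass, then the breaker trips.
results = [breaker.allow(now=i) for i in range(5)]
```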
<h3 id="ember94" class="ember-view reader-text-block__paragraph"><strong>Aligning AI Objectives with Business Intent</strong></h3>
<p id="ember95" class="ember-view reader-text-block__paragraph">Agentic systems are goal‑driven. They optimize what they are asked to achieve literally — not what leaders <em>intended</em>.</p>
<p id="ember96" class="ember-view reader-text-block__paragraph">For example:</p>
<ul>
<li>A cost‑reduction agent could sacrifice customer experience to achieve its goal of saving money on a transaction.</li>
<li>A productivity agent may bypass governance controls to save processing time, unaware of why those controls exist or the impact of non-compliance.</li>
<li>A sales agent may unintentionally oversell or misconfigure a sale, much the way a junior salesperson could.</li>
</ul>
<p id="ember98" class="ember-view reader-text-block__paragraph">This isn’t failure. It’s misalignment.</p>
<p id="ember99" class="ember-view reader-text-block__paragraph">Successful organizations treat the definition of agent objectives as a leadership responsibility, not a configuration task. The clearer the intent and directions, the safer — and more valuable — the outcome.</p>
<h3 id="ember100" class="ember-view reader-text-block__paragraph"><strong>Governance That Enables Innovation</strong></h3>
<p id="ember101" class="ember-view reader-text-block__paragraph">A common concern is that onerous governance of agents will slow AI adoption and the realization of the resulting benefits. In practice, the opposite is true.</p>
<p id="ember102" class="ember-view reader-text-block__paragraph">Strong agent governance:</p>
<ul>
<li>Increases executive and shareholder confidence</li>
<li>Enables faster scaling in production</li>
<li>Reduces the risk of high‑profile failures and the unwanted attention they attract</li>
<li>Builds trust with regulators, customers, and employees</li>
</ul>
<p id="ember104" class="ember-view reader-text-block__paragraph">The most effective organizations move from periodic oversight to continuous control of agentic operations — monitoring behaviors, not just outcomes.</p>
<p id="ember105" class="ember-view reader-text-block__paragraph">Here is the elevator statement: When governance is built in, innovation accelerates.</p>
<h3 id="ember106" class="ember-view reader-text-block__paragraph"><strong>The Takeaway</strong></h3>
<p id="ember107" class="ember-view reader-text-block__paragraph">Agentic AI represents a genuine opportunity to rethink how work gets done. Organizations that approach it thoughtfully can unlock meaningful gains in efficiency, quality, and speed.</p>
<p id="ember108" class="ember-view reader-text-block__paragraph">The goal of understanding the threat landscape of agentic AI is not to eliminate risk — that’s neither realistic nor necessary. Instead, the aim is to understand the risks well enough to manage them, so the autonomy of agentic AI systems becomes a competitive advantage rather than a liability.</p>
<p id="ember109" class="ember-view reader-text-block__paragraph">Leaders who strike this balance will find that agentic AI isn’t something to fear, it’s a new way to lead.</p>
<p id="ember110" class="ember-view reader-text-block__paragraph"><em>Our next article will provide a blueprint for how to design secure agentic AI systems from the ground up, using practical architectural principles and proven risk frameworks.</em></p>
</div>
</div>
</div>
</div>
</div>
</article>
</div>
<p>The post <a href="https://www.centrilogic.com/agentic-ai-enterprise-risks/">Agentic AI is Coming to the Enterprise: Part 2 &#8211; Threat Assessment</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Agentic AI Is Coming to the Enterprise — Are You Ready?</title>
		<link>https://www.centrilogic.com/agentic-ai-enterprise-readiness/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Mon, 23 Feb 2026 20:09:31 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1933</guid>

					<description><![CDATA[<p>This is the first in a four-part series about operations and security in the agentic AI world. It focuses on how agentic AI is no longer a future concept. As autonomous systems begin to plan, decide, and act inside the enterprise, organizations must rethink security, governance, and accountability.</p>
<p>The post <a href="https://www.centrilogic.com/agentic-ai-enterprise-readiness/">Agentic AI Is Coming to the Enterprise — Are You Ready?</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This article was written by <a href="https://www.linkedin.com/in/waltlsmith/" target="_blank" rel="noopener">Walter Smith</a>, our VP Application Development Practice. It was first published on <a href="https://www.linkedin.com/pulse/agentic-ai-coming-enterprise-you-ready-walt-smith-qeqee/" target="_blank" rel="noopener">LinkedIn</a>.</p>
<p><span data-teams="true">To read part 2, titled </span><span data-scaffold-immersive-reader-title="">Agentic AI is Coming to the Enterprise: Part 2 &#8211; Threat Assessment</span><span data-teams="true">, visit <a href="https://www.centrilogic.com/agentic-ai-enterprise-risks">https://www.centrilogic.com/agentic-ai-enterprise-risks</a>.</span></p>
<h2 class="reader-article-header__title" dir="ltr"><span data-scaffold-immersive-reader-title="">Agentic AI Is Coming to the Enterprise — Are You Ready?</span></h2>
<p id="ember72" class="ember-view reader-text-block__paragraph"><em>This is the first in a four-part series about operations and security in the agentic AI world. It is intended for informational purposes only. Please engage an AI professional before implementing agentic AI.</em></p>
<p id="ember73" class="ember-view reader-text-block__paragraph">In case you looked away for a moment, you might not have noticed that agentic AI is no longer a far-off, futuristic concept. It’s here today, rapidly making its way into enterprises of all sizes. Unlike simpler, generative AI models that respond to a single prompt and then stop and wait for their next task, agentic AI systems can plan and execute multi‑step tasks, invoke external tools to do work, integrate seamlessly with other business systems, and, most importantly, act autonomously to reach a goal. As my colleague says, “Agentic AI doesn’t just tell you what to do — it actually does it.”</p>
<p id="ember74" class="ember-view reader-text-block__paragraph">And from a security perspective, that shift changes everything.</p>
<h2 id="ember75" class="ember-view reader-text-block__heading-2">AI Autonomy Creates Risk</h2>
<p id="ember76" class="ember-view reader-text-block__paragraph">Agentic AI introduces additional threats to the business that generative AI operational frameworks were not designed to address:</p>
<ul>
<li><strong>Unauthorized operations:</strong> An agent with system access can perform actions directly — including harmful ones — if compromised or simply inadequately trained or constrained.</li>
<li><strong>Goal drift:</strong> Agents can creatively pursue perceived objectives in unintended ways and cause unintended collateral damage.</li>
<li><strong>Adversarial manipulation:</strong> Malicious prompts or poisoned data can redirect an agent toward malicious behavior in the production environment.</li>
<li><strong>Integration exploitation:</strong> Because agents often interact directly with APIs, tools, and enterprise systems, they expand the organization’s security attack surface.</li>
<li><strong>Memory poisoning: </strong>Persistent agent memory can become a vector for misinformation, bias, or manipulation.</li>
<li><strong>Accountability gaps: </strong>If an autonomous agent performs a harmful action, it may be unclear who is responsible and accountable for those acts.</li>
</ul>
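<p>One way to start closing accountability gaps is to refuse any agent action that lacks a named accountable owner, and to record who, what, and when for every action that does run. The sketch below uses illustrative names (<code>run_action</code>, <code>audit_log</code>) and is not a prescribed implementation.</p>

```python
# Sketch: refuse agent actions that lack an accountable human owner,
# and keep an audit trail of everything that does run.
# run_action and audit_log are illustrative names only.
from datetime import datetime, timezone

audit_log = []


def run_action(agent_id, owner, action):
    if not owner:
        return False  # no accountable owner on record: do not act
    audit_log.append({
        "agent": agent_id,
        "owner": owner,  # the human accountable for this action
        "action": action,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return True


run_action("claims-bot-1", "jane.doe@example.com", "file_claim #4812")
run_action("claims-bot-1", "", "delete_claim #4812")  # refused: no owner
```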
<p id="ember78" class="ember-view reader-text-block__paragraph">Just like humans, the very autonomy that makes agentic AI so powerful also makes it vulnerable to attack and, without proper support and controls, potentially dangerous. The enterprise must treat AI as it treats any worker in the organization, and plan ahead for the potential vulnerabilities those workers may introduce.</p>
<h2 id="ember79" class="ember-view reader-text-block__heading-2">Existing Security Frameworks Aren’t Enough</h2>
<p id="ember80" class="ember-view reader-text-block__paragraph">Standards like the NIST AI Risk Management Framework and ISO/IEC 42001 give broad guidance on managing AI risk. But they weren’t built for autonomous systems capable of initiating actions, chaining decisions, or using tools on their own.</p>
<p id="ember81" class="ember-view reader-text-block__paragraph">Adopting agentic AI requires the enterprise to also adopt:</p>
<ul>
<li>New architectural safeguards that design security and integrity directly into agentic systems.</li>
<li>New runtime monitoring approaches to catch and prevent errors and breaches proactively.</li>
<li>New responsibility and accountability models that merge AI and human activities.</li>
<li>New threat categories directly related to AI specific activities.</li>
<li>New lifecycle controls to prevent the introduction of threats or vulnerabilities into change pipelines, which are much more dynamic in an AI environment.</li>
</ul>
<p id="ember83" class="ember-view reader-text-block__paragraph">The existing security frameworks at most organizations simply aren’t prepared to deploy agentic AI into the production environment, and the opportunity to have your business processes compromised is very real.</p>
<h2 id="ember84" class="ember-view reader-text-block__heading-2">The Path Forward</h2>
<p id="ember85" class="ember-view reader-text-block__paragraph">While all this might sound daunting, deploying AI into your organization is no different from deploying any other human- or system-based operational or decision-support technology: there is a framework and a set of steps to follow. At a high level, organizations that want to safely capture agentic AI’s benefits must:</p>
<ul>
<li>Understand the new threat landscape that agentic AI brings</li>
<li>Architect and design your autonomous AI systems with proper guardrails</li>
<li>Shift from “model‑centric” to “agent‑centric” governance</li>
<li>Build accountability, explainability, and oversight into every layer of the agentic system</li>
<li>Treat agentic AI as a living system that evolves — and must be monitored and adjusted continuously</li>
</ul>
<p id="ember87" class="ember-view reader-text-block__paragraph">The agentic AI future is coming fast, and securing it properly could be the difference between a great success and a hasty, costly retreat.</p>
<p id="ember88" class="ember-view reader-text-block__paragraph"><em>In the next article, we’ll explore the full threat landscape of agentic AI and break down the categories of risk executives must understand before deploying autonomous systems.</em></p>
<p>The post <a href="https://www.centrilogic.com/agentic-ai-enterprise-readiness/">Agentic AI Is Coming to the Enterprise — Are You Ready?</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>OneLake Strategy: How to Treat Data Like a Product (Not a Project)</title>
		<link>https://www.centrilogic.com/onelake-strategy-data-as-a-product/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Wed, 18 Feb 2026 14:51:08 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1931</guid>

					<description><![CDATA[<p>This is the second post in the Fabric Operating Model Article Series, and focuses on why OneLake only becomes a multiplier when ownership, semantics, and trust become default.</p>
<p>The post <a href="https://www.centrilogic.com/onelake-strategy-data-as-a-product/">OneLake Strategy: How to Treat Data Like a Product (Not a Project)</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This article was written by <a href="https://www.linkedin.com/in/aliuzair/" target="_blank" rel="noopener">Ali Uzair</a>, our Senior Manager of Data &amp; Analytics. It was first published on <a href="https://www.linkedin.com/pulse/onelake-strategy-how-treat-data-like-product-project-ali-uzair-ekxbc/?trackingId=rKzMk4MG5KcPaSQgmijP9Q%3D%3D" target="_blank" rel="noopener">LinkedIn</a>.</p>
<p>Build once. Use many times.</p>
<h3>Fabric Operating Model Series (Post 2): Why OneLake only becomes a multiplier when ownership, semantics, and trust become default.</h3>
<p>I’m still riding the high of Arsenal reaching a cup final. And it’s a useful reminder for this topic: you don’t get there on hero moments alone. You get there because the system holds up week after week, even when the game gets messy. That’s basically the difference between building data like a project and building it like a product.</p>
<p>This is the OneLake post I teased in my last article, focused on what it actually takes to make reuse and trust scale.</p>
<p>Most organizations don’t have a data problem. They have a product problem. They build data like a project: one-off pipelines, one-off models, one-off dashboards. Success gets measured by delivery, not adoption. Then the next team rebuilds the same thing because it’s faster than figuring out what already exists.</p>
<p>Treating data like a product is the shift from “we delivered it” to “it’s used, trusted, governed, and improving.” And OneLake, done right, makes that shift easier because it nudges teams toward shared foundations and reuse instead of copy-and-customize.</p>
<h3>What “data as a product” actually means</h3>
<p>A data product is not a dataset with a fancy name. It’s a business capability with standards.</p>
<p>A useful definition is simple: A data product is a governed, reusable, trusted asset with clear ownership and measurable expectations.</p>
<p>That means every serious data product has:</p>
<ul>
<li><strong>An owner:</strong> Not “the data team.” An accountable domain owner who can answer, “Is this right?” and “What changed?”</li>
<li><strong>A consumer:</strong> If nobody consumes it, it’s not a product. It’s inventory.</li>
<li><strong>A contract:</strong> What it contains, how it’s defined, how fresh it is, and what “quality” means.</li>
<li><strong>A lifecycle:</strong> Versioning, change management, deprecation. Businesses change. Products need to keep up.</li>
<li><strong>A support model:</strong> When something breaks, who fixes it, and how fast?</li>
</ul>
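<p>To make this concrete, a data product contract can be captured as a small, checkable structure rather than a document nobody reads. The field names and example values below are illustrative assumptions, not a standard schema.</p>

```python
# Sketch: a data product contract as a checkable structure.
# Field names and example values are illustrative, not a standard schema.
from dataclasses import dataclass


@dataclass
class DataProductContract:
    name: str
    owner: str            # accountable domain owner, not "the data team"
    consumers: list       # if empty, it's inventory, not a product
    definitions: dict     # e.g. what "net revenue" means here
    max_staleness_hours: int
    quality_checks: list  # named checks the product must pass

    def is_viable(self):
        # A product needs an accountable owner and at least one consumer.
        return bool(self.owner) and bool(self.consumers)


contract = DataProductContract(
    name="finance.net_revenue",
    owner="finance-domain@contoso.example",
    consumers=["exec-reporting", "revops"],
    definitions={"net_revenue": "gross revenue minus returns and discounts"},
    max_staleness_hours=24,
    quality_checks=["row_count_nonzero", "no_null_revenue"],
)
```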
<p>This is where many programs quietly fail. They ship assets, not products.</p>
<h3>Why OneLake is strategic (and not just “storage”)</h3>
<p>OneLake matters because it supports a different default behavior: build once, use many times.</p>
<p>When the operating model is “copy data to make it usable,” you get:</p>
<ul>
<li>duplicated truth</li>
<li>duplicated cost</li>
<li>duplicated governance work</li>
<li>duplicated failure modes</li>
</ul>
<p>A OneLake mindset doesn’t magically eliminate duplication. But it creates the possibility of shared access patterns and consistent governance without rebuilding the foundation for every team and every use case. That matters because reuse is where the ROI lives.</p>
<h3>The three disciplines that separate a “nice lake” from a real strategy</h3>
<p>If you want OneLake to become an advantage, focus on these three disciplines. They’re not glamorous, but they’re decisive.</p>
<p><strong>1) Ownership (who is accountable):</strong> Domain ownership with platform guardrails. This is how you scale without creating a central bottleneck.</p>
<p><strong>2) Semantics (what things mean):</strong> This is the part most teams skip, and the part AI will punish the hardest.</p>
<p>If “net revenue” means five different things, the platform is not the issue. The meaning is. We’ve seen this play out in the wild: Finance defines net revenue after returns and discounts, Sales reports bookings, RevOps blends pipeline with recognized revenue, and someone else quietly excludes a region “for consistency.” All of them have a rationale. None of them are the same metric. And it usually gets discovered when the exec deck is already out.</p>
<p>This is the data equivalent of a casual back pass that gets intercepted. It feels harmless until it ends up in the net during an exec review. Semantics is what turns raw data into decision-grade data.</p>
<p><strong>3) Trust (how you prove it’s reliable):</strong> Quality signals, lineage, certification, freshness expectations, access controls. Trust that’s designed into the system, not trust that depends on escalations and late-night triage.</p>
<h3>A practical way to start (without boiling the ocean)</h3>
<p>Pick one domain where the business already cares about outcomes (finance, sales, supply chain, customer ops). Then define one high-value data product.</p>
<p>Not ten. One.</p>
<p>Trying to launch ten data products at once is how you score an own goal. Everyone is busy, nobody is aligned, and the basics get missed. Make it boringly clear:</p>
<ul>
<li>Owner</li>
<li>Consumers</li>
<li>Definitions (semantic model)</li>
<li>Refresh expectations</li>
<li>Quality checks</li>
<li>Access pattern</li>
</ul>
<p>You’ll learn more from one real product than a hundred pages of “framework.” And yes, this is the part where someone will ask, “Do we really need all this?” <strong>You don’t need it for a demo. You need it for scale.</strong></p>
<h3>The operating model that makes OneLake pay off</h3>
<p>If OneLake is on your roadmap, don’t treat it as a storage story. Treat it as a product operating model. The difference between “promising” and “scaled” usually comes down to a few shifts:</p>
<ul>
<li><strong>Measure adoption, not delivery:</strong> Reuse rate, consumer satisfaction, freshness &amp; quality SLAs, and time-to-trusted-data matter more than counts of pipelines or dashboards.</li>
<li><strong>Treat semantics as strategic:</strong> AI runs on meaning. If definitions drift, trust in automation breaks fast.</li>
<li><strong>Bake trust into defaults:</strong> Lineage, certification, access patterns, and lifecycle management should ship with the work, not show up after.</li>
</ul>
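<p>“Measure adoption, not delivery” can be made concrete with a couple of simple metrics, such as reuse rate and freshness-SLA compliance. The usage-record shape below is an illustrative assumption.</p>

```python
# Sketch: reuse rate and freshness-SLA compliance from usage records.
# The record shape here is an illustrative assumption.
records = [
    {"product": "finance.net_revenue", "consumer": "exec-reporting", "staleness_hours": 6},
    {"product": "finance.net_revenue", "consumer": "revops", "staleness_hours": 6},
    {"product": "sales.pipeline", "consumer": "revops", "staleness_hours": 30},
]


def reuse_rate(records):
    # Share of products consumed by more than one distinct team.
    consumers = {}
    for r in records:
        consumers.setdefault(r["product"], set()).add(r["consumer"])
    reused = sum(1 for c in consumers.values() if len(c) > 1)
    return reused / len(consumers)


def sla_compliance(records, max_staleness_hours):
    # Share of reads that met the freshness expectation.
    ok = sum(1 for r in records if r["staleness_hours"] <= max_staleness_hours)
    return ok / len(records)


print(reuse_rate(records))          # 0.5: one of the two products is reused
print(sla_compliance(records, 24))  # two of three reads were fresh enough
```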
<p>OneLake is not the answer to everything. It scales what you standardize. If ownership and semantics are fuzzy, you won’t scale value. You’ll scale inconsistency.</p>
<p>That’s the takeaway: OneLake isn’t the strategy. Data products are the strategy. Ownership, semantics, and trust built on shared foundations make reuse the default, and help teams spend more time making decisions than reconciling numbers.</p>
<p>The post <a href="https://www.centrilogic.com/onelake-strategy-data-as-a-product/">OneLake Strategy: How to Treat Data Like a Product (Not a Project)</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>From Hype to Outcomes: The Agentic AI Advantage Webinar (Recording)</title>
		<link>https://www.centrilogic.com/from-hype-to-outcomes-agentic-ai-advantage-webinar/</link>
		
		<dc:creator><![CDATA[Denise Faustino]]></dc:creator>
		<pubDate>Fri, 13 Feb 2026 17:08:16 +0000</pubDate>
				<category><![CDATA[Event]]></category>
		<category><![CDATA[General]]></category>
		<category><![CDATA[On-Demand]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1928</guid>

					<description><![CDATA[<p>Join us for an informative webinar that cuts through AI hype and focuses on real outcomes. See what works, why it pays off, and how to scale safely. </p>
<p>The post <a href="https://www.centrilogic.com/from-hype-to-outcomes-agentic-ai-advantage-webinar/">From Hype to Outcomes: The Agentic AI Advantage Webinar (Recording)</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class=" wp-image-1939 aligncenter" src="https://www.centrilogic.com/wp-content/uploads/2026/02/CEN-ON-DEMAND-On-Demand-size-1280.jpg" alt="From Hype to Outcomes - The Agentic AI Advantage" width="397" height="397" srcset="https://www.centrilogic.com/wp-content/uploads/2026/02/CEN-ON-DEMAND-On-Demand-size-1280.jpg 800w, https://www.centrilogic.com/wp-content/uploads/2026/02/CEN-ON-DEMAND-On-Demand-size-1280-300x300.jpg 300w, https://www.centrilogic.com/wp-content/uploads/2026/02/CEN-ON-DEMAND-On-Demand-size-1280-150x150.jpg 150w, https://www.centrilogic.com/wp-content/uploads/2026/02/CEN-ON-DEMAND-On-Demand-size-1280-768x768.jpg 768w, https://www.centrilogic.com/wp-content/uploads/2026/02/CEN-ON-DEMAND-On-Demand-size-1280-750x750.jpg 750w" sizes="auto, (max-width: 397px) 100vw, 397px" /></p>
<p>The post <a href="https://www.centrilogic.com/from-hype-to-outcomes-agentic-ai-advantage-webinar/">From Hype to Outcomes: The Agentic AI Advantage Webinar (Recording)</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Business Automation at Scale with Agentic AI &#8211; Multi-Agent AI Demo (Recording)</title>
		<link>https://www.centrilogic.com/business-automation-at-scale-with-agentic-ai-multi-agent-ai-demo/</link>
		
		<dc:creator><![CDATA[Matt Callahan]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 20:14:18 +0000</pubDate>
				<category><![CDATA[General]]></category>
		<category><![CDATA[On-Demand]]></category>
		<guid isPermaLink="false">https://www.centrilogic.com/?p=1925</guid>

					<description><![CDATA[<p>Watch a demo of a fully automated multi‑agent voice workflow that authenticates callers, gathers structured details, and files insurance claims. </p>
<p>The post <a href="https://www.centrilogic.com/business-automation-at-scale-with-agentic-ai-multi-agent-ai-demo/">Business Automation at Scale with Agentic AI &#8211; Multi-Agent AI Demo (Recording)</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In this on‑demand session, <em>Shane Castle</em>, Managing Director of AI at <strong>Centrilogic</strong>, walks through how companies can move beyond AI pilots to scalable, governed agentic AI production systems using Microsoft&#8217;s AI technologies.</p>
<p>&nbsp;</p>
<h2><span style="color: #f59c00;">Multi-Agent AI Demo &amp; Context:</span></h2>
<p>The demo showcases a multi-agent, voice-based claims experience for an insurance company handling a car accident claim. One agent authenticates the caller and identifies intent; a claims specialist agent gathers details; police and claims agents validate coverage and file a claim end-to-end. In this call-center scenario, identity verification and after-call summarization and labeling can reduce handle time by 30-40% per call.</p>
<p>The post <a href="https://www.centrilogic.com/business-automation-at-scale-with-agentic-ai-multi-agent-ai-demo/">Business Automation at Scale with Agentic AI &#8211; Multi-Agent AI Demo (Recording)</a> appeared first on <a href="https://www.centrilogic.com">Centrilogic</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
