<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>EdgeIR.com - Digital Infrastructure News</title>
	<atom:link href="https://www.edgeir.com/feed" rel="self" type="application/rss+xml"/>
	<link>https://www.edgeir.com</link>
	<description>Daily news, insights and contributed articles in the digital infrastructure ecosystem. </description>
	<lastBuildDate>Wed, 08 Apr 2026 22:09:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">170853209</site>	<item>
		<title>Lambda doubles down on NVIDIA stack with 10,000+ Blackwell GPUs and CPO networking push</title>
		<link>https://www.edgeir.com/lambda-doubles-down-on-nvidia-stack-with-10000-blackwell-gpus-and-cpo-networking-push-20260410</link>
					<comments>https://www.edgeir.com/lambda-doubles-down-on-nvidia-stack-with-10000-blackwell-gpus-and-cpo-networking-push-20260410#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 10:00:12 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[GPUs]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162978</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs-.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Lambda NVIDIA Blackwell GPUs" decoding="async" fetchpriority="high" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs-.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs--300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs--1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs--150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs--768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/lambda"><span style="font-weight: 400;">Lambda</span></a><span style="font-weight: 400;"> recently announced it’s becoming a launch partner for the NVIDIA Vera CPU platform and NVIDIA STX.</span>

<span style="font-weight: 400;">The GPU-native AI infrastructure provider will deploy NVIDIA Quantum-X800 InfiniBand photonics co-packaged optics in an </span><a href="https://www.edgeir.com/what-are-ai-factories-20250804"><span style="font-weight: 400;">AI factory</span></a><span style="font-weight: 400;"> with 10,000+ NVIDIA Blackwell Ultra GPUs. </span>

<span style="font-weight: 400;">Lambda's bare metal instances have made their way out of the lab and into its core cloud offering, giving users direct access to hardware while avoiding virtualization overhead for distributed AI training workloads.</span>

<span style="font-weight: 400;">Designed for launching thousands of parallel AI environments, the </span><a href="https://www.nvidia.com/en-us/data-center/vera-cpu/"><span style="font-weight: 400;">NVIDIA Vera CPU platform</span></a><span style="font-weight: 400;"> delivers high memory bandwidth that optimizes reinforcement learning and agentic AI workloads.</span>

<span style="font-weight: 400;">The NVIDIA STX is a modular architecture for AI storage that augments inference, analytics, and training with next-generation, hardware-optimized KV-cache management.</span>

<span style="font-weight: 400;">Co-Packaged Optics (CPO) networking enables faster, cost-efficient AI infrastructure suitable for large-scale AI factories, alleviating major efficiency bottlenecks found in current approaches.</span>

<span style="font-weight: 400;">“The race to build AI factories isn’t won on GPU counts alone,” says Dave Salvator, director of accelerated computing at NVIDIA. “Network architecture is what determines whether those systems can perform at scale. Getting this right is what allows AI infrastructure to power services used by hundreds of millions of people around the world.”</span>

<span style="font-weight: 400;">Lambda oversees one of the largest deployments of NVIDIA Quantum-X800 CPO switches, highlighting how critical network architecture is when scaling AI systems.</span>

<span style="font-weight: 400;">These announcements further bolster Lambda's AI infrastructure platform, which empowers frontier labs, enterprises, and hyperscalers with proven and energy-efficient workhorses built for reliability at scale.</span>

<span style="font-weight: 400;">Lambda continues its mission to make AI compute ubiquitous, leveraging a decade-long collaboration with NVIDIA to advance its Superintelligence Cloud platform.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/lambda-doubles-down-on-nvidia-stack-with-10000-blackwell-gpus-and-cpo-networking-push-20260410/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162978</post-id>	</item>
		<item>
		<title>DDN and Zadara target sovereign AI deployments with multi-tenant NVIDIA factory stack</title>
		<link>https://www.edgeir.com/ddn-and-zadara-target-sovereign-ai-deployments-with-multi-tenant-nvidia-factory-stack-20260409</link>
					<comments>https://www.edgeir.com/ddn-and-zadara-target-sovereign-ai-deployments-with-multi-tenant-nvidia-factory-stack-20260409#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 10:00:08 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[cloud infrastructure]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162974</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Zadara + DDN" decoding="async" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN-768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.ddn.com/"><span style="font-weight: 400;">DDN</span></a><span style="font-weight: 400;"> and </span><a href="https://www.edgeir.com/companies/zadara"><span style="font-weight: 400;">Zadara</span></a><span style="font-weight: 400;"> announced a partnership to deliver high-performance AI infrastructure for </span><a href="https://www.edgeir.com/infrastructure-directory/cloud-sovereign"><span style="font-weight: 400;">sovereign clouds</span></a><span style="font-weight: 400;"> and multi-tenant AI factories, leveraging NVIDIA reference designs. </span>

<span style="font-weight: 400;">The integration combines the </span><a href="https://www.ddn.com/products/exascaler-cloud/"><span style="font-weight: 400;">DDN EXAScaler</span></a><span style="font-weight: 400;"> AI data platform with Zadara’s cloud-native, AI-optimized infrastructure for scalable, secure, and efficient AI deployments.</span>

<span style="font-weight: 400;">The solution addresses critical enterprise AI challenges, such as infrastructure complexity, GPU performance, compliance and multi-tenant governance, that slow the deployment of AI.</span>

<span style="font-weight: 400;">“NVIDIA reference designs are accelerating the broad adoption of AI factories. Yet organizations need a cloud-native platform that can operationalize them simply and efficiently in real-world, multi-tenant environments while complying with data sovereignty regulations,” says Yoram Novick, CEO at Zadara. “DDN EXAScaler brings the high-performance AI data foundation needed to meet demanding sovereign and enterprise requirements while Zadara delivers the orchestration, isolation, and policy control required to deploy AI infrastructure quickly, securely, and efficiently.”</span>

<span style="font-weight: 400;">Zadara’s platform handles AI infrastructure operations, delivering performance, compliance and tenant isolation in accordance with NVIDIA best practices.</span>

<span style="font-weight: 400;">This partnership powers NVIDIA-enabled </span><a href="https://www.edgeir.com/what-are-ai-factories-20250804"><span style="font-weight: 400;">AI factories</span></a><span style="font-weight: 400;"> for secure multi-tenancy, predictable performance and policy-based orchestration with rapid time to market.</span>

<span style="font-weight: 400;">The integrated solution enables sovereign AI and multi-tenant environments with high throughput, GPU-aware scheduling, and compliance controls.</span>

<span style="font-weight: 400;">This initiative aims to make enterprise AI more accessible, affordable and practical by minimizing operational complexity while improving time-to-value.</span>

<span style="font-weight: 400;">The partnership between DDN and Zadara reinforces the role of AI factories as a cornerstone for enterprises, telcos, and service providers.</span>
		
					<wfw:commentRss>https://www.edgeir.com/ddn-and-zadara-target-sovereign-ai-deployments-with-multi-tenant-nvidia-factory-stack-20260409/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162974</post-id>	</item>
		<item>
		<title>Premio targets multi-camera edge AI with new Jetson Orin systems</title>
		<link>https://www.edgeir.com/premio-targets-multi-camera-edge-ai-with-new-jetson-orin-systems-20260408</link>
					<comments>https://www.edgeir.com/premio-targets-multi-camera-edge-ai-with-new-jetson-orin-systems-20260408#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 10:00:45 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[edge AI]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<category><![CDATA[Premio]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162989</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Premio JCO Series" decoding="async" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series-768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">Rugged edge AI and embedded computing provider </span><a href="https://www.edgeir.com/companies/premio-inc"><span style="font-weight: 400;">Premio</span></a><span style="font-weight: 400;"> debuted two new models in its </span><a href="https://premioinc.com/collections/jco-1000-orn-series-ai-edge-computer?source=EIN&amp;campaign=03_2026_JCO1000ORN_Launch"><span style="font-weight: 400;">JCO-Series</span></a><span style="font-weight: 400;"> of rugged NVIDIA Jetson Orin </span><a href="https://www.edgeir.com/what-is-edge-ai-and-what-is-it-used-for-20250321"><span style="font-weight: 400;">edge AI</span></a><span style="font-weight: 400;"> computers at ISC West 2026.</span>

<span style="font-weight: 400;">Designed for next-gen vision AI applications, these models support up to four GMSL2 cameras, powering multi-camera deployments in industries including security, transportation and smart infrastructure.</span>

<span style="font-weight: 400;">“As AI-driven vision systems continue to transform industries such as security, transportation, and smart infrastructure, organizations need reliable computing platforms capable of processing data directly at the edge," says Dustin Seetoo, VP of product marketing at Premio. “With the JCO-1000-ORN-B and JCO-1000-ORN-C, we are expanding the </span><a href="https://www.edgeir.com/premio-launches-rugged-jetson-orin-edge-computer-for-harsh-ai-deployments-20250923"><span style="font-weight: 400;">JCO-1000-ORN Series</span></a><span style="font-weight: 400;"> to support multi-camera AI deployments while maintaining the rugged performance required for real-world edge environments.”</span>

<span style="font-weight: 400;">The systems come equipped with NVIDIA Jetson Orin NX/Nano modules, which provide up to 157 TOPS of AI performance and ruggedized I/O for edge environments.</span>

<span style="font-weight: 400;">Highlights include dual LAN or rugged M12 LAN connectors, high-speed NVMe M.2 storage, 4G/5G and Wi-Fi support, CAN Bus, and an extended operating temperature range (-20°C to 55°C).</span>

<span style="font-weight: 400;">The systems have a small form factor, are vehicle-ready with 9–36VDC power input, and are certified to CE, FCC and UL global standards.</span>

<span style="font-weight: 400;">The new models will be available for order by mid-Q2 2026. </span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/premio-targets-multi-camera-edge-ai-with-new-jetson-orin-systems-20260408/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162989</post-id>	</item>
		<item>
		<title>Hosted.ai raises $19M to tackle GPU underutilization and reshape AI infrastructure economics</title>
		<link>https://www.edgeir.com/hosted-ai-raises-19m-to-tackle-gpu-underutilization-and-reshape-ai-infrastructure-economics-20260407</link>
					<comments>https://www.edgeir.com/hosted-ai-raises-19m-to-tackle-gpu-underutilization-and-reshape-ai-infrastructure-economics-20260407#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 22:42:11 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[M&A & Investment]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[GPUs]]></category>
		<category><![CDATA[Hosted.ai]]></category>
		<category><![CDATA[infrastructure investment]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163006</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Hosted.ai Seed Round" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">AI infrastructure startup </span><a href="https://www.edgeir.com/companies/hosted-ai"><span style="font-weight: 400;">hosted·ai</span></a><span style="font-weight: 400;"> secured $19M in seed funding to simplify and optimize GPU infrastructure for AI, led by Creandum with participation from other investors like Repeat VC and People Ventures. </span>

<span style="font-weight: 400;">The startup focuses on the inefficiencies of GPU infrastructure, including high costs, low utilization (average usage of just 40 percent) and limited access to regional suppliers.</span>

<span style="font-weight: 400;">hosted·ai's software stack includes: </span>

<span style="font-weight: 400;">hosted·ai: A </span><a href="https://www.edgeir.com/what-is-gpu-as-a-service-gpuaas-20250212"><span style="font-weight: 400;">GPUaaS</span></a><span style="font-weight: 400;"> platform improving GPU utilization by up to 5x, reducing costs, and enabling resource sharing. </span>

<a href="https://packet.ai/"><span style="font-weight: 400;">packet·ai</span></a><span style="font-weight: 400;">: A neocloud service leveraging optimized GPU infrastructure for competitive pricing. </span>

<a href="http://gpuaas.com"><span style="font-weight: 400;">GPUaaS.com</span></a><span style="font-weight: 400;">: A matchmaking service connecting enterprises with GPU providers for scalable solutions.</span>

<span style="font-weight: 400;">The company will target the “GPU waste” problem and give regional service providers a share of the AI infrastructure market.</span>

<span style="font-weight: 400;">“The GPU market has a waste problem, not a scarcity problem,” says Ditlev Bredahl, CEO of Hosted.ai. “We’ve spent 25 years building infrastructure software that makes service providers competitive and the GPU opportunity is the biggest we’ve seen. This funding lets us move faster: more platform, more partners, more regions. We’re building the operating system for the GPU economy, and this round puts us in a strong position to do exactly that.”</span>

<span style="font-weight: 400;">Some of the founding team members have previous experience working with infrastructure technologies at VMware, NVIDIA and XenSource.</span>

<span style="font-weight: 400;">Founded in 2024 and launched in 2025, hosted·ai is a global company offering products across the US, EMEA and Asia-Pacific.</span>

<span style="font-weight: 400;">Its mission is to revolutionize the economics of GPU computing so that AI infrastructure becomes affordable and accessible for developers and companies alike.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/hosted-ai-raises-19m-to-tackle-gpu-underutilization-and-reshape-ai-infrastructure-economics-20260407/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163006</post-id>	</item>
		<item>
		<title>NVIDIA and T-Mobile push AI-RAN to turn 5G networks into distributed edge compute platforms</title>
		<link>https://www.edgeir.com/nvidia-and-t-mobile-push-ai-ran-to-turn-5g-networks-into-distributed-edge-compute-platforms-20260407</link>
					<comments>https://www.edgeir.com/nvidia-and-t-mobile-push-ai-ran-to-turn-5g-networks-into-distributed-edge-compute-platforms-20260407#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 10:00:23 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[edge AI]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[T-Mobile]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162957</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155601/NVIDIA-and-T-Mobile-advance-AI-RAN.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="NVIDIA and T-Mobile advance AI-RAN" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155601/NVIDIA-and-T-Mobile-advance-AI-RAN.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155601/NVIDIA-and-T-Mobile-advance-AI-RAN-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155601/NVIDIA-and-T-Mobile-advance-AI-RAN-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155601/NVIDIA-and-T-Mobile-advance-AI-RAN-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155601/NVIDIA-and-T-Mobile-advance-AI-RAN-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/nvidia"><span style="font-weight: 400;">NVIDIA</span></a><span style="font-weight: 400;"> and T-Mobile are working with Nokia and a growing ecosystem of developers to deploy vision AI applications over distributed edge networks, using AI-RAN infrastructure and the NVIDIA Metropolis platform to turn wireless networks into platforms for </span><a href="https://www.edgeir.com/what-is-edge-ai-and-what-is-it-used-for-20250321"><span style="font-weight: 400;">edge AI</span></a><span style="font-weight: 400;"> computing.</span>

<span style="font-weight: 400;">T-Mobile, working with NVIDIA, has started piloting AI-RAN infrastructure to run edge AI workloads distributed across the network while maintaining robust 5G connectivity.</span>

<a href="https://build.nvidia.com/nvidia/video-search-and-summarization/blueprintcard"><span style="font-weight: 400;">NVIDIA Metropolis VSS 3 Blueprint</span></a><span style="font-weight: 400;"> accelerates reasoning video analytics through modular architecture, multimodal understanding and agentic search capabilities.</span>

<span style="font-weight: 400;">“Turning networks into distributed AI computing platforms to unlock the full potential of physical AI will require ultra-low latency and space time coherency at the network edge for billions of endpoints, and that's what we've built at T-Mobile,” says Srini Gopalan, chief executive officer of T-Mobile. “With the first nationwide 5G Standalone and 5G Advanced network, we are uniquely positioned to help power a future where intelligent systems don’t wait on the cloud but rely on intelligent networks that allow them to act in real time.”</span>

<span style="font-weight: 400;">Some of the use cases are smart city operations, automated inspections for utilities, vision-based facility management and real-time industrial safety.</span>

<span style="font-weight: 400;">T-Mobile’s full 5G standalone radio access network (RAN) enables the low-latency, secure and wide-area connectivity required to scale physical AI applications.</span>

<span style="font-weight: 400;">Leveraging NVIDIA's infrastructure, developers such as Siemens Energy, Levatas and Fogsphere are building AI agents capable of real-time action and predictive maintenance.</span>

<span style="font-weight: 400;">Together, this collaboration aims to turn 5G networks into distributed AI platforms that enable billions of devices to interact in real time.</span>

<span style="font-weight: 400;">T-Mobile also recently tapped Red Hat OpenShift to </span><a href="https://www.edgeir.com/t-mobile-taps-red-hat-openshift-to-streamline-edge-and-5g-cloud-operations-20250219"><span style="font-weight: 400;">streamline edge and 5G cloud operations</span></a><span style="font-weight: 400;">.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/nvidia-and-t-mobile-push-ai-ran-to-turn-5g-networks-into-distributed-edge-compute-platforms-20260407/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162957</post-id>	</item>
		<item>
		<title>Nscale moves into power with AIPCorp deal, building 8GW U.S. AI campus to bypass energy bottlenecks</title>
		<link>https://www.edgeir.com/nscale-moves-into-power-with-aipcorp-deal-building-8gw-u-s-ai-campus-to-bypass-energy-bottlenecks-20260406</link>
					<comments>https://www.edgeir.com/nscale-moves-into-power-with-aipcorp-deal-building-8gw-u-s-ai-campus-to-bypass-energy-bottlenecks-20260406#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 06:00:28 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[Nscale]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162960</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155905/Nscale-AIPCorp-Monarch.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Nscale + AIPCorp + Monarch" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155905/Nscale-AIPCorp-Monarch.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155905/Nscale-AIPCorp-Monarch-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155905/Nscale-AIPCorp-Monarch-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155905/Nscale-AIPCorp-Monarch-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/03155905/Nscale-AIPCorp-Monarch-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">UK-based neocloud</span> <a href="https://www.edgeir.com/companies/nscale"><span style="font-weight: 400;">Nscale</span></a><span style="font-weight: 400;"> recently secured the largest Series C round in European history, </span><a href="https://www.edgeir.com/nscale-lands-2b-to-expand-global-ai-infrastructure-platform-20260324"><span style="font-weight: 400;">raising $2 billion</span></a><span style="font-weight: 400;">.</span>

<span style="font-weight: 400;">Nscale then made good on its full-stack AI hyperscaler vision by acquiring two key assets: American Intelligence &amp; Power Corporation (</span><a href="https://www.aipcorp.com/"><span style="font-weight: 400;">AIPCorp</span></a><span style="font-weight: 400;">) and the </span><a href="https://www.nscale.com/press-releases/nscale-west-virginia-ai-factory"><span style="font-weight: 400;">Monarch Compute Campus</span></a><span style="font-weight: 400;"> in West Virginia.</span>

<span style="font-weight: 400;">Monarch Compute Campus will be the first state-certified AI microgrid in the U.S. It is scalable up to 8 gigawatts of clean, renewable power by 2031.</span>

<span style="font-weight: 400;">”Nscale is a global company, and the US is the world's largest AI infrastructure market. AI infrastructure needs to be built where demand is, and right now a significant share of that demand is in the United States,” says Josh Payne, CEO of Nscale. “Monarch allows us to meet that demand. The acquisition builds on our existing US footprint and reflects the pace at which we are scaling to serve customers around the world."</span>

<span style="font-weight: 400;">The campus will initially have 2 gigawatts of power capacity operational by 2028 to support </span><a href="https://www.edgeir.com/ai-infrastructure"><span style="font-weight: 400;">AI infrastructure</span></a><span style="font-weight: 400;"> growth in the U.S.</span>

<span style="font-weight: 400;">To manage the integration of energy and compute, Nscale created a new division based in Houston, Texas: Nscale Energy &amp; Power.</span>

<span style="font-weight: 400;">The leadership and staff of AIPCorp, which has nearly a decade of experience building AI infrastructure, will join Nscale as it develops a global presence.</span>

<span style="font-weight: 400;">The acquisition aims to resolve the energy bottlenecks constraining AI compute with a vertically integrated power-and-compute model.</span>

<span style="font-weight: 400;">Nscale specializes in scalable AI infrastructure across Europe and North America for all aspects of the AI life-cycle such as training, fine-tuning, and inference.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/nscale-moves-into-power-with-aipcorp-deal-building-8gw-u-s-ai-campus-to-bypass-energy-bottlenecks-20260406/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162960</post-id>	</item>
		<item>
		<title>Zededa and Submer target off-grid AI with modular, liquid-cooled edge GPU systems</title>
		<link>https://www.edgeir.com/zededa-and-submer-target-off-grid-ai-with-modular-liquid-cooled-edge-gpu-systems-20260403</link>
					<comments>https://www.edgeir.com/zededa-and-submer-target-off-grid-ai-with-modular-liquid-cooled-edge-gpu-systems-20260403#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Fri, 03 Apr 2026 10:00:49 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[edge AI]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<category><![CDATA[Submer]]></category>
		<category><![CDATA[Zededa]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162925</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210639/Submer-Zededa.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Submer + Zededa" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210639/Submer-Zededa.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210639/Submer-Zededa-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210639/Submer-Zededa-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210639/Submer-Zededa-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210639/Submer-Zededa-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/zededa"><span style="font-weight: 400;">Zededa</span></a><span style="font-weight: 400;"> and </span><a href="https://www.edgeir.com/companies/submer"><span style="font-weight: 400;">Submer</span></a><span style="font-weight: 400;"> have partnered to deliver modular, liquid-cooled edge AI infrastructure for high-density GPU inference in locations without traditional data centers. </span>

<span style="font-weight: 400;">Linking Submer’s liquid-cooled AI infrastructure with Zededa’s edge intelligence platform, the partners will deliver scalable, secure and resilient </span><a href="https://www.edgeir.com/what-is-edge-ai-and-what-is-it-used-for-20250321"><span style="font-weight: 400;">edge AI</span></a><span style="font-weight: 400;"> deployments.</span>

<span style="font-weight: 400;">“AI is rapidly moving from centralized cloud environments into real-world operations, from industrial sites to telecom networks and remote energy infrastructure,” says Patrick Smets, CEO of Submer. “Delivering that intelligence requires purpose-built AI infrastructure that operates efficiently in environments where traditional data centers simply cannot exist. By combining Submer’s liquid-cooled high-density AI infrastructure with Zededa’s edge intelligence platform, we’re enabling organizations to deploy scalable, resilient AI infrastructure anywhere it is needed.”</span>

<span style="font-weight: 400;">Three </span><a href="https://submer.com/blog/rapid-edge-ai-infrastructure-with-zededa/"><span style="font-weight: 400;">modular solutions</span></a><span style="font-weight: 400;"> are offered: edge pods (2 to 8 GPUs); ruggedized micro-data centers (up to 168 GPUs); and megawatt-scale containerized systems (up to 800 GPUs).</span>

<span style="font-weight: 400;">Submer’s </span><a href="https://www.edgeir.com/infrastructure-directory/cooling-thermal-management"><span style="font-weight: 400;">liquid cooling</span></a><span style="font-weight: 400;"> technology improves energy efficiency, reduces water consumption and enables high-density GPUs in harsh environments.</span>

<span style="font-weight: 400;">Zededa’s software-defined resilience reduces hardware redundancy costs by redistributing workloads during node failures, ensuring uptime and lowering total cost of ownership.</span>

<span style="font-weight: 400;">The deployments address a wide range of AI workloads, such as real-time computer vision, predictive maintenance and industrial automation, allowing AI to run at the edge in remote or extreme environments.</span>

<span style="font-weight: 400;">The first pilot deployments are expected to begin with industrial and telecommunications customers later this year.</span>

<span style="font-weight: 400;">The partners will focus on delivering the edge AI infrastructure needed to bring workloads from centralized cloud environments into operational and industrial settings.</span>

<span style="font-weight: 400;">Late last year Zededa rolled out full-stack </span><a href="https://www.edgeir.com/zededa-rolls-out-full-stack-edge-kubernetes-to-tackle-large-scale-ai-deployments-20251113"><span style="font-weight: 400;">edge Kubernetes to tackle large-scale AI deployments</span></a><span style="font-weight: 400;">.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/zededa-and-submer-target-off-grid-ai-with-modular-liquid-cooled-edge-gpu-systems-20260403/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162925</post-id>	</item>
		<item>
		<title>Crusoe rolls out edge zones as neocloud race shifts toward distributed AI infrastructure</title>
		<link>https://www.edgeir.com/crusoe-rolls-out-edge-zones-as-neocloud-race-shifts-toward-distributed-ai-infrastructure-20260402</link>
					<comments>https://www.edgeir.com/crusoe-rolls-out-edge-zones-as-neocloud-race-shifts-toward-distributed-ai-infrastructure-20260402#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Thu, 02 Apr 2026 10:00:54 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[Crusoe]]></category>
		<category><![CDATA[data centers]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162921</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210319/Crusoe-Edge-Zones.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Crusoe Edge Zones" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210319/Crusoe-Edge-Zones.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210319/Crusoe-Edge-Zones-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210319/Crusoe-Edge-Zones-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210319/Crusoe-Edge-Zones-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31210319/Crusoe-Edge-Zones-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">Neocloud provider </span><a href="https://www.edgeir.com/companies/crusoe"><span style="font-weight: 400;">Crusoe</span></a><span style="font-weight: 400;"> recently announced modular AI data centers that provide low-latency, sovereign and rapidly deployable AI infrastructure at the edge.</span>

<span style="font-weight: 400;">These edge zones deliver AI compute in specific geographic locations and for use cases not supported by traditional hyperscale providers.</span>

<span style="font-weight: 400;">“Crusoe Edge Zones powered by </span><a href="https://www.crusoe.ai/resources/newsroom/crusoe-introduces-crusoe-spark-modular-ai-data-centers"><span style="font-weight: 400;">Crusoe Spark</span></a><span style="font-weight: 400;"> represent the continued expansion of our vertically integrated ‘</span><a href="https://www.edgeir.com/what-are-ai-factories-20250804"><span style="font-weight: 400;">AI Factory</span></a><span style="font-weight: 400;">’ vision,” says Cully Cavness, co-founder, president, and chief strategy officer of Crusoe. “By optimizing these modular AI factories to run both the Crusoe Cloud platform and our Managed Inference product, we are delivering a high-performance, distributed solution that provides the speed, sovereignty, and quality that the next generation of AI requires.”</span>

<span style="font-weight: 400;">Crusoe’s vertically integrated approach lets it deploy new edge zones quickly, within three months in some instances, expanding cost-efficient AI capacity.</span>

<span style="font-weight: 400;">The main use cases are low-latency inference, dedicated enterprise clusters in multiple locations and sovereign AI deployments to serve regulated industries and governments.</span>

<span style="font-weight: 400;">Crusoe envisions large-scale campuses for model training and modular distributed compute for high-performance edge delivery.</span>

<span style="font-weight: 400;">The company focuses on driving energy efficient, scalable AI infrastructure in its mission to accelerate energy and intelligence abundance.</span>

<span style="font-weight: 400;">Organizations can reach out to Crusoe for deployment opportunities if they have geographic expansion, low-latency, or sovereign infrastructure requirements.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/crusoe-rolls-out-edge-zones-as-neocloud-race-shifts-toward-distributed-ai-infrastructure-20260402/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162921</post-id>	</item>
		<item>
		<title>Meta locks in up to $27B AI capacity deal with Nebius, signaling neocloud scale-up</title>
		<link>https://www.edgeir.com/meta-locks-in-up-to-27b-ai-capacity-deal-with-nebius-signaling-neocloud-scale-up-20260401</link>
					<comments>https://www.edgeir.com/meta-locks-in-up-to-27b-ai-capacity-deal-with-nebius-signaling-neocloud-scale-up-20260401#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Wed, 01 Apr 2026 09:00:14 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[Meta]]></category>
		<category><![CDATA[Nebius]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162917</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31205747/Nebius-Meta.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Nebius + Meta" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31205747/Nebius-Meta.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31205747/Nebius-Meta-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31205747/Nebius-Meta-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31205747/Nebius-Meta-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/31205747/Nebius-Meta-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">Amsterdam-based neocloud </span><a href="https://www.edgeir.com/companies/nebius"><span style="font-weight: 400;">Nebius</span></a><span style="font-weight: 400;"> signed a five-year AI infrastructure deal with hyperscaler </span><a href="https://www.edgeir.com/companies/meta"><span style="font-weight: 400;">Meta</span></a><span style="font-weight: 400;">, valued at up to $27 billion.</span>

<span style="font-weight: 400;">Starting in early 2027, Nebius will deliver dedicated NVIDIA Vera Rubin-based AI capacity with a total commitment of $12 billion. Nebius is a unified cloud solution for AI, providing end-to-end tools for the development and deployment of AI.</span>

<span style="font-weight: 400;">“We are pleased to expand our significant partnership with Meta as part of securing more large, long-term capacity contracts to accelerate the build-out and growth of our core AI cloud business,” says Arkady Volozh, founder and CEO of Nebius.</span>

<span style="font-weight: 400;">Meta committed to purchasing additional compute capacity worth up to $15 billion over the next five years. </span>

<span style="font-weight: 400;">This deal will accelerate the expansion of Nebius's AI cloud product line and deepen its partnership with Meta.</span>

<span style="font-weight: 400;">Nebius plans to resell its excess capacity to third-party customers, with Meta purchasing any capacity that remains unutilized. Meta previously purchased $3 billion worth of capacity from Nebius that went live in February 2026. </span>

<a href="https://www.structureresearch.net/2026/02/19/nebius-reports-another-strong-quarter-demand-still-outstripping-supply/"><span style="font-weight: 400;">Nebius's 2026 financial guidance</span></a><span style="font-weight: 400;"> is unchanged under the new agreement. </span>

<span style="font-weight: 400;">NVIDIA also doubled down on neoclouds with a recent </span><a href="https://www.edgeir.com/nvidia-doubles-down-on-neoclouds-with-2b-investment-in-nebius-20260326"><span style="font-weight: 400;">$2B investment in Nebius</span></a><span style="font-weight: 400;">.</span>
		
					<wfw:commentRss>https://www.edgeir.com/meta-locks-in-up-to-27b-ai-capacity-deal-with-nebius-signaling-neocloud-scale-up-20260401/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162917</post-id>	</item>
		<item>
		<title>Cisco and NVIDIA push secure AI Infrastructure from core data centers to the edge</title>
		<link>https://www.edgeir.com/cisco-and-nvidia-push-secure-ai-infrastructure-from-core-data-centers-to-the-edge-20260331</link>
					<comments>https://www.edgeir.com/cisco-and-nvidia-push-secure-ai-infrastructure-from-core-data-centers-to-the-edge-20260331#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 09:00:41 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[Cisco]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[Nvidia]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162903</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/30143409/Cisco-NVIDIA.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Cisco + NVIDIA" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/30143409/Cisco-NVIDIA.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/30143409/Cisco-NVIDIA-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/30143409/Cisco-NVIDIA-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/30143409/Cisco-NVIDIA-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/03/30143409/Cisco-NVIDIA-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/cisco"><span style="font-weight: 400;">Cisco</span></a><span style="font-weight: 400;"> and </span><a href="https://www.edgeir.com/companies/nvidia"><span style="font-weight: 400;">NVIDIA</span></a><span style="font-weight: 400;"> have expanded their </span><a href="https://www.cisco.com/site/us/en/solutions/artificial-intelligence/secure-ai-factory/index.html"><span style="font-weight: 400;">Secure AI Factory</span></a><span style="font-weight: 400;"> to enable AI deployment across central data centers and local edge sites, ensuring real-time decision-making and enhanced security.</span>

<span style="font-weight: 400;">The collaboration combines NVIDIA Spectrum-X switch silicon with Cisco operating systems to give businesses versatile AI infrastructure building blocks. </span>

<span style="font-weight: 400;">"Most organizations understand the potential for AI to transform their businesses, but they're navigating how to deploy the technology safely and at scale," says Chuck Robbins, chair and CEO, Cisco. "In partnership with NVIDIA, we're solving that challenge with an architecture that sets a new standard for performance, making it simpler to deploy, operate, and secure </span><a href="https://www.edgeir.com/ai-infrastructure"><span style="font-weight: 400;">AI infrastructure</span></a><span style="font-weight: 400;">."</span>

<span style="font-weight: 400;">Cisco has also strengthened security with features for Hybrid Mesh Firewall policies and with Cisco AI Defense, protecting AI systems and multi-agent actions.</span>

<span style="font-weight: 400;">The partnership advances NVIDIA’s OpenShell platform, which brings additional security controls and guardrails to AI agent actions and workflows.</span>

<span style="font-weight: 400;">Cisco’s AI offerings now support edge inferencing and are designed to run AI workloads closer to data sources like hospitals, warehouses and vehicles.</span>

<span style="font-weight: 400;">The high-performance hardware, including Cisco N9100 switches with NVIDIA Spectrum-6, provides scalability and efficient infrastructure for AI.</span>

<span style="font-weight: 400;">Cisco AI Defense secures the deployment of AI agents by checking and validating their activity, bridging the gap between innovation and risk management.</span>

<span style="font-weight: 400;">Industry experts commend the collaboration for streamlining AI implementation, enhancing scalability, and tackling security issues in real-time AI applications.</span>

<span style="font-weight: 400;">Andrew Leece, COO and founder, </span><a href="https://www.edgeir.com/companies/sharon-ai"><span style="font-weight: 400;">Sharon AI</span></a><span style="font-weight: 400;">, says, "With NCP RA compliance and the 51.2T Spectrum-4 based N9100 switch availability, we will be scaling our AI infrastructure with robust performance and efficiency. The G300 Silicon One-based N9300 switches provide the flexibility to meet evolving customer needs."</span>

<span style="font-weight: 400;">Cisco and Sharon AI, jointly with NVIDIA, recently opened </span><a href="https://www.edgeir.com/cisco-nvidia-and-sharon-ai-bring-hyperscale-class-ai-infrastructure-onshore-in-australia-20260302"><span style="font-weight: 400;">Australia’s first Cisco Secure AI Factory</span></a><span style="font-weight: 400;"> for secure, high-performance AI infrastructure.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/cisco-and-nvidia-push-secure-ai-infrastructure-from-core-data-centers-to-the-edge-20260331/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162903</post-id>	</item>
	</channel>
</rss>