<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>EdgeIR.com - Digital Infrastructure News</title>
	<atom:link href="https://www.edgeir.com/feed" rel="self" type="application/rss+xml"/>
	<link>https://www.edgeir.com</link>
	<description>Daily news, insights and contributed articles in the digital infrastructure ecosystem. </description>
	<lastBuildDate>Mon, 04 May 2026 17:40:23 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">170853209</site>	<item>
		<title>SPAN pushes AI compute into homes to bypass grid bottlenecks with XFRA</title>
		<link>https://www.edgeir.com/span-pushes-ai-compute-into-homes-to-bypass-grid-bottlenecks-with-xfra-20260508</link>
					<comments>https://www.edgeir.com/span-pushes-ai-compute-into-homes-to-bypass-grid-bottlenecks-with-xfra-20260508#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Fri, 08 May 2026 10:00:12 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[energy & power]]></category>
		<category><![CDATA[SPAN]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163259</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04124419/SPAN-announced-XFRA.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="SPAN announced XFRA" decoding="async" fetchpriority="high" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04124419/SPAN-announced-XFRA.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04124419/SPAN-announced-XFRA-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04124419/SPAN-announced-XFRA-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04124419/SPAN-announced-XFRA-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04124419/SPAN-announced-XFRA-768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.span.io/"><span style="font-weight: 400;">SPAN</span></a><span style="font-weight: 400;"> announced XFRA, a distributed data center solution to address growing AI compute demand. </span>

<span style="font-weight: 400;">By tapping into existing, underutilized power capacity in residential and small commercial spaces, </span><a href="https://www.xfra.ai/"><span style="font-weight: 400;">XFRA</span></a><span style="font-weight: 400;"> can deliver scalable compute power in a fraction of the time required for new energy infrastructure buildouts.</span>

<span style="font-weight: 400;">SPAN is working with </span><a href="https://www.edgeir.com/companies/nvidia"><span style="font-weight: 400;">NVIDIA</span></a><span style="font-weight: 400;"> to use liquid-cooled RTX PRO 6000 Blackwell Server Edition </span><a href="https://www.edgeir.com/what-is-gpu-as-a-service-gpuaas-20250212"><span style="font-weight: 400;">GPUs</span></a><span style="font-weight: 400;">.</span>

<span style="font-weight: 400;">“SPAN’s unique and differentiated intellectual property in power controls enables us to improve the utilization of existing grid infrastructure,” says Arch Rao, founder and CEO of SPAN. “We have successfully deployed this capability to accelerate home electrification, unlock new home construction, and increase utility grid utilization. Now, distributed compute is the next logical extension of our technology. By building on our core strengths in power optimization and collaborating with industry leaders like NVIDIA, we are collapsing the speed-to-power gap to deliver gigawatts of cost-effective compute capacity at unprecedented speed.”</span>

<span style="font-weight: 400;">The solution aims to bridge the “speed-to-power” gap for AI workloads, which are consuming electricity at an accelerating pace.</span>

<span style="font-weight: 400;">XFRA offers benefits for compute scalers, homeowners, and utilities alike across the energy and compute ecosystem.</span>

<span style="font-weight: 400;">First deployments are scheduled for 2026, with a target of reaching gigawatt-scale capacity by 2027.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/span-pushes-ai-compute-into-homes-to-bypass-grid-bottlenecks-with-xfra-20260508/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163259</post-id>	</item>
		<item>
		<title>Nokia-Blaize alliance targets GPU limits with hybrid edge AI push in Southeast Asia</title>
		<link>https://www.edgeir.com/nokia-blaize-alliance-targets-gpu-limits-with-hybrid-edge-ai-push-in-southeast-asia-20260507</link>
					<comments>https://www.edgeir.com/nokia-blaize-alliance-targets-gpu-limits-with-hybrid-edge-ai-push-in-southeast-asia-20260507#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Thu, 07 May 2026 10:00:44 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[Blaize]]></category>
		<category><![CDATA[connectivity networks]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<category><![CDATA[Nokia]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163256</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04123617/Blaize-Nokia-Datacomm-Diangraha.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Blaize + Nokia + Datacomm Diangraha" decoding="async" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04123617/Blaize-Nokia-Datacomm-Diangraha.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04123617/Blaize-Nokia-Datacomm-Diangraha-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04123617/Blaize-Nokia-Datacomm-Diangraha-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04123617/Blaize-Nokia-Datacomm-Diangraha-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04123617/Blaize-Nokia-Datacomm-Diangraha-768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/blaize"><span style="font-weight: 400;">Blaize</span></a><span style="font-weight: 400;">, </span><a href="https://www.edgeir.com/companies/nokia"><span style="font-weight: 400;">Nokia</span></a><span style="font-weight: 400;">, and </span><a href="https://www.datacomm.co.id/"><span style="font-weight: 400;">Datacomm Diangraha</span></a><span style="font-weight: 400;"> are collaborating to deploy hybrid AI infrastructure across Indonesia and Southeast Asia.</span>

<span style="font-weight: 400;">The partnership aims to accelerate AI adoption by combining Nokia's networks, Blaize's energy-efficient AI compute and Datacomm's regional expertise.</span>

<span style="font-weight: 400;">“The combination of Nokia's validated networking infrastructure and Blaize's energy-efficient inference compute is uniquely positioned for the enterprise edge,” says Dinakar Munagala, CEO, Blaize. “GPU economics simply do not scale to thousands of distributed sites but Blaize does. This is not about replacing GPUs; it is about deploying the right compute where it delivers the most value.”</span>

<span style="font-weight: 400;">The partnership combines Nokia's advanced infrastructure for GPU-compute telco workloads with Blaize's platform for low-power enterprise </span><a href="https://www.edgeir.com/what-is-edge-ai-and-what-is-it-used-for-20250321"><span style="font-weight: 400;">edge AI</span></a><span style="font-weight: 400;"> processing.</span>

<span style="font-weight: 400;">Datacomm reports increasing demand for AI inference solutions in Indonesia, highlighting the market's rapid growth.</span>

<span style="font-weight: 400;">The alliance is rolling out across the public sector, geospatial, and enterprise segments, with Indonesia as the initial target market for expansion.</span>

<span style="font-weight: 400;">The hybrid approach matches the best-fit compute platform (GPU or Blaize edge AI) to each workload.</span>

<span style="font-weight: 400;">Nokia and Blaize recently signed an </span><a href="https://www.edgeir.com/nokia-and-blaize-sign-edge-ai-inference-mou-targeting-apac-networks-20260129"><span style="font-weight: 400;">edge AI inference MOU</span></a><span style="font-weight: 400;"> targeting APAC networks earlier this year.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/nokia-blaize-alliance-targets-gpu-limits-with-hybrid-edge-ai-push-in-southeast-asia-20260507/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163256</post-id>	</item>
		<item>
		<title>Aranya exits stealth with GPU orchestration play as inference infrastructure shifts up the stack</title>
		<link>https://www.edgeir.com/aranya-exits-stealth-with-gpu-orchestration-play-as-inference-infrastructure-shifts-up-the-stack-20260506</link>
					<comments>https://www.edgeir.com/aranya-exits-stealth-with-gpu-orchestration-play-as-inference-infrastructure-shifts-up-the-stack-20260506#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Wed, 06 May 2026 10:00:17 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[Aranya]]></category>
		<category><![CDATA[digital infrastructure]]></category>
		<category><![CDATA[GPUs]]></category>
		<category><![CDATA[Hydra Host]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163252</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122924/Aranya.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Aranya" decoding="async" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122924/Aranya.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122924/Aranya-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122924/Aranya-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122924/Aranya-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122924/Aranya-768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/aranya"><span style="font-weight: 400;">Aranya</span></a><span style="font-weight: 400;">, a cluster-scale operating system, emerged from stealth to address the growing demand for AI inference infrastructure.</span>

<span style="font-weight: 400;">Aranya announced partnerships, including one with </span><a href="https://www.edgeir.com/companies/hydra-host"><span style="font-weight: 400;">Hydra Host</span></a><span style="font-weight: 400;">, under which its ClusterdOS is deployed across 1,700+ GPUs.</span>

<a href="https://aranya.tech/#clusterdos"><span style="font-weight: 400;">ClusterdOS</span></a><span style="font-weight: 400;"> makes Kubernetes clusters self-healing and provides a straightforward way to manage AI inference infrastructure. The technology helps eliminate the need for dedicated platform teams, cutting setup time for partners.</span>

<span style="font-weight: 400;">"At Aranya, we believe inference is the new mining. Just as crypto defined the last era of GPU-scale compute, inference is the core value-extracting workload of the AI era. The infrastructure demands are just as unforgiving: the clusters have to run, and they have to run at scale. For bleeding-edge companies that simply cannot afford downtime, we've built Aranya around that reality, with technical depth from GPU orchestration all the way up the stack," says Christian Bhatia Ondaatje, co-founder and CEO of Aranya. "Partnering with top AI inference companies like Hydra Host proves our technology inside some of the most demanding inference environments in the industry as we work towards what comes next: giving every engineering team the agency to own, operate, and expand their inference infrastructure."</span>

<span style="font-weight: 400;">The company is also building Vibecluster, a tool for managing inference scaling at the team level.</span>

<span style="font-weight: 400;">Hydra Host claims it has seen a 90% reduction in cluster downtime using Aranya's technology, reflecting growing demand for software-defined abstractions that can unify fragmented AI stacks and reduce reliance on specialized platform teams. </span>

<span style="font-weight: 400;">Strategically, this aligns with the rise of </span><a href="https://www.edgeir.com/infrastructure-directory/cloud-neoclouds"><span style="font-weight: 400;">neocloud</span></a><span style="font-weight: 400;"> and GPU-centric service providers, where differentiation is shifting up the stack from raw compute to orchestration, reliability, and time-to-market. In that sense, Aranya is less a point solution and more a signal of where value is consolidating in the AI era – not just in owning GPUs, but in making them usable, scalable, and continuously available for inference at production scale.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/aranya-exits-stealth-with-gpu-orchestration-play-as-inference-infrastructure-shifts-up-the-stack-20260506/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163252</post-id>	</item>
		<item>
		<title>Oracle links OCI to AWS in direct multicloud network push</title>
		<link>https://www.edgeir.com/oracle-links-oci-to-aws-in-direct-multicloud-network-push-20260505</link>
					<comments>https://www.edgeir.com/oracle-links-oci-to-aws-in-direct-multicloud-network-push-20260505#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Tue, 05 May 2026 10:00:52 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[cloud infrastructure]]></category>
		<category><![CDATA[connectivity networks]]></category>
		<category><![CDATA[digital infrastructure]]></category>
		<category><![CDATA[Oracle]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163249</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122617/Oracle-AWS.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Oracle + AWS" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122617/Oracle-AWS.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122617/Oracle-AWS-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122617/Oracle-AWS-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122617/Oracle-AWS-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04122617/Oracle-AWS-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/oracle-cloud-infrastructure-oci"><span style="font-weight: 400;">Oracle</span></a><span style="font-weight: 400;"> and </span><a href="https://www.edgeir.com/companies/amazon-web-services-aws"><span style="font-weight: 400;">AWS</span></a><span style="font-weight: 400;"> are collaborating to expand multicloud networking, enabling high-speed, private, and simplified connectivity between Oracle Cloud Infrastructure (OCI) and AWS. </span>

<span style="font-weight: 400;">The deal will enable customers to move apps and data between OCI and AWS, for both full- and split-stack multicloud deployments.</span>

<span style="font-weight: 400;">“Oracle continues to advance multicloud connectivity as part of its commitment to helping customers unlock flexibility, agility, and performance across clouds,” says Nathan Thomas, senior vice president, product management, Oracle Cloud Infrastructure. “With Oracle AI Database@AWS, we pioneered a simpler way for customers to run Oracle AI Database workloads in AWS with the same features, architecture, and performance as they expect on-premises. We’re now building on that by establishing connectivity between our popular cross-cloud interconnect and AWS Interconnect–multicloud. This will help our mutual customers modernize their applications, unify their data, and unlock new generative AI opportunities.”</span>

<span style="font-weight: 400;">The partnership focuses on modernizing applications, unifying data and enriching AI-driven innovations while minimizing the operational complexity that comes with managing multiple network providers.</span>

<span style="font-weight: 400;">OCI multicloud networking enables secure, private, high-performance connectivity across 26 interconnected partner cloud regions.</span>

<span style="font-weight: 400;">OCI-</span><a href="https://aws.amazon.com/interconnect/multicloud/"><span style="font-weight: 400;">AWS Interconnect - multicloud</span></a><span style="font-weight: 400;">, a new direct connection between OCI and AWS, is expected to launch in 2026, starting in the AWS US East (N. Virginia) region.</span>

<span style="font-weight: 400;">Oracle's cloud also integrates with leading providers such as AWS, Microsoft Azure and Google Cloud, among others.</span>

<span style="font-weight: 400;">Oracle has also focused on flexibility, control and enterprise-grade performance with offerings such as Oracle AI Database@AWS, launched last year.</span>

<span style="font-weight: 400;">Oracle's multicloud strategy positions it well, enabling frictionless interoperability across cloud vendors and letting customers draw on the best capabilities of each platform.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/oracle-links-oci-to-aws-in-direct-multicloud-network-push-20260505/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163249</post-id>	</item>
		<item>
		<title>Why distributed inference is becoming the backbone of scalable AI at the edge</title>
		<link>https://www.edgeir.com/why-distributed-inference-is-becoming-the-backbone-of-scalable-ai-at-the-edge-20260504</link>
					<comments>https://www.edgeir.com/why-distributed-inference-is-becoming-the-backbone-of-scalable-ai-at-the-edge-20260504#respond</comments>
		
		<dc:creator><![CDATA[Jags Kandasamy]]></dc:creator>
		<pubDate>Mon, 04 May 2026 13:30:01 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Opinion & Guest Commentary]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[edge AI]]></category>
		<category><![CDATA[edge computing]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163238</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04092538/Guest-Post-distributed-inference.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Guest Post distributed inference" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04092538/Guest-Post-distributed-inference.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04092538/Guest-Post-distributed-inference-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04092538/Guest-Post-distributed-inference-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04092538/Guest-Post-distributed-inference-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/05/04092538/Guest-Post-distributed-inference-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><i><span style="font-weight: 400;">By Jags Kandasamy, CEO, </span></i><a href="https://www.edgeir.com/companies/latent-ai"><i><span style="font-weight: 400;">Latent AI</span></i></a>

<span style="font-weight: 400;">For years, AI progress has been measured in parameter counts, but larger models introduce diminishing returns in real-world environments where latency, cost, and power matter. Physical systems – like autonomous platforms, smart infrastructure, and defense systems – cannot absorb the inefficiencies of “always-on, full-model” inference.</span>

<span style="font-weight: 400;">The challenge is no longer whether AI can perform tasks but whether it can be deployed efficiently across constrained hardware environments. Distributed inference offers an alternative to standard models, using a systems-level approach to execute AI in a scalable way. </span>
<h2><span style="font-weight: 400;">Why real-world AI breaks centralized assumptions</span></h2>
<span style="font-weight: 400;">Cloud-centric architectures assume reliable connectivity, abundant bandwidth, and a tolerance for delay – conditions rarely met at the edge. Edge and physical AI environments require deterministic, low-latency responses and local autonomy. This shift mirrors broader industry recognition that AI must move closer to where data is generated. </span>

<span style="font-weight: 400;">Scalability is often defined by how effectively intelligence is deployed, not how large models become. Architectural discipline – i.e. how systems are designed, orchestrated and deployed – emerges as the new differentiator.</span>
<h2><span style="font-weight: 400;">Defining distributed inference: a systems-level approach to AI execution</span></h2>
<a href="https://www.atlanticcouncil.org/wp-content/uploads/2023/08/AI-and-Edge-Continuum_Memo.pdf"><span style="font-weight: 400;">Distributed inference</span></a><span style="font-weight: 400;"> refers to decomposing AI workloads across multiple compute layers, including device, edge node, and cloud, based on task complexity and resource availability. Instead of a single monolithic model, multiple right-sized models operate in sequence or parallel.</span>

<span style="font-weight: 400;">Early advances in model compression made edge deployment feasible but didn’t solve how to manage inference across environments. The missing layer is orchestration: intelligently assigning workloads to the appropriate compute tier. This represents a shift from static deployment to dynamic, context-aware execution.</span>

<span style="font-weight: 400;">Optimizing individual models is insufficient without coordinating their interactions across the system. Emerging architectures resemble a “control plane” for inference, analogous to virtualization layers in traditional IT infrastructure. This orchestration layer enables portability, scalability, and efficient resource use.</span>
<h2><span style="font-weight: 400;">How distributed inference reduces cost, latency, and wasted compute</span></h2>
<span style="font-weight: 400;">Monolithic inference pipelines are inefficient. Running full-scale models on every input leads to unnecessary compute cycles and energy consumption. Low-value or irrelevant signals are processed with the same intensity as high-value ones. </span>

<span style="font-weight: 400;">Layered decision-making is a more efficient model. Distributed systems apply progressively complex models only when needed. Simple classifiers at the edge filter inputs before escalating to more resource-intensive analysis.</span>

<span style="font-weight: 400;">Consider a military operation trying to locate a ship in the vast ocean. Rather than analyzing the entire ocean to determine each ship's type, a lightweight model first looks for any ship, via satellite or drone imagery. Once a ship is spotted, the data is escalated to the next layer to classify the type of ship. This separates signal from noise.</span>

<span style="font-weight: 400;">For manufacturers pursuing industrial automation, distributed inference has been shown to reduce RAM usage by 73%, increase inference speed by 73%, and reduce GPU requirements by up to 92%. This enables real-time predictive maintenance and quality control without massive expenditure.</span>

<span style="font-weight: 400;">Lower power consumption extends device lifespan and operational viability. It enables faster response times by processing decisions closer to the source and improves cost efficiency by aligning compute usage with actual need. </span>
<h2><span style="font-weight: 400;">Enabling real-time, mission-critical AI in constrained environments</span></h2>
<span style="font-weight: 400;">Edge environments change the rules. Edge deployments operate under constraints: intermittent connectivity, limited compute, and, oftentimes, adversarial conditions. Centralized fallback is not always viable, particularly in defense or critical infrastructure contexts. </span>

<span style="font-weight: 400;">Distributed inference is now a prerequisite for mission success. Intelligence must be available at the point of need, not dependent on remote systems. Distributed architectures provide resilience by reducing single points of failure. As highlighted in defense-focused discussions, operational advantage increasingly depends on delivering capability in real time. </span>

<span style="font-weight: 400;">Speed has become a strategic variable; time is a weapons platform. Distributed inference compresses time across the AI lifecycle: development, deployment, and execution. Organizations that reduce latency in both computation and deployment gain a measurable advantage.</span>

<span style="font-weight: 400;">The next phase of AI adoption is not discovering new use cases but implementing existing ones effectively. Leaders are prioritizing rapid fielding and demonstrable outcomes over exploratory innovation. </span>
<h2><span style="font-weight: 400;">Architectural implications for enterprise and government leaders</span></h2>
<span style="font-weight: 400;">There are four implications that leaders must consider:</span>
<ol>
 	<li><span style="font-weight: 400;"> Designing for distributed intelligence from the outset. Systems must be architected to support heterogeneous compute environments. Models should be modular, portable, and adaptable to different hardware constraints. </span></li>
 	<li><span style="font-weight: 400;"> Rethinking AI infrastructure strategy. The binary framing of cloud versus edge is insufficient; future-forward architectures integrate both seamlessly. Distributed inference requires coordination across endpoints, edge nodes, and centralized resources, so platforms that enable this integration will become foundational.</span></li>
 	<li><span style="font-weight: 400;"> Governance and resource optimization. Organizations must treat compute as a finite, allocatable resource. Distributed inference enables prioritization based on mission value and signal relevance. Reducing “wasted inference” becomes a key efficiency lever.</span></li>
 	<li><span style="font-weight: 400;"> The need for an organizational mindset shift. Success depends less on model development and more on system orchestration. Leaders must prioritize deployment speed, operational impact and lifecycle efficiency. </span></li>
</ol>
<h2><span style="font-weight: 400;">The backbone of scalable AI </span></h2>
<span style="font-weight: 400;">As AI expands into the physical world, centralized approaches will fail to meet performance and cost requirements. Distributed inference follows the trajectory of distributed computing and cloud-native architectures and thus will become foundational.</span>

<span style="font-weight: 400;">Organizations that master distributed architectures will outperform those focused solely on model scale. The future of AI scalability is not defined by model size or parameter counts; it is defined by how effectively intelligence is distributed, orchestrated and executed across systems. The organizations that win will be those that treat time, compute and deployment as tightly managed system variables, not abstract capabilities.</span>
<h2><span style="font-weight: 400;">About the author</span></h2>
<span style="font-weight: 400;">Jags Kandasamy is an experienced entrepreneur and technology leader with a passion for innovation. As co-founder and CEO of</span><a href="https://latentai.com/"> <span style="font-weight: 400;">Latent AI</span></a><span style="font-weight: 400;">, he brings a wealth of experience in AI, machine learning, cyber security and IoT. Jags has a history of building successful companies and driving growth. His previous roles at OtoSense, Hewlett Packard, and other leading organizations have equipped him with a deep understanding of the industry and a keen eye for emerging opportunities. Jags is committed to leveraging AI to solve real-world problems and create a positive impact.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/why-distributed-inference-is-becoming-the-backbone-of-scalable-ai-at-the-edge-20260504/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163238</post-id>	</item>
		<item>
		<title>Cloud, data center and AI hires signal next phase of infrastructure expansion</title>
		<link>https://www.edgeir.com/cloud-data-center-and-ai-hires-signal-next-phase-of-infrastructure-expansion-20260503</link>
					<comments>https://www.edgeir.com/cloud-data-center-and-ai-hires-signal-next-phase-of-infrastructure-expansion-20260503#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Sun, 03 May 2026 18:57:04 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[People & Leadership]]></category>
		<category><![CDATA[appointments]]></category>
		<category><![CDATA[Fastly]]></category>
		<category><![CDATA[JLL]]></category>
		<category><![CDATA[Nebius]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[OTAVA]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163228</guid>

					<description><![CDATA[<p><img width="2048" height="1365" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2023/10/04184803/executive-scaled.jpg" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2023/10/04184803/executive-scaled.jpg 2048w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2023/10/04184803/executive-300x200.jpg 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2023/10/04184803/executive-1024x683.jpg 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2023/10/04184803/executive-150x100.jpg 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2023/10/04184803/executive-768x512.jpg 768w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2023/10/04184803/executive-1536x1024.jpg 1536w" sizes="auto, (max-width: 2048px) 100vw, 2048px" /></p>A fresh wave of executive appointments across cloud, data center and AI infrastructure firms points to continued investment in scaling capacity, strengthening go-to-market execution and managing capital-intensive growth. From OTAVA naming a new CEO and Nebius expanding in North America to JLL reinforcing its data center leadership, Fastly sharpening its go-to-market strategy and Oracle bringing in a new CFO, the hires reflect an industry racing to meet demand for multi-cloud, edge and AI-driven infrastructure while navigating increasing operational and competitive complexity.
<h2>OTAVA names veteran cloud executive CEO</h2>
<a href="https://www.edgeir.com/tag/otava">OTAVA</a> has appointed Donnie Gerault as president and CEO, bringing in a veteran cloud executive to lead its next phase of growth in secure and compliant multi-cloud services. Gerault has more than two decades of experience in the managed cloud sector, including leadership roles at Lenovo and Atos, and most recently served as CEO of cloud integrator Cloud49, where he focused on modernization, orchestration and optimization strategies across enterprise and public sector clients.

The appointment signals a continued push by OTAVA to scale its multi-cloud platform and strengthen its position in a competitive market defined by security, compliance and hybrid deployment complexity. Gerault’s background in building and expanding cloud services businesses, including M&amp;A-driven growth at ERGOS Technology Partners, aligns with broader industry trends as providers look to differentiate through operational scale and specialized services.
<h2>Nebius taps former Akamai, AWS exec to lead North America expansion</h2>
<a href="https://www.edgeir.com/companies/nebius">Nebius</a> has appointed Dan Lawrence as senior vice president and general manager for the Americas, tasking the former Akamai and AWS executive with leading the company’s expansion in North America. Lawrence will oversee go-to-market strategy and scale the commercial organization across enterprise, AI-native and strategic customer segments as Nebius builds out capacity to support demand for AI infrastructure.

The hire underscores Nebius’s push to establish a foothold in the U.S. as it develops large-scale AI cloud capacity, including gigawatt-scale data center projects. Lawrence’s experience scaling cloud businesses and building sales organizations at Akamai and AWS aligns with the company’s focus on capturing enterprise demand and competing in a rapidly expanding market for purpose-built AI compute.
<h2>JLL strengthens data center leadership</h2>
<a href="https://www.edgeir.com/tag/jll">JLL</a> has strengthened its Data Center and Critical Environments leadership with the appointment of Brandon Keesee as Managing Director of Hyperscale and Colocation Data Centers and the promotion of Michael Martin to Managing Director of Data Center Operations. The two Navy veterans bring more than 60 years of combined experience in mission-critical environments, with Keesee tasked with driving growth and client strategy across JLL’s data center platform, and Martin overseeing operations performance, safety and reliability across hyperscale, colocation and enterprise portfolios.

The moves come as the data center sector enters a rapid expansion phase, with JLL projecting global capacity to nearly double by 2030. Both executives bring deep operational expertise from roles at Apple, Microsoft and large-scale data center operators, positioning JLL to address increasing complexity in facility management, energy use and AI-driven infrastructure demands as operators scale capacity worldwide.
<h2>Fastly taps marketing leader to drive edge, AI growth</h2>
<a href="https://www.edgeir.com/companies/fastly">Fastly</a> has appointed Joan Jenkins as chief marketing officer, bringing in an experienced B2B technology executive to lead global marketing as the company looks to accelerate growth across its edge cloud and security platform. Jenkins joins with more than two decades of experience in scaling enterprise technology brands, including leadership roles at Informatica, Oracle, Cisco and Mindtickle, with a focus on data-driven and AI-enabled marketing strategies.

The hire reflects Fastly’s push to strengthen its market positioning as edge computing, security and AI workloads converge, increasing competition among infrastructure providers. As demand grows for platforms that can support performance-sensitive and AI-driven applications, Jenkins is expected to drive go-to-market execution and expand Fastly’s reach with enterprise and developer audiences.
<h2>Oracle names former Schneider Electric executive CFO</h2>
<a href="https://www.edgeir.com/companies/oracle">Oracle</a> has appointed Hilary Maxson as chief financial officer, bringing in a former Schneider Electric executive to lead finance as the company expands its cloud and AI infrastructure business. Maxson, who took on the role in April, succeeds Doug Kehring and brings experience in scaling global infrastructure operations, including leadership roles in finance, strategy and M&amp;A at Schneider Electric and AES.

The appointment comes as Oracle continues to invest in <a href="https://www.edgeir.com/companies/oracle-cloud-infrastructure-oci">cloud capacity</a> and AI-driven growth, with recent results showing strong revenue and earnings gains. Maxson’s background in capital-intensive infrastructure and operational performance is expected to support Oracle’s next phase of expansion, where disciplined financial management is critical to scaling data center and cloud platforms.]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/cloud-data-center-and-ai-hires-signal-next-phase-of-infrastructure-expansion-20260503/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163228</post-id>	</item>
		<item>
		<title>Anthropic locks in 5GW of AWS compute as Amazon deepens $100B AI infrastructure bet</title>
		<link>https://www.edgeir.com/anthropic-locks-in-5gw-of-aws-compute-as-amazon-deepens-100b-ai-infrastructure-bet-20260501</link>
					<comments>https://www.edgeir.com/anthropic-locks-in-5gw-of-aws-compute-as-amazon-deepens-100b-ai-infrastructure-bet-20260501#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Fri, 01 May 2026 09:00:52 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[cloud infrastructure]]></category>
		<category><![CDATA[data centers]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163185</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27154930/Anthropic-and-Amazon.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Anthropic and Amazon" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27154930/Anthropic-and-Amazon.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27154930/Anthropic-and-Amazon-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27154930/Anthropic-and-Amazon-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27154930/Anthropic-and-Amazon-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27154930/Anthropic-and-Amazon-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/anthropic"><span style="font-weight: 400;">Anthropic </span></a><span style="font-weight: 400;">and Amazon have expanded their collaboration, securing up to 5 gigawatts of computing capacity for training and deploying Claude, with significant Trainium2 and Trainium3 capacity coming online by the end of 2026. </span>

<span style="font-weight: 400;">Over the course of a decade, Anthropic will invest more than $100 billion into </span><a href="https://www.edgeir.com/companies/amazon-web-services-aws"><span style="font-weight: 400;">AWS</span></a><span style="font-weight: 400;"> and adopt future generations of Amazon silicon.</span>

<span style="font-weight: 400;">“Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand,” says Dario Amodei, CEO and co-founder of Anthropic. “Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS.”</span>

<span style="font-weight: 400;">The deal also extends Claude's infrastructure in Asia and Europe to accommodate growing international demand.</span>

<span style="font-weight: 400;">The </span><a href="http://claude.ai"><span style="font-weight: 400;">Claude platform</span></a><span style="font-weight: 400;"> will soon be natively available via AWS, leveraging existing AWS accounts and governance.</span>

<span style="font-weight: 400;">Amazon is investing $5 billion in Anthropic now, with up to another $20 billion to follow, on top of the $8 billion already invested.</span>

<span style="font-weight: 400;">Anthropic's revenue has grown significantly, surpassing $30 billion in 2026, driven by increased enterprise, developer, and consumer demand for Claude.</span>

<span style="font-weight: 400;">The deal is designed to alleviate pressure on infrastructure from rapid growth and provide consistent service for all levels of Claude usage. The expanded relationship speaks to just how much demand Anthropic is driving within the cloud and data center infrastructure ecosystem. Training will be based primarily in the US, but inference will expand Anthropic’s footprint globally and, in turn, drive uptake of additional infrastructure services.</span>

<span style="font-weight: 400;">Claude remains the only frontier AI model available on AWS, Google Cloud and Microsoft Azure.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/anthropic-locks-in-5gw-of-aws-compute-as-amazon-deepens-100b-ai-infrastructure-bet-20260501/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163185</post-id>	</item>
		<item>
		<title>Allbirds dumps footwear and bets $50M on GPU cloud pivot as “NewBird AI”</title>
		<link>https://www.edgeir.com/allbirds-dumps-footwear-and-bets-50m-on-gpu-cloud-pivot-as-newbird-ai-20260430</link>
					<comments>https://www.edgeir.com/allbirds-dumps-footwear-and-bets-50m-on-gpu-cloud-pivot-as-newbird-ai-20260430#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Thu, 30 Apr 2026 10:00:30 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[cloud infrastructure]]></category>
		<category><![CDATA[GPUs]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163181</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27153549/NewBird-AI.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="NewBird AI" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27153549/NewBird-AI.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27153549/NewBird-AI-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27153549/NewBird-AI-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27153549/NewBird-AI-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27153549/NewBird-AI-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.allbirds.com/"><span style="font-weight: 400;">Allbirds, Inc.</span></a><span style="font-weight: 400;"> has signed a $50 million convertible financing facility to turn its business in the direction of AI compute infrastructure to become a </span><a href="https://www.edgeir.com/what-is-gpu-as-a-service-gpuaas-20250212"><span style="font-weight: 400;">GPU-as-a-Service</span></a><span style="font-weight: 400;"> (GPUaaS) and AI-native cloud solutions provider under the new corporate name "NewBird AI".</span>

<span style="font-weight: 400;">The financing facility is expected to close in Q2 2026, subject to stockholder approval at a special meeting scheduled for May 18, 2026.</span>

<span style="font-weight: 400;">The Allbirds brand will live on, as the company previously announced plans to sell its brand and footwear assets to American Exchange Group.</span>

<span style="font-weight: 400;">Contingent upon the completion of the asset sale, a special dividend is expected to be paid to stockholders on or about May 20, 2026.</span>

<span style="font-weight: 400;">NewBird AI is expected to focus on high-performance GPU assets as it seeks to tap into the increasing demand from enterprises and AI developers for AI compute capacity. The company will also invest in growing its </span><a href="https://www.edgeir.com/infrastructure-directory/cloud-neoclouds"><span style="font-weight: 400;">neocloud</span></a><span style="font-weight: 400;"> platform, strengthening partnerships and exploring strategic opportunities for M&amp;A as demand for AI infrastructure continues to rise.</span>

<span style="font-weight: 400;">The global AI market faces several constraints, including GPU shortages, low data center vacancy rates and compute capacity that struggles to keep pace with demand; NewBird AI aims to address these issues.</span>

<span style="font-weight: 400;">Allbirds’ transition to NewBird AI reflects a broader shift in the global technology landscape, where surging demand for AI compute, persistent GPU shortages, and constrained data center capacity are driving the rapid emergence of new “neocloud” infrastructure providers. </span>

<span style="font-weight: 400;">While the opportunity to deliver GPU-as-a-Service is significant, the move also underscores the capital intensity and operational complexity of competing in AI infrastructure. Success will depend on the company’s ability to secure scalable GPU supply, power, and strategic partnerships in an increasingly competitive and fragmented market.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/allbirds-dumps-footwear-and-bets-50m-on-gpu-cloud-pivot-as-newbird-ai-20260430/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163181</post-id>	</item>
		<item>
		<title>Blaize and NeoTensr expand APAC edge AI push with $50M infrastructure deal</title>
		<link>https://www.edgeir.com/blaize-and-neotensr-expand-apac-edge-ai-push-with-50m-infrastructure-deal-20260429</link>
					<comments>https://www.edgeir.com/blaize-and-neotensr-expand-apac-edge-ai-push-with-50m-infrastructure-deal-20260429#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Wed, 29 Apr 2026 06:00:23 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[edge AI]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163176</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27152712/Blaize-and-NeoTensr.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Blaize and NeoTensr" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27152712/Blaize-and-NeoTensr.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27152712/Blaize-and-NeoTensr-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27152712/Blaize-and-NeoTensr-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27152712/Blaize-and-NeoTensr-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/27152712/Blaize-and-NeoTensr-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/blaize"><span style="font-weight: 400;">Blaize</span></a><span style="font-weight: 400;"> and NeoTensr signed a contract worth up to $50M to deploy co-branded AI edge data center infrastructure across the Asia Pacific region, targeting the growing </span><a href="https://www.edgeir.com/infrastructure-directory/edge-data-centers"><span style="font-weight: 400;">edge data center</span></a><span style="font-weight: 400;"> market. </span>

<span style="font-weight: 400;">This agreement builds on a previous NeoTensr order worth more than $20 million awarded in Q4 2025, increasing the cap on total contract value to $70 million.</span>

<span style="font-weight: 400;">The Blaize hybrid AI server (based on its quad card) can support more than 200 simultaneous camera streams with sophisticated AI analytics, enabling a wide range of applications such as smart cities, industrial automation and retail intelligence.</span>

<span style="font-weight: 400;">“What makes this deployment technically significant is that we are not partitioning the workload between edge and cloud - we are collapsing that boundary entirely,” says Ke Yin, Co-founder and chief scientist, Blaize. “The Blaize GSP handles continuous yet dynamic, low-latency computer vision at the sensor layer, while our hybrid architecture simultaneously runs LLM and VLM inference on the same physical infrastructure. Processing 200+ camera streams per server with that level of analytical depth is not achievable on conventional hardware. NeoTensr recognized that, and it’s why this platform will define the standard for intelligent edge data centers across the region.”</span>

<span style="font-weight: 400;">NeoTensr will leverage Blaize's technology to create scalable, production-ready AI infrastructure for smart cities and enterprises throughout Asia Pacific.</span>

<span style="font-weight: 400;">The partnership combines co-branded AI hardware, software solutions and services into a full-stack platform for deployment and monetization.</span>

<span style="font-weight: 400;">Blaize combines its </span><a href="https://www.blaize.com/technology/"><span style="font-weight: 400;">Graph Streaming Processor </span></a><span style="font-weight: 400;">with GPU infrastructure in a hybrid AI architecture designed for efficient and real-time </span><a href="https://www.edgeir.com/why-the-future-of-ai-inference-lies-at-the-edge-20260311"><span style="font-weight: 400;">AI inference at the edge</span></a><span style="font-weight: 400;">.</span>

<span style="font-weight: 400;">The strategic partnership positions Blaize and NeoTensr to tap into the growing demand for edge data centers in Asia that provide high-performance, low-latency AI solutions.</span>

<span style="font-weight: 400;">Earlier this year </span><a href="https://www.edgeir.com/nokia-and-blaize-sign-edge-ai-inference-mou-targeting-apac-networks-20260129"><span style="font-weight: 400;">Blaize signed an edge AI inference MOU</span></a><span style="font-weight: 400;"> with Nokia targeting APAC networks.</span>
		
					<wfw:commentRss>https://www.edgeir.com/blaize-and-neotensr-expand-apac-edge-ai-push-with-50m-infrastructure-deal-20260429/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163176</post-id>	</item>
		<item>
		<title>Network X Americas 2026</title>
		<link>https://www.edgeir.com/network-x-americas-2026-20260428</link>
					<comments>https://www.edgeir.com/network-x-americas-2026-20260428#respond</comments>
		
		<dc:creator><![CDATA[Candice Rodriguez]]></dc:creator>
		<pubDate>Wed, 29 Apr 2026 00:25:39 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure Events]]></category>
		<category><![CDATA[6G]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[conference]]></category>
		<category><![CDATA[connectivity]]></category>
		<category><![CDATA[event]]></category>
		<category><![CDATA[Network X Americas 2026]]></category>
		<category><![CDATA[telecom]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163196</guid>

					<description><![CDATA[<p><img width="1000" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/28202509/NXA-banner-1000x750-Centred.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="NXA banner 1000x750 (Centred)" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/28202509/NXA-banner-1000x750-Centred.png 1000w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/28202509/NXA-banner-1000x750-Centred-300x225.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/28202509/NXA-banner-1000x750-Centred-150x113.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/28202509/NXA-banner-1000x750-Centred-768x576.png 768w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></p><p class="p1"><a href="https://tmt.knect365.com/network-x-americas/pass-type-grid/?_mc=barter_nwxa_nwxa_attnd_tsmatt_partner_EdgeInfrastructureReview_2026&amp;utm_source=other&amp;utm_medium=barter_&amp;utm_campaign=&amp;utm_content=attnd_"><strong>Network X Americas 2026</strong></a>
<strong>May 18-20, 2026 | Irving Convention Center, Dallas, Texas</strong></p>
<p class="p1">Network X Americas is the only event uniting mobile, fixed, Wi-Fi and satellite connectivity with the operational reality of AI in the Americas. Returning to Dallas for 2026, the event brings together operators, technology providers, investors and enterprise decision-makers for three days of high-level content, strategic conversations and unmatched networking opportunities.</p>
<p class="p1">Taking place May 18–20 at the Irving Convention Center, Dallas, Texas, the 2026 programme reflects the challenges and opportunities defining the sector right now — from AI adoption and network convergence to broadband expansion, fibre growth, cloud transformation and enterprise monetization. Confirmed speakers include senior executives from AT&amp;T, Verizon, Comcast, Optimum, and T-Mobile, alongside public sector leaders shaping the future of connectivity across North America.</p>
<p class="p1">With 1,100+ attendees expected from across service providers, hyperscalers, vendors and the wider digital infrastructure community, Network X Americas also hosts three co-located events — WGC Americas, the 6G Summit, and OnGo Forward — making it the single most valuable three days in the North American telecom calendar.</p>
<strong><a href="https://tmt.knect365.com/network-x-americas/pass-type-grid/?_mc=barter_nwxa_nwxa_attnd_tsmatt_partner_EdgeInfrastructureReview_2026&amp;utm_source=other&amp;utm_medium=barter_&amp;utm_campaign=&amp;utm_content=attnd_">Book your Pass HERE!</a></strong>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/network-x-americas-2026-20260428/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163196</post-id>	</item>
	</channel>
</rss>