<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Edge Infrastructure Review</title>
	<atom:link href="https://www.edgeir.com/feed" rel="self" type="application/rss+xml" />
	<link>https://www.edgeir.com</link>
	<description>Digital infrastructure market analysis</description>
	<lastBuildDate>Mon, 13 Apr 2026 11:54:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">170853209</site>	<item>
		<title>Crusoe expands AI infrastructure race with 900 MW Abilene build for Microsoft</title>
		<link>https://www.edgeir.com/crusoe-expands-ai-infrastructure-race-with-900-mw-abilene-build-for-microsoft-20260417</link>
					<comments>https://www.edgeir.com/crusoe-expands-ai-infrastructure-race-with-900-mw-abilene-build-for-microsoft-20260417#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 06:00:43 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[Crusoe]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[Microsoft]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163034</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09122853/Crusoe-w-Microsoft.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Crusoe-w-Microsoft" decoding="async" fetchpriority="high" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09122853/Crusoe-w-Microsoft.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09122853/Crusoe-w-Microsoft-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09122853/Crusoe-w-Microsoft-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09122853/Crusoe-w-Microsoft-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09122853/Crusoe-w-Microsoft-768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">Neocloud provider </span><a href="https://www.edgeir.com/companies/crusoe"><span style="font-weight: 400;">Crusoe</span></a><span style="font-weight: 400;"> announced a new 900 MW </span><a href="https://www.edgeir.com/what-are-ai-factories-20250804"><span style="font-weight: 400;">AI factory</span></a><span style="font-weight: 400;"> campus in Abilene, Texas, dedicated to supporting large-scale AI workloads for Microsoft, with the total site capacity expected to reach 2.1 GW. </span>

<span style="font-weight: 400;">The campus will consist of two new buildings and an on-site power plant providing grid resilience as well as supporting next-gen AI infrastructure.</span>

<span style="font-weight: 400;">Construction has begun, with the first building planned to be operational by mid-2027, maintaining Crusoe’s rapid pace of infrastructure buildout.</span>

<span style="font-weight: 400;">“Crusoe is building a new AI factory campus in Abilene, purpose-built for the demands of next-generation AI,” says Chase Lochmiller, co-founder and CEO of Crusoe. “By integrating 900 megawatts of new on-site power generation, we will continue building the industrial foundation for American AI – at a velocity the industry has never seen.”</span>

<span style="font-weight: 400;">The project is also expected to create thousands of jobs during the construction phase and hundreds of permanent positions, strengthening North Abilene's economy.</span>

<span style="font-weight: 400;">The new campus will combine a ~900 MW on-site power plant with ultra-high-density compute and water-efficient cooling systems in an integrated, energy-efficient design.</span>

<span style="font-weight: 400;">The nearby Abilene campus is already a major contributor to local tax revenue, a benefit expected to grow with the expansion.</span>

<span style="font-weight: 400;">Crusoe's partnership with Microsoft reflects the growing need for reliable, scalable infrastructure to meet rising AI demand. Crusoe delivers energy-first</span><a href="https://www.crusoe.ai/data-centers"><span style="font-weight: 400;"> giga-scale AI infrastructure</span></a><span style="font-weight: 400;"> designed to power the future of AI.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/crusoe-expands-ai-infrastructure-race-with-900-mw-abilene-build-for-microsoft-20260417/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163034</post-id>	</item>
		<item>
		<title>Aker BP taps Armada for offshore modular AI data center to process drilling data at the edge</title>
		<link>https://www.edgeir.com/aker-bp-taps-armada-for-offshore-modular-ai-data-center-to-process-drilling-data-at-the-edge-20260416</link>
					<comments>https://www.edgeir.com/aker-bp-taps-armada-for-offshore-modular-ai-data-center-to-process-drilling-data-at-the-edge-20260416#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 06:00:16 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[Armada]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163030</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09113255/Armada-on-Oil-Rig-Aker-BP.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Armada on Oil Rig Aker BP" decoding="async" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09113255/Armada-on-Oil-Rig-Aker-BP.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09113255/Armada-on-Oil-Rig-Aker-BP-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09113255/Armada-on-Oil-Rig-Aker-BP-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09113255/Armada-on-Oil-Rig-Aker-BP-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/09113255/Armada-on-Oil-Rig-Aker-BP-768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">Oil company Aker BP is collaborating with edge AI infrastructure solutions provider </span><a href="https://www.edgeir.com/companies/armada"><span style="font-weight: 400;">Armada</span></a><span style="font-weight: 400;"> to deliver a </span><a href="https://www.edgeir.com/infrastructure-directory/edge-data-centers"><span style="font-weight: 400;">modular data center</span></a><span style="font-weight: 400;"> for offshore drilling on the Norwegian Continental Shelf.</span>

<a href="https://www.armada.ai/product/galleon"><span style="font-weight: 400;">Galleon</span></a><span style="font-weight: 400;"> enables on-site processing and analysis of drilling data on the rigs, overcoming issues such as connectivity challenges and delays in decision-making.</span>

<span style="font-weight: 400;">Running AI models locally facilitates more accurate prediction and prevention of equipment failures, reducing downtime. It also reduces the risk of cyber-attacks by lowering dependence on external networks.</span>

<a href="https://akerbp.com/"><span style="font-weight: 400;">Aker BP</span></a><span style="font-weight: 400;"> will unify vendor applications on a single edge platform, streamlining compliance and reducing costs while enabling operations from remote locations.</span>

<span style="font-weight: 400;">Deployment begins with a single reference Galleon on one rig, establishing a repeatable blueprint for future installations.</span>

<span style="font-weight: 400;">The project capitalizes on Armada’s history of providing modular AI infrastructure in remote environments, including a U.S. Navy deployment project from 2025.</span>

<span style="font-weight: 400;">The partnership will improve operational efficiency, resilience, and autonomy of offshore drilling operations.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/aker-bp-taps-armada-for-offshore-modular-ai-data-center-to-process-drilling-data-at-the-edge-20260416/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163030</post-id>	</item>
		<item>
		<title>Leaseweb builds out European sovereign cloud with programmable networking and AI compute</title>
		<link>https://www.edgeir.com/leaseweb-builds-out-european-sovereign-cloud-with-programmable-networking-and-ai-compute-20260415</link>
					<comments>https://www.edgeir.com/leaseweb-builds-out-european-sovereign-cloud-with-programmable-networking-and-ai-compute-20260415#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 10:00:22 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[cloud infrastructure]]></category>
		<category><![CDATA[Sovereign Infrastructure]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162993</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174410/Leaseweb-sovereign-cloud.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Leaseweb sovereign cloud" decoding="async" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174410/Leaseweb-sovereign-cloud.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174410/Leaseweb-sovereign-cloud-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174410/Leaseweb-sovereign-cloud-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174410/Leaseweb-sovereign-cloud-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174410/Leaseweb-sovereign-cloud-768x480.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">Cloud services and Infrastructure-as-a-Service (IaaS) provider </span><a href="https://www.edgeir.com/companies/leaseweb"><span style="font-weight: 400;">Leaseweb</span></a><span style="font-weight: 400;"> is advancing Europe's </span><a href="https://www.edgeir.com/infrastructure-directory/cloud-sovereign"><span style="font-weight: 400;">sovereign cloud</span></a><span style="font-weight: 400;"> initiatives through its </span><a href="https://www.leaseweb.com/en/about-us/our-story/european-cloud-campus"><span style="font-weight: 400;">European Cloud Campus project</span></a><span style="font-weight: 400;">, part of the IPCEI-CIS program.</span>

<span style="font-weight: 400;">Notable advancements include cloud infrastructure expansion, autoscaling, load balancing and private network storage, along with programmable virtual overlay networking for improved flexibility and security.</span>

<span style="font-weight: 400;">“Over the past year, we’ve made significant progress in turning the vision of a sovereign European cloud into reality,” says Robert van der Meulen, director product strategy at Leaseweb. “From expanding our compute and container platforms to releasing open APIs and developer tools, these efforts demonstrate how industry and technology are coming together to build flexible, secure, and scalable cloud infrastructure for Europe. Looking ahead, we remain committed to advancing automation, network flexibility, and ecosystem integration, ensuring that Europe’s digital sovereignty is supported by practical, world-class infrastructure.”</span>

<span style="font-weight: 400;">The company also launched new tools (open APIs, a Terraform provider, and platform monitoring systems) to help automate developer integration.</span>

<span style="font-weight: 400;">In 2025, Leaseweb actively supported the cloud community by organizing events such as the Leaseweb Tech Summit and contributing to 58 open-source projects, emphasizing collaboration between industry stakeholders to build secure, scalable, and flexible cloud infrastructure for Europe.</span>

<span style="font-weight: 400;">Leaseweb is the only Dutch cloud provider to join the IPCEI-CIS program directly, supporting Europe's digital sovereignty.</span>

<span style="font-weight: 400;">Leaseweb's offerings are based on the core tenets of digital sovereignty and customer control over data. In early 2025 </span><a href="https://www.edgeir.com/leaseweb-expands-global-cloud-portfolio-with-nvidia-gpus-to-power-ai-and-hpc-workloads-20250129"><span style="font-weight: 400;">Leaseweb expanded its global cloud portfolio</span></a><span style="font-weight: 400;"> with NVIDIA GPUs to power AI and HPC workloads.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/leaseweb-builds-out-european-sovereign-cloud-with-programmable-networking-and-ai-compute-20260415/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162993</post-id>	</item>
		<item>
		<title>Akamai pushes AI inference to the edge with orchestrated GPU grid across 4,400 sites</title>
		<link>https://www.edgeir.com/akamai-pushes-ai-inference-to-the-edge-with-orchestrated-gpu-grid-across-4400-sites-20260414</link>
					<comments>https://www.edgeir.com/akamai-pushes-ai-inference-to-the-edge-with-orchestrated-gpu-grid-across-4400-sites-20260414#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Tue, 14 Apr 2026 10:00:49 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[cloud infrastructure]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162986</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173750/Akamai-launches-AI-Grid.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Akamai launches AI Grid" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173750/Akamai-launches-AI-Grid.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173750/Akamai-launches-AI-Grid-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173750/Akamai-launches-AI-Grid-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173750/Akamai-launches-AI-Grid-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173750/Akamai-launches-AI-Grid-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/akamai"><span style="font-weight: 400;">Akamai</span></a><span style="font-weight: 400;"> launched the </span><a href="https://www.edgeir.com/akamai-extends-ai-inference-to-the-edge-with-nvidia-infrastructure-20251111"><span style="font-weight: 400;">Akamai Inference Cloud</span></a><span style="font-weight: 400;"> late last year, the first global-scale implementation of NVIDIA AI Grid, enabling distributed AI inference across 4,400 edge locations.</span>

<span style="font-weight: 400;">The platform delivers AI systems on NVIDIA AI infrastructure and uses Akamai's network to optimize workload routing for the best possible latency, cost, and performance.</span>

<span style="font-weight: 400;">Intelligent orchestration optimizes the cost-efficiency and response time of AI applications via improved “tokenomics” in Akamai’s AI Grid, resulting in throughput gains.</span>

<span style="font-weight: 400;">"AI factories have been purpose-built for training and frontier model workloads and centralized infrastructure will continue to deliver the best tokenomics for those use cases," says Adam Karon, COO and general manager, Cloud Technology Group, Akamai. "But real-time video, physical AI, and highly concurrent personalized experiences demand inference at the point of contact, not a round trip to a centralized cluster. Our AI Grid intelligent orchestration gives </span><a href="https://www.edgeir.com/what-are-ai-factories-20250804"><span style="font-weight: 400;">AI factories</span></a><span style="font-weight: 400;"> a way to scale inference outward, leveraging the same distributed architecture that revolutionized content delivery to route AI workloads across 4,400 locations, at the right cost, at the right time."</span>

<span style="font-weight: 400;">It reduces latency by processing requests at the edge, supporting real-time AI use cases such as gaming, financial services, media, and retail.</span>

<span style="font-weight: 400;">To do so, Akamai has deployed thousands of </span><a href="https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/"><span style="font-weight: 400;">NVIDIA RTX PRO 6000 GPUs</span></a><span style="font-weight: 400;"> across its infrastructure, providing high-density compute for enterprise-scale GPU services, large-scale AI workloads and multi-modal inference.</span>

<span style="font-weight: 400;">The platform enables enterprises to deploy adaptive, context-aware AI agents across both centralized and distributed architectures.</span>

<span style="font-weight: 400;">Early adoption is evident across the gaming, finance and media sectors; a recent $200 million service agreement announced last month by Akamai validates enterprise demand.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/akamai-pushes-ai-inference-to-the-edge-with-orchestrated-gpu-grid-across-4400-sites-20260414/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162986</post-id>	</item>
		<item>
		<title>How AI-RAN paves the way to performance and profits</title>
		<link>https://www.edgeir.com/how-ai-ran-paves-the-way-to-performance-and-profits-20260413</link>
					<comments>https://www.edgeir.com/how-ai-ran-paves-the-way-to-performance-and-profits-20260413#respond</comments>
		
		<dc:creator><![CDATA[Rob Hughes]]></dc:creator>
		<pubDate>Mon, 13 Apr 2026 11:54:05 +0000</pubDate>
				<category><![CDATA[Opinion & Guest Commentary]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[connectivity]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163048</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/13075025/AI-RAN-Guest-Post.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="AI-RAN Guest Post" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/13075025/AI-RAN-Guest-Post.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/13075025/AI-RAN-Guest-Post-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/13075025/AI-RAN-Guest-Post-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/13075025/AI-RAN-Guest-Post-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/13075025/AI-RAN-Guest-Post-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><em>By Rob Hughes, Head of Wireless Marketing for<a href="https://www.1finity.com/"> 1Finity</a>, a Fujitsu company. </em>

As data demands escalate, today’s networks are growing and evolving at a rapid pace. Mobile networks, in particular, have changed substantially in response to expectations of readily available hyperconnectivity at our fingertips everywhere we go.

At the same time, mobile network operators (MNOs) are facing a confluence of pressures and challenges. Market consolidation, increasing competition, rising supply chain costs and shrinking profits all contribute to an urgency to offer profitable new services while reducing total cost of ownership (TCO).

Growing reliance on artificial intelligence (AI) offers promise for MNOs to address these challenges. The important question is not whether AI belongs in mobile networks, but how operators will turn AI into both performance gains and new revenue.
<h2>The Power of AI</h2>
With the evolution to 5G, the mobile radio access network (RAN) has seen significant changes. Increased throughput speeds have accelerated the adoption of video streaming and encouraged a wide range of content rich applications that have driven increased demands on the network.

As a result, MNOs are looking for solutions to improve spectral efficiency and capacity while reducing TCO. Indeed, we have seen quite a bit of positive enthusiasm around harnessing the power of AI automation to streamline management and improve RAN performance. With AI optimization, MNOs can boost network performance to deliver an outstanding customer experience.

Taking this a step further, there is genuine interest among MNO pioneers to fully integrate AI resources into the RAN infrastructure and implement AI-RAN technology. With the technical breakthrough offered by AI-RAN, MNOs can further enhance RAN performance by increasing network throughput, reducing energy consumption and improving spectral efficiency.

While these network enhancements are exciting, even more enticing are the business opportunities presented by AI-RAN.
<h2>Pinpoint the Real Opportunity</h2>
AI-RAN offers significant monetization potential and competitive advantages for network operators. In a recent<a href="https://www.lightreading.com/ai-machine-learning/why-operators-are-looking-at-ai-ran-for-future-ran-systems"> Future of AI survey</a> by Heavy Reading analysts, however, only 38 percent of MNOs cited ‘the ability to offer AI as a service’ among the top three benefits they expected from AI-RAN.

For some network operators the evolution to AI-RAN is seen as a challenge to surmount, requiring effort and financial investment. Yet, AI offers the ability to deliver promising new services - particularly through AI-as-a-Service (AIaaS) and GPU-as-a-Service (GPUaaS) - that can create the new revenue streams needed to pay off 5G and AI-RAN investments.

In fact, the GPUaaS market is<a href="https://www.marketsandmarkets.com/Market-Reports/gpu-as-a-service-market-153834402.html?utm_source=chatgpt.com"> expected</a> to exceed $25 billion by 2030 with an annual growth rate of 26.5 percent. That’s because cloud-based GPU resources are increasingly desirable for AI training, batch AI/data analytics, video rendering and simulation/engineering workloads.
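As a rough sanity check on the growth math above, the sketch below works out what starting market size a 26.5 percent annual growth rate implies for a ~$25 billion 2030 figure. The base year and five-year horizon are assumptions for illustration, not figures from the cited report.

```python
# Illustrative compound-growth check (assumed base year and horizon,
# not data from the Markets and Markets report).
cagr = 0.265        # 26.5% annual growth rate
target_2030 = 25.0  # projected market size, USD billions
years = 5           # assumed 2025 -> 2030 horizon

# Back out the base-year market size implied by the projection.
implied_2025_base = target_2030 / (1 + cagr) ** years
print(f"Implied 2025 base: ${implied_2025_base:.1f}B")

# Project forward from that base, year by year, to confirm it compounds
# back to the 2030 target.
size = implied_2025_base
for year in range(2026, 2031):
    size *= 1 + cagr
print(f"Projected 2030 size: ${size:.1f}B")
```

The point of the exercise is that a mid-20s growth rate roughly triples a market in five years, which is why even modest GPUaaS revenue today can compound meaningfully.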
<h2>Disrupt the Cycle</h2>
Historically, emerging network technologies have faced a "chicken and egg" dilemma regarding investment. Software developers typically hesitate to create content until infrastructure is widespread, while MNOs often delay deployment until there are sufficient revenue-generating applications to justify the cost.

The nature of AI-RAN infrastructure enables MNOs to disrupt this traditional cycle. This can be achieved by leasing excess compute capacity within the network, generating revenue without waiting for a broad base of applications that need the low latency or data sovereignty of AI-RAN to become available. Much like a high-end restaurant must maximize table turnover to offset premium real estate costs, MNOs can monetize their valuable AI-RAN resources to pay for themselves.

Because base station traffic fluctuates throughout the day, the compute capacity of a RAN distributed unit (DU) is built to handle the absolute peak traffic of the busiest day of the year, plus a margin for future expansion. Consequently, significant compute capacity remains idle most of the time.

Today that excess capacity sits idle, but that doesn’t need to be the case. Given the growing market for GPUaaS, MNOs have a promising opportunity to generate near-term revenue while they gradually introduce new AI services.
<h2>Fast-track to Revenue</h2>
Current <a href="https://www.edgeir.com/what-is-gpu-as-a-service-gpuaas-20250212">GPUaaS</a> and AIaaS solutions enable highly flexible billing models, including hourly or even minute-level options, without long-term obligations. This allows MNOs to sell GPU capacity contracts that fluctuate according to daily demand cycles. Moreover, MNOs maintain operational control by reserving the right to lease this capacity only when it is not required for their own internal needs, taking full advantage of idle capacity.

By leveraging AI resources to monetize excess capacity, MNOs can effectively fast-track to AI-RAN immediately, reaping the performance benefits while generating revenue to cover initial costs. As the MNO’s own demand for AI compute resources grows, they can then gradually scale back the amount of capacity available for lease.
<h2>The Road to AI-RAN</h2>
The transition to AI-RAN requires more than just new technology - it requires new business models, skillsets and mindsets. We are just at the beginning of a long journey that will bring a host of new innovations along the way. And as with any new technology, a small number of pioneers and innovators are pushing the envelope and deploying something new in order to spur AI-RAN advancements.

The pioneers are ready to implement AI-RAN now, but the majority of deployments will be a year or two behind, and that’s par for the course. Network change can be slow to materialize, as was the case with 5G Standalone (SA) networks, but adoption will happen in time.

Those MNOs willing to start the journey now will have more time to learn both the AI-RAN technology as well as the necessary changes in operations and business models. That head start will put them firmly ahead of the competition on the path to intelligent, next-generation AI-powered networks.
<h2>About the author</h2>
Rob Hughes is Head of Wireless Marketing at <a href="https://www.1finity.com/">1Finity,</a> overseeing solution and marketing strategy for the company’s wireless portfolio. With over 20 years’ experience in telecommunications, he has a broad range of expertise in wireless, optical, wireline, enterprise, open networking, analytics and cable solutions. Rob has a unique perspective based on both business and technical experience. He studied Engineering at University of Victoria, British Columbia, and taught Marketing Strategy at the University of Texas.]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/how-ai-ran-paves-the-way-to-performance-and-profits-20260413/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163048</post-id>	</item>
		<item>
		<title>StorMagic and HiveRadar target off-grid compute with mobile edge data centers</title>
		<link>https://www.edgeir.com/stormagic-and-hiveradar-target-off-grid-compute-with-mobile-edge-data-centers-20260413</link>
					<comments>https://www.edgeir.com/stormagic-and-hiveradar-target-off-grid-compute-with-mobile-edge-data-centers-20260413#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Mon, 13 Apr 2026 10:00:17 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[EDGE Data Centers]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162982</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173346/StorMagic-and-HiveRadar-edge-data-center.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="StorMagic and HiveRadar edge data center" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173346/StorMagic-and-HiveRadar-edge-data-center.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173346/StorMagic-and-HiveRadar-edge-data-center-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173346/StorMagic-and-HiveRadar-edge-data-center-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173346/StorMagic-and-HiveRadar-edge-data-center-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173346/StorMagic-and-HiveRadar-edge-data-center-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/stormagic"><span style="font-weight: 400;">StorMagic</span></a><span style="font-weight: 400;"> and HiveRadar have partnered to deliver a mobile </span><a href="https://www.edgeir.com/what-is-edge-computing-the-why-and-where-of-edge-computing-20250320"><span style="font-weight: 400;">edge computing</span></a><span style="font-weight: 400;"> solution combining StorMagic's SvHCI software and HiveRadar's </span><a href="https://www.hiveradar.com/products/edc/"><span style="font-weight: 400;">Portable Edge Data Center</span></a><span style="font-weight: 400;"> (P-EDC).</span>

<span style="font-weight: 400;">The solution is purpose-built to deliver secure, robust and high-performance IT infrastructure in remote, mobile and off-grid environments.</span>

<span style="font-weight: 400;">"Whether operating in disaster recovery zones, forward-operating environments, mobile command centers, remote film sets or distributed retail sites, our customers can now deploy a secure, high-performance virtualized environment in minutes," says Rahul Narsimhan, CEO, HiveRadar. "Together, StorMagic and HiveRadar customers will benefit from rapid deployment and travel-ready mobility, which is especially beneficial for defense, emergency response, energy, retail, entertainment and industrial operations users."</span>

<span style="font-weight: 400;">Key features include data-at-rest encryption, VM import for workload migration, caching for performance, real-time health monitoring, and low-latency field compute and storage.</span>

<span style="font-weight: 400;">With 5G and satellite integration, it also supports off-grid operations, quick deployment and secure connectivity.</span>

<span style="font-weight: 400;">Customers benefit from high availability, failover capabilities and easy virtualization of mobile infrastructures. While HiveRadar specializes in ruggedized IT infrastructure solutions, StorMagic builds lightweight, edge-optimized virtualization solutions.</span>

<span style="font-weight: 400;">The joint solution is now available via StorMagic's global partner and reseller network.</span>

<span style="font-weight: 400;">Late last year StorMagic and SNUC launched </span><a href="https://www.edgeir.com/stormagic-and-snuc-launch-rugged-hci-appliances-for-demanding-edge-deployments-20250930"><span style="font-weight: 400;">rugged HCI appliances</span></a><span style="font-weight: 400;"> for demanding edge deployments.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/stormagic-and-hiveradar-target-off-grid-compute-with-mobile-edge-data-centers-20260413/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162982</post-id>	</item>
		<item>
		<title>Lambda doubles down on NVIDIA stack with 10,000+ Blackwell GPUs and CPO networking push</title>
		<link>https://www.edgeir.com/lambda-doubles-down-on-nvidia-stack-with-10000-blackwell-gpus-and-cpo-networking-push-20260410</link>
					<comments>https://www.edgeir.com/lambda-doubles-down-on-nvidia-stack-with-10000-blackwell-gpus-and-cpo-networking-push-20260410#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 10:00:12 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[GPUs]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162978</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs-.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Lambda NVIDIA Blackwell GPUs" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs-.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs--300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs--1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs--150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06173028/Lambda-NVIDIA-Blackwell-GPUs--768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.edgeir.com/companies/lambda"><span style="font-weight: 400;">Lambda</span></a><span style="font-weight: 400;"> recently announced it’s becoming a launch partner for the NVIDIA Vera CPU platform and NVIDIA STX.</span>

<span style="font-weight: 400;">The GPU-native AI infrastructure provider will deploy NVIDIA Quantum-X800 InfiniBand photonics co-packaged optics in an </span><a href="https://www.edgeir.com/what-are-ai-factories-20250804"><span style="font-weight: 400;">AI factory</span></a><span style="font-weight: 400;"> with 10,000+ NVIDIA Blackwell Ultra GPUs. </span>

<span style="font-weight: 400;">Lambda's bare metal instances have made their way out of the lab and into the core cloud offering, giving users direct access to hardware while avoiding virtualization overhead for distributed AI training workloads.</span>

<span style="font-weight: 400;">Designed for launching thousands of parallel AI environments, the </span><a href="https://www.nvidia.com/en-us/data-center/vera-cpu/"><span style="font-weight: 400;">NVIDIA Vera CPU platform</span></a><span style="font-weight: 400;"> delivers high memory bandwidth, which benefits reinforcement learning and agentic AI workloads.</span>

<span style="font-weight: 400;">The NVIDIA STX is a modular architecture for AI storage that augments inference, analytics, and training with next-generation, hardware-optimized KV-cache management.</span>

<span style="font-weight: 400;">Co-Packaged Optics (CPO) networking enables faster, cost-efficient AI infrastructure suitable for large-scale AI factories, alleviating major efficiency bottlenecks found in current approaches.</span>

<span style="font-weight: 400;">“The race to build AI factories isn’t won on GPU counts alone,” says Dave Salvator, director of accelerated computing at NVIDIA. “Network architecture is what determines whether those systems can perform at scale. Getting this right is what allows AI infrastructure to power services used by hundreds of millions of people around the world.”</span>

<span style="font-weight: 400;">Lambda oversees one of the largest deployments of NVIDIA Quantum-X800 CPO switches, highlighting how critical network architecture is when scaling AI systems.</span>

<span style="font-weight: 400;">These announcements further bolster Lambda's AI infrastructure platform, which empowers frontier labs, enterprises, and hyperscalers with proven and energy-efficient workhorses built for reliability at scale.</span>

<span style="font-weight: 400;">Lambda continues its mission to make AI compute ubiquitous, leveraging a decade-long collaboration with NVIDIA to advance its Superintelligence Cloud platform.</span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/lambda-doubles-down-on-nvidia-stack-with-10000-blackwell-gpus-and-cpo-networking-push-20260410/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162978</post-id>	</item>
		<item>
		<title>DDN and Zadara target sovereign AI deployments with multi-tenant NVIDIA factory stack</title>
		<link>https://www.edgeir.com/ddn-and-zadara-target-sovereign-ai-deployments-with-multi-tenant-nvidia-factory-stack-20260409</link>
					<comments>https://www.edgeir.com/ddn-and-zadara-target-sovereign-ai-deployments-with-multi-tenant-nvidia-factory-stack-20260409#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 10:00:08 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[cloud infrastructure]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162974</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Zadara + DDN" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06172745/Zadara-DDN-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><a href="https://www.ddn.com/"><span style="font-weight: 400;">DDN</span></a><span style="font-weight: 400;"> and </span><a href="https://www.edgeir.com/companies/zadara"><span style="font-weight: 400;">Zadara</span></a><span style="font-weight: 400;"> announced a partnership to deliver high-performance AI infrastructure for </span><a href="https://www.edgeir.com/infrastructure-directory/cloud-sovereign"><span style="font-weight: 400;">sovereign clouds</span></a><span style="font-weight: 400;"> and multi-tenant AI factories, leveraging NVIDIA reference designs. </span>

<span style="font-weight: 400;">The integration combines the </span><a href="https://www.ddn.com/products/exascaler-cloud/"><span style="font-weight: 400;">DDN EXAScaler</span></a><span style="font-weight: 400;"> AI data platform with Zadara’s cloud-native, AI-optimized infrastructure for scalable, secure, and efficient AI deployments.</span>

<span style="font-weight: 400;">The solution addresses critical enterprise AI challenges, including infrastructure complexity, GPU performance, compliance and multi-tenant governance, that slow AI deployments.</span>

<span style="font-weight: 400;">“NVIDIA reference designs are accelerating the broad adoption of AI factories. Yet organizations need a cloud-native platform that can operationalize them simply and efficiently in real-world, multi-tenant environments while complying with data sovereignty regulations,” says Yoram Novick, CEO at Zadara. “DDN EXAScaler brings the high-performance AI data foundation needed to meet demanding sovereign and enterprise requirements while Zadara delivers the orchestration, isolation, and policy control required to deploy AI infrastructure quickly, securely, and efficiently.”</span>

<span style="font-weight: 400;">Zadara’s platform streamlines AI infrastructure operations while enforcing performance, compliance and tenant isolation in accordance with NVIDIA best practices.</span>

<span style="font-weight: 400;">This partnership powers NVIDIA-enabled </span><a href="https://www.edgeir.com/what-are-ai-factories-20250804"><span style="font-weight: 400;">AI factories</span></a><span style="font-weight: 400;"> with secure multi-tenancy, predictable performance, policy-based orchestration and rapid time to market.</span>

<span style="font-weight: 400;">The integrated solution enables sovereign AI and multi-tenant environments with high throughput, GPU-aware scheduling, and compliance controls.</span>

<span style="font-weight: 400;">This initiative aims to make enterprise AI more accessible, affordable and practical by minimizing operational complexity while improving time-to-value.</span>

<span style="font-weight: 400;">The partnership between DDN and Zadara reinforces the role of AI factories as a cornerstone for enterprises, telcos, and service providers.</span>
		
					<wfw:commentRss>https://www.edgeir.com/ddn-and-zadara-target-sovereign-ai-deployments-with-multi-tenant-nvidia-factory-stack-20260409/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162974</post-id>	</item>
		<item>
		<title>Premio targets multi-camera edge AI with new Jetson Orin systems</title>
		<link>https://www.edgeir.com/premio-targets-multi-camera-edge-ai-with-new-jetson-orin-systems-20260408</link>
					<comments>https://www.edgeir.com/premio-targets-multi-camera-edge-ai-with-new-jetson-orin-systems-20260408#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 10:00:45 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[Technology & Architecture]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[edge AI]]></category>
		<category><![CDATA[edge infrastructure]]></category>
		<category><![CDATA[Premio]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=162989</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Premio JCO Series" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/06174021/Premio-JCO-Series-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">Rugged edge AI and embedded computing provider </span><a href="https://www.edgeir.com/companies/premio-inc"><span style="font-weight: 400;">Premio</span></a><span style="font-weight: 400;"> added two new models to its </span><a href="https://premioinc.com/collections/jco-1000-orn-series-ai-edge-computer?source=EIN&amp;campaign=03_2026_JCO1000ORN_Launch"><span style="font-weight: 400;">JCO-Series</span></a><span style="font-weight: 400;"> of rugged NVIDIA Jetson Orin </span><a href="https://www.edgeir.com/what-is-edge-ai-and-what-is-it-used-for-20250321"><span style="font-weight: 400;">edge AI</span></a><span style="font-weight: 400;"> computers at ISC West 2026.</span>

<span style="font-weight: 400;">Designed for next-gen vision AI applications, these models power multi-camera deployments used in multiple industries including security, transportation and smart infrastructure by supporting up to four GMSL2 cameras.</span>

<span style="font-weight: 400;">“As AI-driven vision systems continue to transform industries such as security, transportation, and smart infrastructure, organizations need reliable computing platforms capable of processing data directly at the edge," says Dustin Seetoo, VP of product marketing at Premio. “With the JCO-1000-ORN-B and JCO-1000-ORN-C, we are expanding the </span><a href="https://www.edgeir.com/premio-launches-rugged-jetson-orin-edge-computer-for-harsh-ai-deployments-20250923"><span style="font-weight: 400;">JCO-1000-ORN Series</span></a><span style="font-weight: 400;"> to support multi-camera AI deployments while maintaining the rugged performance required for real-world edge environments.”</span>

<span style="font-weight: 400;">The systems are equipped with NVIDIA Jetson Orin NX/Nano modules, which provide up to 157 TOPS of AI performance and ruggedized I/O for edge environments.</span>

<span style="font-weight: 400;">Highlights include dual LAN or rugged M12 LAN connectors, high-speed NVMe M.2 storage, 4G/5G and Wi-Fi support, CAN Bus, and an extended operating temperature range (-20°C to 55°C).</span>

<span style="font-weight: 400;">The systems come in a small form factor, are vehicle-ready with 9–36VDC input, and are certified to CE, FCC and UL global standards.</span>

<span style="font-weight: 400;">The new models will be available for order by mid-Q2 2026. </span>]]></description>
		
					<wfw:commentRss>https://www.edgeir.com/premio-targets-multi-camera-edge-ai-with-new-jetson-orin-systems-20260408/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">162989</post-id>	</item>
		<item>
		<title>Hosted.ai raises $19M to tackle GPU underutilization and reshape AI infrastructure economics</title>
		<link>https://www.edgeir.com/hosted-ai-raises-19m-to-tackle-gpu-underutilization-and-reshape-ai-infrastructure-economics-20260407</link>
					<comments>https://www.edgeir.com/hosted-ai-raises-19m-to-tackle-gpu-underutilization-and-reshape-ai-infrastructure-economics-20260407#respond</comments>
		
		<dc:creator><![CDATA[Stephen Mayhew]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 22:42:11 +0000</pubDate>
				<category><![CDATA[Digital Infrastructure News]]></category>
		<category><![CDATA[M&A & Investment]]></category>
		<category><![CDATA[AI infrastructure]]></category>
		<category><![CDATA[GPUs]]></category>
		<category><![CDATA[Hosted.ai]]></category>
		<category><![CDATA[infrastructure investment]]></category>
		<guid isPermaLink="false">https://www.edgeir.com/?p=163006</guid>

					<description><![CDATA[<p><img width="1200" height="750" src="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Hosted.ai Seed Round" decoding="async" loading="lazy" srcset="https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round.png 1200w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round-300x188.png 300w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round-1024x640.png 1024w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round-150x94.png 150w, https://d27aquackk44od.cloudfront.net/wp-content/uploads/2026/04/07184128/Hosted.ai-Seed-Round-768x480.png 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p><span style="font-weight: 400;">AI infrastructure startup </span><a href="https://www.edgeir.com/companies/hosted-ai"><span style="font-weight: 400;">hosted·ai</span></a><span style="font-weight: 400;"> secured $19M in seed funding to simplify and optimize GPU infrastructure for AI. The round was led by Creandum, with participation from investors including Repeat VC and People Ventures. </span>

<span style="font-weight: 400;">The startup focuses on the inefficiencies of GPU infrastructure, including high costs, low utilization (average usage is just 40 percent) and limited access for regional service providers.</span>

<span style="font-weight: 400;">hosted·ai's software stack includes:</span>
<ul>
<li><span style="font-weight: 400;">hosted·ai: A </span><a href="https://www.edgeir.com/what-is-gpu-as-a-service-gpuaas-20250212"><span style="font-weight: 400;">GPUaaS</span></a><span style="font-weight: 400;"> platform improving GPU utilization by up to 5x, reducing costs, and enabling resource sharing.</span></li>
<li><a href="https://packet.ai/"><span style="font-weight: 400;">packet·ai</span></a><span style="font-weight: 400;">: A neocloud service leveraging optimized GPU infrastructure for competitive pricing.</span></li>
<li><a href="http://gpuaas.com"><span style="font-weight: 400;">GPUaaS.com</span></a><span style="font-weight: 400;">: A matchmaking service connecting enterprises with GPU providers for scalable solutions.</span></li>
</ul>

<span style="font-weight: 400;">The company targets the “GPU waste” problem and aims to give regional service providers a share of the AI infrastructure market.</span>

<span style="font-weight: 400;">“The GPU market has a waste problem, not a scarcity problem,” says Ditlev Bredahl, CEO of Hosted.ai. “We’ve spent 25 years building infrastructure software that makes service providers competitive and the GPU opportunity is the biggest we’ve seen. This funding lets us move faster: more platform, more partners, more regions. We’re building the operating system for the GPU economy, and this round puts us in a strong position to do exactly that.”</span>

<span style="font-weight: 400;">Some of the founding team members have previous experience working with infrastructure technologies at VMware, NVIDIA or XenSource.</span>

<span style="font-weight: 400;">Founded in 2024 and launched in 2025, hosted·ai is a global company offering products across the US, EMEA and Asia-Pacific.</span>

<span style="font-weight: 400;">The company’s mission is to revolutionize the economics of GPU computing so that AI infrastructure becomes affordable and accessible for developers and companies alike.</span>
		
					<wfw:commentRss>https://www.edgeir.com/hosted-ai-raises-19m-to-tackle-gpu-underutilization-and-reshape-ai-infrastructure-economics-20260407/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">163006</post-id>	</item>
	</channel>
</rss>
