<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/">

<channel>
	<title>NVIDIA Blog</title>
	<atom:link href="https://blogs.nvidia.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://blogs.nvidia.com/</link>
	<description></description>
	<lastBuildDate>Mon, 16 Mar 2026 23:29:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Roche Scales NVIDIA AI Factories Globally to Accelerate Drug Discovery, Diagnostic Solutions and Manufacturing Breakthroughs</title>
		<link>https://blogs.nvidia.com/blog/roche-ai-factories-omniverse/</link>
		
		<dc:creator><![CDATA[Constantin Landers]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 20:30:59 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[AI Factory]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Digital Twin]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<category><![CDATA[Healthcare and Life Sciences]]></category>
		<category><![CDATA[Industrial and Manufacturing]]></category>
		<category><![CDATA[NVIDIA Blackwell]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90895</guid>

					<description><![CDATA[Roche's new deployment spans more than 3,500 NVIDIA Blackwell GPUs across its worldwide operations and embedded across the entire value chain, massively scaling R&#038;D productivity, next-generation diagnostics and manufacturing efficiencies.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/hc-press-roche-gtcsj26-4853250-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/hc-press-roche-gtcsj26-4853250-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Roche Scales NVIDIA AI Factories Globally to Accelerate Drug Discovery, Diagnostic Solutions and Manufacturing Breakthroughs]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA DSX Air Boosts Time to Token With Accelerated Simulation for AI Factories</title>
		<link>https://blogs.nvidia.com/blog/dsx-air-simulation-ai-factories/</link>
		
		<dc:creator><![CDATA[Scott Martin]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 20:00:48 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[AI Factory]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<category><![CDATA[NVIDIA DGX]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90852</guid>

					<description><![CDATA[Setting up AI factories in simulation — decreasing deployment time from months to days — is accelerating the next industrial revolution. Nowhere was that more apparent than at GTC 2026, in San Jose, where NVIDIA founder and CEO Jensen Huang introduced NVIDIA DSX Air. Part of NVIDIA DSX Sim in the DSX platform, NVIDIA’s blueprint [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">Setting up AI factories in simulation — decreasing deployment time from months to days — is accelerating the next industrial revolution.</span></p>
<p><span style="font-weight: 400;">Nowhere was that more apparent than at GTC 2026, in San Jose, where NVIDIA founder and CEO Jensen Huang introduced NVIDIA DSX Air. Part of NVIDIA DSX Sim in the <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-releases-vera-rubin-dsx-ai-factory-reference-design-and-omniverse-dsx-digital-twin-blueprint-with-broad-industry-support">DSX platform, NVIDIA’s blueprint for AI factories</a>, DSX Air is a software-as-a-service platform for logically simulating AI factories. It delivers high‑fidelity digital simulations of NVIDIA hardware infrastructure, including GPUs, SuperNICs, DPUs and switches, and it integrates with leading partner solutions for storage and routing, security, orchestration and more via open, API-based connectivity.</span></p>
<p><span style="font-weight: 400;">NVIDIA DSX Air enables a complete AI factory ecosystem, uniting NVIDIA infrastructure with partner technologies to deliver full‑stack simulation and accelerate complex AI deployments.    </span></p>
<p><span style="font-weight: 400;">Companies building some of the world’s most advanced AI infrastructure, including </span><span style="font-weight: 400;">CoreWeave</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">are already using DSX Air to simulate and validate their environments long before hardware reaches the loading dock. The development underscores a new reality: simulation is now essential to accelerating AI deployment at scale.</span></p>
<p><span style="font-weight: 400;">DSX Air allows organizations to construct a full digital twin of their AI factory — compute, networking, storage, orchestration and security — before a single server is unboxed. By shifting integration and troubleshooting into simulation, customers are reducing the time to first token from weeks or months to mere days or hours, saving enormous amounts of time and costs.</span></p>
<p><span style="font-weight: 400;">A simple analogy captures the idea: it’s like IT mirroring your laptop to set up a new one, except the “laptop” is a hyperscale AI factory and the “mirroring” is a complete, high‑fidelity replica of the production environment.</span></p>
<p><span style="font-weight: 400;">For operators racing to bring new AI capacity online, this change is transformative.</span></p>
<h2><b>Building a Platform for an Entire Ecosystem</b></h2>
<p><span style="font-weight: 400;">The NVIDIA DSX Air simulation platform is designed to support the entire AI factory ecosystem. Server manufacturers, orchestration vendors, storage providers and security partners can all validate their offerings alongside NVIDIA infrastructure — together, in one environment, at scale.</span></p>
<p><span style="font-weight: 400;">This ecosystem‑wide capability is already reshaping partner workflows.</span></p>
<p><span style="font-weight: 400;">Server manufacturers, which serve as the primary channel for enterprise inference, can now model and validate their reference architectures without building expensive physical labs. Enterprise AI environments rarely fit rigid designs, and customers often require bespoke configurations. With DSX Air, manufacturers can create digital twins tailored to specific customer needs, test their software stacks and deliver validated solutions without touching hardware.</span></p>
<p><span style="font-weight: 400;">Orchestration vendors — critical for enterprises and tier‑2 clouds that need turnkey AI services — gain the ability to test at scale. At GTC, NVIDIA showcased a multi‑tenant RTX PRO Server environment running entirely in simulation, with </span><span style="font-weight: 400;">Netris</span> <span style="font-weight: 400;">providing network orchestration, </span><span style="font-weight: 400;">Rafay</span> <span style="font-weight: 400;">handling host orchestration and </span><a target="_blank" href="https://www.nvidia.com/en-us/software/run-ai/"><span style="font-weight: 400;">NVIDIA Run:ai</span></a><span style="font-weight: 400;"> optimizing GPU allocation. These partners can now validate complex workflows under realistic conditions without deploying physical clusters.</span></p>
<p><span style="font-weight: 400;">The simulation environment is also valuable for validating the data platforms that power AI factories. Instead of requiring large physical clusters, DSX Air allows ecosystem partners to model complete AI workflows alongside NVIDIA compute, networking and software infrastructure. At GTC, the booth demonstration features a video retrieval-augmented generation workload running on the </span><span style="font-weight: 400;">VAST</span><span style="font-weight: 400;"> AI Operating System, including a fully operational VAST cluster with DataEngine nodes and the video search and summarization front end. DataEngine triggers and functions process and index video content through an end-to-end pipeline, illustrating how AI applications can be designed, tested and validated inside the DSX Air simulation before deploying physical infrastructure.</span></p>
<p><span style="font-weight: 400;">Security vendors — facing some of the most demanding validation requirements — can now test multi‑tenant policies, DPU‑accelerated isolation and threat detection in a realistic environment. The GTC demo includes </span><span style="font-weight: 400;">Check Point</span><span style="font-weight: 400;">’s distributed firewall running on simulated BlueField DPUs, </span><span style="font-weight: 400;">TrendAI Vision One </span><span style="font-weight: 400;">for threat detection and </span><span style="font-weight: 400;">Keysight CyPerf </span><span style="font-weight: 400;">generating realistic traffic. Security partners can identify vulnerabilities and validate policies in a customer’s digital twin long before production goes live.</span></p>
<p><span style="font-weight: 400;">Across the ecosystem, partners emphasized the same point: DSX Air gives them a complete, scalable and cost‑effective way to validate their solutions with NVIDIA infrastructure and with each other.</span></p>
<h2><b>Operating With a New Model to Accelerate Time to Token</b></h2>
<p><span style="font-weight: 400;">NVIDIA DSX Air isn’t just a deployment accelerator — it introduces a new operational model for AI factories.</span></p>
<p><span style="font-weight: 400;">On the first day, customers build their intended production environment entirely in simulation. They configure networking, compute, storage, orchestration, security and scheduling exactly as they plan to deploy them. They validate that everything works together, identify issues early and ensure the environment behaves as expected.</span></p>
<p><span style="font-weight: 400;">Next, they can deploy with confidence. Because the environment has already been tested end to end, the probability of a smooth bring‑up increases dramatically. Time to first token shrinks, and teams can focus on running workloads rather than troubleshooting infrastructure.</span></p>
<p><span style="font-weight: 400;">From then on, DSX Air becomes a safe environment for change management. Long‑lived simulations allow customers to test upgrades, rehearse maintenance windows, validate patches and predict operational impact before touching production. Only after changes succeed in simulation are they applied to the live environment, maximizing uptime and ensuring infrastructure availability.</span></p>
<p><span style="font-weight: 400;">This lifecycle approach reflects how modern AI factories can operate as they scale.</span></p>
<h2><b>Simulating AI Factories Becomes the Backbone of AI Infrastructure</b></h2>
<p><span style="font-weight: 400;">GTC showed that simulation is no longer a future concept — it is the new backbone of AI infrastructure deployment and operations. </span></p>
<p><span style="font-weight: 400;">NVIDIA DSX Air enables customers and partners to simulate everything in one place, accelerating deployment, reducing risk and ensuring day‑one performance at scale.</span></p>
<h2><b>Adopting NVIDIA DSX Air to Accelerate Deployments With Simulation</b></h2>
<p><span style="font-weight: 400;">Siam.AI,</span><span style="font-weight: 400;"> Thailand’s largest AI cloud provider, has accelerated its infrastructure deployment with NVIDIA DSX Air. Using simulation, Siam.AI embraced NVIDIA best practices well ahead of schedule, ensuring day-one operational expertise and validating their architecture in a virtual environment before the physical hardware even arrived.</span></p>
<p><span style="font-weight: 400;">Similarly, </span><span style="font-weight: 400;">Hydra Host</span><span style="font-weight: 400;"> is using DSX Air to accelerate development of Brokkr, its AI factory operating system for bare-metal GPU provisioning that’s used by dozens of GPU deployments globally. By simulating full-stack environments in DSX Air before deploying to production, Hydra Host can validate Brokkr’s automation and orchestration workflows across diverse networking and hardware configurations at scale. This simulation-first approach lets Hydra Host ship validated infrastructure faster to customers worldwide while minimizing risk to live systems as global AI demand grows.</span></p>
<p><span style="font-weight: 400;">As AI factories grow in size and complexity, the ability to validate full‑stack environments before hardware arrives will define the pace of innovation. NVIDIA DSX Air delivers that capability today, giving organizations the fastest possible path to first token and a more reliable way to operate AI infrastructure over time.</span></p>
<p><a target="_blank" href="http://www.nvidia.com/air/"><i><span style="font-weight: 400;">Learn more</span></i></a><i><span style="font-weight: 400;"> about NVIDIA DSX Air.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/ethernet-corp-blog-dsx-air-1280x680-4855450.png" type="image/png" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/ethernet-corp-blog-dsx-air-1280x680-4855450-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA DSX Air Boosts Time to Token With Accelerated Simulation for AI Factories]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Into the Omniverse: How Industrial AI and Digital Twins Accelerate Design, Engineering and Manufacturing Across Industries</title>
		<link>https://blogs.nvidia.com/blog/industrial-ai-digital-twins-omniverse/</link>
		
		<dc:creator><![CDATA[James McKenna]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 15:00:51 +0000</pubDate>
				<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[CUDA-X]]></category>
		<category><![CDATA[Digital Twin]]></category>
		<category><![CDATA[Industrial and Manufacturing]]></category>
		<category><![CDATA[Into the Omniverse]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90659</guid>

					<description><![CDATA[Industrial AI, digital twins, AI physics and accelerated AI infrastructure are empowering companies across industries to accelerate and scale the design, simulation and optimization of products, processes and facilities before building in the real world.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><i><span style="font-weight: 400;">Editor’s note: This post is part of </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/news/"><i><span style="font-weight: 400;">Into the Omniverse</span></i></a><i><span style="font-weight: 400;">, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/"><i><span style="font-weight: 400;">OpenUSD</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/"><i><span style="font-weight: 400;">NVIDIA Omniverse</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
<p><span style="font-weight: 400;">Industrial AI, digital twins, AI physics and accelerated AI infrastructure are empowering companies across industries to accelerate and scale the design, simulation and optimization of products, processes and facilities before building in the real world.</span></p>
<p><span style="font-weight: 400;">Earlier this month, </span><a target="_blank" href="https://nvidianews.nvidia.com/news/dassault-systemes-nvidia-industrial-ai"><span style="font-weight: 400;">NVIDIA and Dassault Systèmes</span></a><span style="font-weight: 400;"> announced a partnership that brings together Dassault Systèmes’ Virtual Twin platforms, NVIDIA accelerated computing, AI physics open models and </span><a target="_blank" href="https://developer.nvidia.com/cuda/cuda-x-libraries"><span style="font-weight: 400;">NVIDIA CUDA-X</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://developer.nvidia.com/omniverse?sortBy=developer_learning_library%2Fsort%2Ffeatured_in.omniverse%3Adesc%2Ctitle%3Aasc"><span style="font-weight: 400;">Omniverse</span></a><span style="font-weight: 400;"> libraries. This allows designers and engineers to use virtual twins and companions — trained on physics-based world models — to innovate faster, boost efficiency and deliver sustainable products.</span></p>
<p><span style="font-weight: 400;">Dassault Systèmes’ SIMULIA software now uses NVIDIA CUDA-X and AI physics libraries for AI-based virtual twin physics behavior — empowering designers and engineers to accurately and instantly predict outcomes in simulation.</span></p>
<p><span style="font-weight: 400;">NVIDIA is adopting Dassault Systèmes’</span> <a target="_blank" href="https://www.3ds.com/products/catia/systems-engineering/model-based-systems-design"><span style="font-weight: 400;">model-based systems engineering</span></a><span style="font-weight: 400;"> technologies to accelerate the design and global deployment of gigawatt-scale AI factories that are powering industrial and physical AI across industries. Dassault Systèmes will in turn deploy NVIDIA-powered </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/ai-factory/"><span style="font-weight: 400;">AI factories</span></a><span style="font-weight: 400;"> on three continents through its </span><a target="_blank" href="https://www.3ds.com/products/outscale"><span style="font-weight: 400;">OUTSCALE</span></a><span style="font-weight: 400;"> sovereign cloud, enabling its customers to run AI workloads while maintaining data residency and security requirements.</span></p>
<p><span style="font-weight: 400;">These efforts are already making a splash across industries, accelerating industrial development and production processes.</span></p>
<h2><b>Industrial AI Simulations, From Car Parts to Cheese Proteins </b></h2>
<p><a target="_blank" href="https://www.nvidia.com/en-us/glossary/digital-twin/"><span style="font-weight: 400;">Digital twins</span></a><span style="font-weight: 400;">, also known as virtual twins, and physics-based world models are already being deployed to advance industries.</span></p>
<p><span style="font-weight: 400;">In automotive, Lucid Motors is combining cutting-edge simulation, AI physics open models, Dassault Systèmes’ tools for </span><span style="font-weight: 400;">vehicle and powertrain engineering </span><span style="font-weight: 400;">and digital twin technology to accelerate innovation in electric vehicles. </span></p>
<p><span style="font-weight: 400;">In life sciences, scientists and researchers are using virtual twins, Dassault Systèmes’ science-validated world models and the </span><a target="_blank" href="https://www.nvidia.com/en-us/industries/healthcare-life-sciences/biopharma/"><span style="font-weight: 400;">NVIDIA BioNeMo</span></a><span style="font-weight: 400;"> platform to speed molecule and materials discovery, therapeutics design and sustainable food development.</span></p>
<p><span style="font-weight: 400;">The Bel Group is using technologies from Dassault Systèmes, supported by NVIDIA, to accelerate the development and production of healthier, more sustainable foods for millions of consumers.</span></p>
<p><span style="font-weight: 400;">The company is using Dassault Syst</span><span style="font-weight: 400;">è</span><span style="font-weight: 400;">mes’ industry world models to generate and study food proteins, creating non-dairy protein options that pair with its well-known cheeses, including Babybel. Using accurate, high-resolution virtual twins allows the Bel Group to study and develop validated research outcomes of food proteins more quickly and efficiently.</span></p>
<p><span style="font-weight: 400;">In industrial automation, </span><a target="_blank" href="https://automation.omron.com/en/us/industries/electric-vehicle-manufacturing/"><span style="font-weight: 400;">Omron</span></a><span style="font-weight: 400;"> is using virtual twins and physical AI to design and deploy automation technology with greater confidence — advancing the shift toward digitally validated production. </span></p>
<p><span style="font-weight: 400;">In the aerospace industry, researchers and engineers at Wichita State University’s National Institute for Aviation Research use virtual twins and AI companions powered by Dassault Systèmes’ Industry World Models and NVIDIA Nemotron open models to accelerate the design, testing and certification of aircraft.</span></p>
<p><iframe title="How the World’s Industries Are Being Transformed by Dassault Systèmes and NVIDIA" width="1200" height="675" src="https://www.youtube.com/embed/9taAfTbQpIA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>Learning From and Simulating the Real World </b></h2>
<p><span style="font-weight: 400;">Dassault Systèmes’ physics-based Industry World Models are trained to have PhD-level knowledge in fields like biology, physics and material sciences. This allows them to accurately simulate real-world environments and scenarios so teams can test industrial operations end to end — from supply chains to store shelves — before deploying changes in the real world.</span></p>
<p><span style="font-weight: 400;">These virtual models can help researchers and developers with workflows ranging from </span><a target="_blank" href="https://www.nvidia.com/en-us/deep-learning-ai/resources/genomics/"><span style="font-weight: 400;">DNA sequencing</span></a><span style="font-weight: 400;"> to strengthening manufactured materials for vehicles. </span></p>
<p><span style="font-weight: 400;">“Knowledge is encoded in the living world,” said Pascal Daloz, CEO of Dassault Systèmes, during his 3DEXPERIENCE World keynote. “With our virtual twins, we are learning from life and are also understanding it in order to replicate it and scale it.”</span></p>
<p><iframe title="CEO Fireside Chat With Jensen Huang and Pascal Daloz at Dassault Systèmes 3DEXPERIENCE World" width="1200" height="675" src="https://www.youtube.com/embed/3e6dR82pmdU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>Get Plugged In to Industrial AI</b></h2>
<p><span style="font-weight: 400;">Learn more about industrial and physical AI by registering for </span><a target="_blank" href="https://www.nvidia.com/gtc/?ncid=pa-srch-goog-224-prsp-rsa-en-us-3-l2&amp;_bt=794864499290&amp;_bk=gpu%20tech%20conference&amp;_bm=b&amp;_bn=g&amp;_bg=191037752574&amp;gad_source=1&amp;gad_campaignid=23502880536&amp;gclid=CjwKCAiAv5bMBhAIEiwAqP9GuNbBLok4ubPhmQL8DLRPF1hnZ-GaxjNKhsN7QSe4WUMnqO-TIO5n3xoC8ikQAvD_BwE"><span style="font-weight: 400;">NVIDIA GTC</span></a><span style="font-weight: 400;">, running March 16-19 in San Jose, kicking off with </span><a target="_blank" href="https://www.nvidia.com/gtc/keynote/"><span style="font-weight: 400;">NVIDIA founder and CEO Jensen Huang’s keynote address</span></a><span style="font-weight: 400;"> on Monday, March 16, at 11 a.m. PT. </span></p>
<p><span style="font-weight: 400;">At the conference:</span></p>
<ul>
<li><span style="font-weight: 400;">Explore an </span><a target="_blank" href="https://www.nvidia.com/gtc/sessions/industrial-ai-and-manufacturing/"><span style="font-weight: 400;">industrial AI agenda</span></a><span style="font-weight: 400;"> packed with hands-on sessions, customer stories and live demos. </span></li>
<li><span style="font-weight: 400;">Dive into the world of OpenUSD with a special session focused on </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-s81630/?ncid=so-link-998580"><span style="font-weight: 400;">OpenUSD for physical AI simulation</span></a><span style="font-weight: 400;">, as well as a full agenda of hands-on </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?sessions=S81630,S81492,S81613,DLIW82272,DLIT81639,DLIT81816,DLIT81800,DLIT81697,DLIT81757,CWES81740,CWES81600"><span style="font-weight: 400;">OpenUSD learning sessions</span></a><span style="font-weight: 400;">. </span></li>
<li><span style="font-weight: 400;">Find Dassault Systèmes in the industrial AI and robotics pavilion on the show floor and learn from Florence Hu-Aubigny, executive vice president of R&amp;D at Dassault Systèmes, who’ll present on </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-s81501/"><span style="font-weight: 400;">how virtual twins </span></a><span style="font-weight: 400;">are shaping the next industrial revolution.</span></li>
<li><span style="font-weight: 400;">Get a live look at GTC with our </span><a target="_blank" href="https://www.addevent.com/event/bjsbh426bzwy"><span style="font-weight: 400;">developer community livestream</span></a> <span style="font-weight: 400;">on March 18, where participants can </span><span style="font-weight: 400;">ask questions, request deep dives and talk directly with NVIDIA engineers in the chat.</span></li>
</ul>
<p><span style="font-weight: 400;">Learn how to build industrial and physical AI applications by attending </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?sessions=DLIT81484,DLIT81801,CWES81472,DLIT81800,DLIT81639,DLIT81798,DLIT81650,DLIT81774,DLIT81879"><span style="font-weight: 400;">these sessions at GTC</span></a><span style="font-weight: 400;">.</span></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2027/03/nv-ov-ito-feb-1280x680_credit_r4.png" type="image/png" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2027/03/nv-ov-ito-feb-1280x680_credit_r4-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[Into the Omniverse: How Industrial AI and Digital Twins Accelerate Design, Engineering and Manufacturing Across Industries]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>New NVIDIA Nemotron 3 Super Delivers 5x Higher Throughput for Agentic AI</title>
		<link>https://blogs.nvidia.com/blog/nemotron-3-super-agentic-ai/</link>
		
		<dc:creator><![CDATA[Kari Briski]]></dc:creator>
		<pubDate>Wed, 11 Mar 2026 16:00:21 +0000</pubDate>
				<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Nemotron]]></category>
		<category><![CDATA[NVIDIA NIM]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90771</guid>

					<description><![CDATA[Launched today, NVIDIA Nemotron 3 Super is a 120‑billion‑parameter open model with 12 billion active parameters designed to run complex agentic AI systems at scale.  Available now, the model combines advanced reasoning capabilities to efficiently complete tasks with high accuracy for autonomous agents. AI-Native Companies: Perplexity offers its users access to Nemotron 3 Super for [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">Launched today, NVIDIA Nemotron 3 Super is a 120‑billion‑parameter open model with 12 billion active parameters designed to run complex agentic AI systems at scale. </span></p>
<p><span style="font-weight: 400;">Available now, the model combines advanced reasoning capabilities to efficiently complete tasks with high accuracy for autonomous agents.</span></p>
<p><b>AI-Native Companies: </b><span style="font-weight: 400;">Perplexity</span><span style="font-weight: 400;"> offers its users access to Nemotron 3 Super for search and as one of 20 orchestrated models in Computer. Companies offering software development agents like </span><a target="_blank" href="https://www.coderabbit.ai/blog/faster-code-reviews-with-nemotron-3-super"><span style="font-weight: 400;">CodeRabbit</span></a><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Factory</span><span style="font-weight: 400;"> and </span><a target="_blank" href="http://greptile.com/blog/nvidia-nemotron-super-in-code-review"><span style="font-weight: 400;">Greptile</span></a><span style="font-weight: 400;"> are integrating the model into their AI agents along with proprietary models to achieve higher accuracy at lower cost. And life sciences and frontier AI organizations like </span><span style="font-weight: 400;">Edison Scientific</span><span style="font-weight: 400;"> and </span><span style="font-weight: 400;">Lila Sciences</span><span style="font-weight: 400;"> will use the model to power their agents for deep literature search, data science and molecular understanding.</span></p>
<p><b>Enterprise Software Platforms:</b><span style="font-weight: 400;"> Industry leaders such as </span><span style="font-weight: 400;">Amdocs</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Palantir</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Cadence</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Dassault Systèmes</span><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.siemens.com/en-us/products/fuse-eda-ai-system/"><span style="font-weight: 400;">Siemens</span></a><span style="font-weight: 400;"> are deploying and customizing the model to automate workflows in telecom, cybersecurity, semiconductor design and manufacturing. </span></p>
<p><span style="font-weight: 400;">As companies move beyond chatbots and into multi‑agent applications, they encounter two constraints.</span></p>
<p><span style="font-weight: 400;">The first is context explosion. Multi‑agent workflows generate up to </span><a target="_blank" href="https://www.anthropic.com/engineering/multi-agent-research-system"><span style="font-weight: 400;">15x more</span></a><span style="font-weight: 400;"> tokens than standard chat because each interaction requires resending full histories, including tool outputs and intermediate reasoning. </span></p>
<p><span style="font-weight: 400;">Over long tasks, this volume of context increases costs and can lead to goal drift, where agents lose alignment with the original objective.</span></p>
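Why resending full histories is so costly can be made concrete with a simplified back-of-the-envelope model (the token counts here are hypothetical, not measured figures): if each turn appends a fixed number of tokens and every turn resends the whole history, total tokens processed grow quadratically with the number of turns.

```python
def tokens_processed(turns, tokens_per_turn=500):
    # Turn i resends the entire history so far: i * tokens_per_turn tokens.
    # Summing over turns gives quadratic growth in the number of turns.
    return sum(i * tokens_per_turn for i in range(1, turns + 1))

short_chat = tokens_processed(4)    # 5,000 tokens for a short chat
agent_run = tokens_processed(40)    # 410,000 tokens for a long multi-agent run
```

A 10x longer workflow costs 82x the tokens under this model, which is why context handling dominates agentic inference bills.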
<p><span style="font-weight: 400;">The second is the thinking tax. Complex agents must reason at every step, but using large models for every subtask makes multi-agent applications too expensive and sluggish for practical applications.</span></p>
<p><span style="font-weight: 400;">Nemotron 3 Super has a 1‑million‑token context window, allowing agents to retain full workflow state in memory and preventing goal drift.</span></p>
<p><span style="font-weight: 400;">Nemotron 3 Super has set new standards, claiming the top spot on Artificial Analysis for efficiency and openness, with leading accuracy among models of the same size.</span></p>
<p><span style="font-weight: 400;">The model also powers the </span><a target="_blank" href="https://build.nvidia.com/nvidia/aiq"><span style="font-weight: 400;">NVIDIA AI-Q</span></a><span style="font-weight: 400;"> research agent, which holds the No. 1 position on the </span><a target="_blank" href="https://huggingface.co/spaces/muset-ai/DeepResearch-Bench-Leaderboard"><span style="font-weight: 400;">DeepResearch Bench</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://agentresearchlab.com/benchmarks/deepresearch-bench-ii/index.html#leaderboard"><span style="font-weight: 400;">DeepResearch Bench II</span></a><span style="font-weight: 400;"> leaderboards, benchmarks that measure an AI system’s ability to conduct thorough, multistep research across large document sets while maintaining reasoning coherence. </span></p>
<h2><b>Hybrid Architecture</b></h2>
<p><span style="font-weight: 400;">Nemotron 3 Super uses a hybrid mixture‑of‑experts (MoE) architecture that combines four major innovations to deliver up to </span><span style="font-weight: 400;">5x higher throughput</span><span style="font-weight: 400;"> and up to </span><span style="font-weight: 400;">2x higher accuracy</span><span style="font-weight: 400;"> than the previous Nemotron Super model. </span></p>
<ul>
<li><b>Hybrid Architecture:</b><span style="font-weight: 400;"> Mamba layers deliver </span><span style="font-weight: 400;">4x higher memory and compute efficiency,</span><span style="font-weight: 400;"> while transformer layers drive advanced reasoning.</span></li>
<li><b>MoE:</b><span style="font-weight: 400;"> Only 12 billion of its 120 billion parameters are active at inference. </span></li>
<li><b>Latent MoE:</b><span style="font-weight: 400;"> A new technique that improves accuracy by activating four expert specialists for the cost of one to generate the next token at inference.</span></li>
<li><b>Multi-Token Prediction:</b><span style="font-weight: 400;"> Predicts multiple future words simultaneously, resulting in </span><span style="font-weight: 400;">3x faster inference</span><span style="font-weight: 400;">.</span></li>
</ul>
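The sparse-activation idea behind MoE, where only a small fraction of the parameters fire for any given token, can be illustrated with a toy router. This NumPy sketch is a generic top-k MoE layer, not the Nemotron implementation; the sizes, router and gating here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 16, 10, 1   # hidden size, expert count, experts used per token

# Each expert is a small weight matrix; a learned router picks which one(s) to run.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x):
    """Send token vector x through its top-k experts; the other experts stay idle."""
    logits = x @ router_w
    top = np.argsort(logits)[-TOP_K:]                         # chosen expert indices
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()   # softmax over chosen
    y = sum(g * (x @ experts[i]) for g, i in zip(gates, top))
    return y, top

y, chosen = moe_forward(rng.standard_normal(D))
active_frac = TOP_K / N_EXPERTS   # only 10% of expert parameters touched per token
```

The same proportion is what makes 12B-of-120B active parameters affordable at inference: compute scales with the active experts, not the full parameter count.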
<p><span style="font-weight: 400;">On the NVIDIA Blackwell platform, the model runs in NVFP4 precision. That cuts memory requirements and pushes inference up to </span><span style="font-weight: 400;">4x faster than</span><span style="font-weight: 400;"> FP8 on NVIDIA Hopper, with no loss in accuracy. </span></p>
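NVFP4 is a block-scaled 4-bit floating-point format whose exact layout is NVIDIA-specific, but the underlying memory-for-precision trade can be sketched with a plain symmetric 4-bit integer quantizer. This is a toy stand-in for illustration only, not NVFP4 itself.

```python
import numpy as np

def quantize_4bit(x):
    """Map floats onto 16 signed integer levels [-8, 7] with one shared scale."""
    scale = np.abs(x).max() / 7.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal(1024).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)

# Packed two values per byte, 4-bit weights take 1/8 the bytes of FP32 and
# half the bytes of FP8, at the cost of rounding error of at most scale / 2.
packed_bytes = len(q) // 2
max_err = float(np.abs(w - w_hat).max())
```

Real low-bit formats keep accuracy by using per-block scales rather than the single global scale shown here.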
<h2><b>Open Weights, Data and Recipes</b></h2>
<p><span style="font-weight: 400;">NVIDIA is releasing Nemotron 3 Super with open weights under a permissive license. Developers can deploy and customize it on workstations, in data centers or in the cloud.</span></p>
<p><span style="font-weight: 400;">The model was trained on synthetic data generated using frontier reasoning models. NVIDIA is publishing the complete methodology, including over 10 trillion tokens of pre- and post-training datasets, 15 training environments for reinforcement learning and evaluation recipes. Researchers can further use the NVIDIA NeMo platform to fine-tune the model or build their own. </span></p>
<h2><b>Use in Agentic Systems</b></h2>
<p><span style="font-weight: 400;">Nemotron 3 Super is designed to handle complex subtasks inside a multi-agent system. </span></p>
<p><span style="font-weight: 400;">A software development agent can load an entire codebase into context at once, enabling end-to-end code generation and debugging without document segmentation. </span></p>
<p><span style="font-weight: 400;">In financial analysis, it can load thousands of pages of reports into memory, eliminating the need to re-reason across long conversations and improving efficiency. </span></p>
<p><span style="font-weight: 400;">Nemotron 3 Super offers high-accuracy tool calling, helping autonomous agents reliably navigate massive function libraries and avoid execution errors in high-stakes environments like autonomous security orchestration in cybersecurity.</span></p>
<h2><b>Availability</b></h2>
<p><span style="font-weight: 400;">NVIDIA Nemotron 3 Super, part of the </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-debuts-nemotron-3-family-of-open-models"><span style="font-weight: 400;">Nemotron 3 family</span></a><span style="font-weight: 400;">, can be accessed at </span><a target="_blank" href="https://build.nvidia.com/nvidia/nemotron-3-super-120b-a12b"><span style="font-weight: 400;">build.nvidia.com</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="http://perplexity.ai"><span style="font-weight: 400;">Perplexity</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://openrouter.ai/nvidia/nemotron-3-super-120b-a12b:free"><span style="font-weight: 400;">OpenRouter</span></a> <span style="font-weight: 400;">and </span><a target="_blank" href="https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8"><span style="font-weight: 400;">Hugging Face</span></a><span style="font-weight: 400;">. Dell Technologies is bringing the model to the Dell Enterprise Hub on Hugging Face, optimized for on-premises deployment on the Dell AI Factory, advancing multi-agent AI workflows. </span><a target="_blank" href="https://community.hpe.com/t5/the-cloud-experience-everywhere/operationalizing-agentic-ai-with-nvidia-nemotron-and-hpe-agents/ba-p/7262654"><span style="font-weight: 400;">HPE</span></a><span style="font-weight: 400;"> is also bringing NVIDIA Nemotron to its agents hub to help ensure scalable enterprise adoption of agentic AI. </span></p>
<p><span style="font-weight: 400;">Enterprises and developers can deploy the model through several partners:</span></p>
<ul>
<li><span style="font-weight: 400;"><strong>Cloud Service Providers</strong>: Available now on Google Cloud’s Vertex AI and Oracle Cloud Infrastructure, and coming soon to Amazon Web Services through Amazon Bedrock and Microsoft Azure.</span></li>
<li><span style="font-weight: 400;"><strong>NVIDIA Cloud Partners</strong>: </span><span style="font-weight: 400;">CoreWeave</span><span style="font-weight: 400;">, </span><a target="_blank" href="https://www.crusoe.ai/cloud/managed-inference"><span style="font-weight: 400;">Crusoe</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://nebius.com/blog/posts/nemotron3-super-now-available"><span style="font-weight: 400;">Nebius</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.together.ai/blog/nvidia-nemotron-3-super"><span style="font-weight: 400;">Together AI</span></a><span style="font-weight: 400;">.</span></li>
<li><span style="font-weight: 400;"><strong>Inference Service Providers</strong>: </span><a target="_blank" href="https://www.baseten.co/blog/introducing-nemotron-3-super"><span style="font-weight: 400;">Baseten</span></a><span style="font-weight: 400;">, <a target="_blank" href="https://developers.cloudflare.com/changelog/post/2026-03-11-nemotron-3-super-workers-ai">Cloudflare</a>, </span><a target="_blank" href="https://deepinfra.com/blog/nvidia-nemotron-3-super-release"><span style="font-weight: 400;">DeepInfra</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://app.fireworks.ai/models/fireworks/nvidia-nemotron-3-super-120b-a12b-fp8"><span style="font-weight: 400;">Fireworks AI</span><span style="font-weight: 400;">,</span></a> <a target="_blank" href="http://inference.net/blog/nemotron-finetuning"><span style="font-weight: 400;">Inference.net</span></a><span style="font-weight: 400;">,</span> <a target="_blank" href="https://lightning.ai/nvidia-nemotron-3-super"><span style="font-weight: 400;">Lightning AI</span></a><span style="font-weight: 400;">,</span> <a target="_blank" href="https://modal.com/docs/examples/nemotron_inference"><span style="font-weight: 400;">Modal</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://friendli.ai/blog/nvidia-nemotron-3-super"><span style="font-weight: 400;">FriendliAI</span></a><span style="font-weight: 400;">.</span></li>
<li><span style="font-weight: 400;"><strong>Data Platforms and Services</strong>: </span><span style="font-weight: 400;">Distyl,</span> <span style="font-weight: 400;">Dataiku</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">DataRobot</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Deloitte</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">EY</span><span style="font-weight: 400;"> and </span><span style="font-weight: 400;">Tata Consultancy Services.</span></li>
</ul>
<p><span style="font-weight: 400;">The model is packaged as an </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/products/nim-microservices/"><span style="font-weight: 400;">NVIDIA NIM</span></a><span style="font-weight: 400;"> microservice, allowing deployment from on-premises systems to the cloud.</span></p>
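NIM microservices expose an OpenAI-compatible chat-completions API, so a request body looks the same whether it targets a local container or a hosted endpoint. The endpoint URL and model id below are assumptions inferred from the links above; verify the exact values against the NIM documentation.

```python
import json

# Assumed values, inferred from build.nvidia.com; check the NIM docs before use.
BASE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL_ID = "nvidia/nemotron-3-super-120b-a12b"

def chat_payload(prompt, max_tokens=256):
    """Build an OpenAI-style chat-completions request body for a NIM endpoint."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,
    }

body = chat_payload("List three risks in this deployment plan.")
payload_str = json.dumps(body)
# POST payload_str to BASE_URL with an "Authorization: Bearer <API key>" header.
```

Because the interface is OpenAI-compatible, existing client libraries can be pointed at a NIM deployment by changing only the base URL and model id.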
<p><i><span style="font-weight: 400;">Stay up to date on agentic AI, NVIDIA </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><i><span style="font-weight: 400;">Nemotron</span></i></a><i><span style="font-weight: 400;"> and more by subscribing to </span></i><a target="_blank" href="https://www.nvidia.com/en-us/executive-insights/generative-ai-tools/?modal=stay-inf"><i><span style="font-weight: 400;">NVIDIA AI news</span></i></a><i><span style="font-weight: 400;">,</span></i><a target="_blank" href="https://developer.nvidia.com/community"><i><span style="font-weight: 400;"> joining the community</span></i></a><i><span style="font-weight: 400;">, and following NVIDIA AI on </span></i><a target="_blank" href="https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all"><i><span style="font-weight: 400;">LinkedIn</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://www.instagram.com/nvidiaai/?hl=en"><i><span style="font-weight: 400;">Instagram</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://x.com/NVIDIAAIDev"><i><span style="font-weight: 400;">X</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://www.facebook.com/NVIDIAAI"><i><span style="font-weight: 400;">Facebook</span></i></a><i><span style="font-weight: 400;">. </span></i></p>
<p><i><span style="font-weight: 400;">Explore </span></i><a target="_blank" href="https://youtube.com/playlist?list=PL5B692fm6--vdRKB14FImVi7MTJ77zjn4&amp;feature=shared"><i><span style="font-weight: 400;">self-paced video tutorials and livestreams</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/nemotron-3-super-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/nemotron-3-super-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[New NVIDIA Nemotron 3 Super Delivers 5x Higher Throughput for Agentic AI]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA GTC 2026: Live Updates on What’s Next in AI</title>
		<link>https://blogs.nvidia.com/blog/gtc-2026-news/</link>
		
		<dc:creator><![CDATA[NVIDIA Writers]]></dc:creator>
		<pubDate>Wed, 11 Mar 2026 15:00:08 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90757</guid>

					<description><![CDATA[Rolling coverage from San Jose, including NVIDIA CEO Jensen Huang’s keynote, news highlights, live demos and on‑the‑ground color through March 19.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/26gtc-sj-preshow-various-_MF92538_sized-scaled.jpg" type="image/jpeg" width="2048" height="1152">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/26gtc-sj-preshow-various-_MF92538_sized-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA GTC 2026: Live Updates on What’s Next in AI]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>As Open Models Spark AI Boom, NVIDIA Jetson Brings It to Life at the Edge</title>
		<link>https://blogs.nvidia.com/blog/jetson-generative-ai-edge-oss/</link>
		
		<dc:creator><![CDATA[Chen Su]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 16:43:40 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Cosmos]]></category>
		<category><![CDATA[Isaac]]></category>
		<category><![CDATA[Jetson]]></category>
		<category><![CDATA[Nemotron]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Physical AI]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90166</guid>

					<description><![CDATA[The Cat 306 CR mini-excavator weighs just under eight tons and fits inside a standard shipping container. It’s the machine a contractor rents when the job site is tight: a utility trench near a foundation, a basement dig in a dense neighborhood. The cab is roughly the size of a phone booth. The operator sits [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">The </span><a target="_blank" href="https://www.cat.com/en_US/products/new/equipment/excavators/mini-excavators/100084.html"><span style="font-weight: 400;">Cat 306 CR</span></a><span style="font-weight: 400;"> mini-excavator weighs just under eight tons and fits inside a standard shipping container. It’s the machine a contractor rents when the job site is tight: a utility trench near a foundation, a basement dig in a dense neighborhood.</span></p>
<p><span style="font-weight: 400;">The cab is roughly the size of a phone booth. The operator sits close to the controls, two joysticks, multiple functions per hand. It takes time to learn. It takes longer to speed up.</span></p>
<p><span style="font-weight: 400;">At CES earlier this year, that </span><a href="https://blogs.nvidia.com/blog/caterpillar-ces-2026/"><span style="font-weight: 400;">machine answered questions</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">In the demo, the Cat AI Assistant ran on </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-thor/"><span style="font-weight: 400;">NVIDIA Jetson Thor</span></a><span style="font-weight: 400;">, an edge AI platform built for real‑time inference in industrial and robotic systems. </span><a target="_blank" href="https://developer.nvidia.com/nemotron"><span style="font-weight: 400;">NVIDIA Nemotron</span></a><span style="font-weight: 400;"> speech models delivered fast, accurate natural voice interactions, while Qwen3 4B, served locally via vLLM, interpreted requests and generated responses with low latency, no cloud link required.</span></p>
<p><span style="font-weight: 400;">Beyond enterprise innovation, open models unlock new possibilities for developers to build and experiment freely. Running OpenClaw on NVIDIA Jetson enables developers to create private, always-on AI assistants at the edge — with zero application programming interface cost and full data privacy.</span></p>
<p><span style="font-weight: 400;">All Jetson developer kits support OpenClaw, offering the flexibility to switch across open models from 2 billion parameters to 30 billion. With a frontier-class AI assistant running locally, users can power morning briefings, automate daily tasks, perform code reviews and control smart home systems — all in real time.</span></p>
<h2>From the Cloud to the Edge</h2>
<p><span style="font-weight: 400;">For most of their recent history, open models lived where it was easiest to support them. </span></p>
<p><span style="font-weight: 400;">They ran in data centers, backed by elastic compute and persistent networks. Cloud deployments carry costs in latency and ongoing compute spend that scale with every query.</span></p>
<p><span style="font-weight: 400;">Physical systems optimize for something else. Low latency because machines interact with people and environments. Limited power because devices have hard limits. And consistent behavior because variability introduces risk.</span></p>
<p><span style="font-weight: 400;">There’s also a supply question. Memory shortages have driven up costs across the industry. Jetson brings compute and memory together in a system-on-module, accelerating customer hardware design and making sourcing and validation easier than with discrete component approaches.</span></p>
<p><span style="font-weight: 400;">And as models have grown more efficient, developers have also started asking a different question. Not which model performs best in isolation, but where it makes sense to run. </span></p>
<p><span style="font-weight: 400;">More often, the answer is on the device, starting from Jetson Orin Nano 8GB for entry-level generative AI models.  </span></p>
<h2>Building Autonomous Physical AI Systems at Scale</h2>
<p><span style="font-weight: 400;">For </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/generative-physical-ai/"><span style="font-weight: 400;">physical AI</span></a><span style="font-weight: 400;"> systems, generative AI models are expanding what’s possible. </span></p>
<p><span style="font-weight: 400;">Caterpillar&#8217;s in-cab Cat AI Assistant, which is in development, runs speech and language models locally alongside trusted machine context, supporting operator guidance and safety features.</span></p>
<p><span style="font-weight: 400;">At CES, </span><span style="font-weight: 400;">Franka Robotics</span> <span style="font-weight: 400;">showed what that looks like in robotics. The </span><a target="_blank" href="https://franka.de/news/franka-ces-2026-powering-the-future-of-embodied-ai"><span style="font-weight: 400;">company’s FR3 Duo dual-arm system ran the NVIDIA GR00T N1.6 model</span></a><span style="font-weight: 400;"> end-to-end onboard, perception to motion, no task scripting. The policy executes locally.</span></p>
<p><iframe title="YouTube video player" src="https://www.youtube.com/embed/ncKvzReJZyM?si=WWxO-PnJAOX6JIPj" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><span style="font-weight: 400;">In robotics research, the </span><a target="_blank" href="https://nvlabs.github.io/GEAR-SONIC/"><span style="font-weight: 400;">SONIC project from NVIDIA’s GEAR Lab</span></a><span style="font-weight: 400;"> trains a humanoid controller on over 100 million frames of motion-capture data, then deploys the resulting policy on a physical robot where the kinematic planner runs on </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/"><span style="font-weight: 400;">Jetson Orin</span></a><span style="font-weight: 400;"> at around 12 milliseconds per pass. The policy loop runs at 50 Hz. Everything executes onboard.</span></p>
<p><span style="font-weight: 400;">The pattern reaches into the developer community. A team from UIUC’s SIGRobotics club </span><a target="_blank" href="https://www.hackster.io/sigrobotics/matcha-bot-sigrobotics-embodied-ai-hackathon-1st-place-f0e520"><span style="font-weight: 400;">built a dual-arm matcha-making robot</span></a><span style="font-weight: 400;"> on Jetson Thor running the GR00T N1.5 model. It took first place at an NVIDIA embodied AI hackathon.</span></p>
<p><span style="font-weight: 400;">This research momentum continues at the NYU Center for Robotics and Embodied Intelligence. The group recently ran its </span><a target="_blank" href="https://yourownrobot.ai/"><span style="font-weight: 400;">YOR robot</span></a><span style="font-weight: 400;"> on Jetson Thor, using NVIDIA Blackwell compute to handle the heavy processing required for AI-driven movement. Early results show YOR performing intricate pick-and-place tasks with better generalization to new objects and robustness to scene variation, accelerating readiness for a wide range of household tasks like cooking and laundry.</span></p>
<p><span style="font-weight: 400;">Independent researchers are finding the same. Andrés Marafioti, a multimodal research lead at </span><span style="font-weight: 400;">Hugging Face</span><span style="font-weight: 400;">, </span><a target="_blank" href="https://www.linkedin.com/posts/andimarafioti_an-ai-agent-sent-me-to-bed-at-midnight-then-activity-7430244238950297601-OJlX/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACIoNTMBsMKQgXfIdyJvm7NsaP70ieqO9Tc"><span style="font-weight: 400;">built an agentic AI system on Jetson AGX Orin</span></a><span style="font-weight: 400;"> that routes tasks across models and schedules its own work. Late one night, the agent sent him a message: Go to sleep. Everything will be ready by morning.</span></p>
<p><span style="font-weight: 400;">Developer Ajeet Singh Raina from the Collabnix community has shown how to run </span><a target="_blank" href="https://www.ajeetraina.com/how-to-run-openclaw-moltbot-on-nvidia-jetson-thor-with-docker-model-runner-your-private-ai-assistant-at-the-edge/"><b>OpenClaw</b></a><span style="font-weight: 400;"> on NVIDIA Jetson Thor for a personal AI assistant that runs 24/7. This setup allows for private large language model inference for the user’s own data while the system manages emails and calendars through a local gateway.</span></p>
<h2>Jetson Is the New Standard</h2>
<p><span style="font-weight: 400;">NVIDIA Jetson has become a common platform for running open models at the edge.</span></p>
<p><span style="font-weight: 400;">It supports a wide range of open models and AI frameworks, giving developers flexibility for almost any generative AI workload at the edge. </span></p>
<p><img loading="lazy" decoding="async" class="wp-image-90741 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/image-6.png" alt="" width="1434" height="785" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/image-6.png 1434w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-6-960x526.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-6-1280x701.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-6-630x345.png 630w" sizes="auto, (max-width: 1434px) 100vw, 1434px" /></p>
<p><span style="font-weight: 400;">Model benchmarks are available at </span><a target="_blank" href="https://www.jetson-ai-lab.com/models/"><span style="font-weight: 400;">Jetson AI Lab</span></a><span style="font-weight: 400;">, along with tutorials from the open model community. Jetson Thor delivers leadership inference performance across all major generative AI models. </span></p>
<p><b>Gemma:</b><span style="font-weight: 400;"> Built on Google’s Gemini research, Gemma 3 is a versatile workhorse for Jetson. It is multimodal out of the box, which means it can see and talk in over 140 languages. On Jetson Thor, it handles a massive 128K context window. This makes it perfect for robots that need to remember a long list of complex or multistep instructions.</span></p>
<p><b>gpt-oss-20B:</b> <a target="_blank" href="https://openai.com/index/introducing-gpt-oss/"><span style="font-weight: 400;">This model from OpenAI</span></a><span style="font-weight: 400;"> lowers the barrier to deploying advanced AI by delivering near state-of-the-art reasoning performance in a model that can run locally on Jetson Thor and Orin for cost-efficient inference. </span></p>
<p><b>Mistral AI:</b><span style="font-weight: 400;"> The new Mistral 3 open model family delivers industry-leading accuracy, efficiency and customization capabilities for developers and enterprises. This family includes small, dense models ranging from 3B to 14B parameters that are fast and remarkably capable for their size. Jetson developers can use the vLLM container on NVIDIA Jetson Thor to achieve 52 tokens per second at single concurrency, scaling up to 273 tokens per second at a concurrency of eight.</span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><b>NVIDIA Cosmos</b></a>: <span style="font-weight: 400;">This leading, open, reasoning vision language model enables robots and AI agents to see, understand and act in the physical world like humans. Both the 8B and 2B models run on Jetson to deliver advanced spatial-temporal perception and reasoning capabilities. </span></p>
<p><a target="_blank" href="https://developer.nvidia.com/isaac/gr00t"><b>NVIDIA Isaac GR00T</b></a><b> N1.6</b><span style="font-weight: 400;"> is an open vision language action model (VLA) for generalist robot skills. Developers can use it to build robots that perceive their environment, reason about instructions and act across a wide range of tasks, environments and embodiments. On Jetson Thor, the full GR00T N1.6 pipeline executes onboard, delivering real-time perception, spatial awareness and responsive action.</span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/” with “https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><b>NVIDIA Nemotron</b></a><b>: </b><span style="font-weight: 400;">A family of open models, datasets and technologies that empower users to build efficient, accurate and specialized agentic AI systems. It’s designed for advanced reasoning, coding, visual understanding, agentic tasks, safety, speech and information. The Nemotron 3 Nano 9B model effectively runs on Jetson Orin Nano Super with llama.cpp with 9 tokens per second performance. </span></p>
<p><b>PI 0.5:</b><span style="font-weight: 400;"> A VLA model from Physical Intelligence that enables robots to understand instructions and autonomously execute complex real-world tasks with strong generalization and real-time adaptability. NVIDIA Jetson Thor delivers 120 action tokens per second to power responsive, low-latency physical AI deployment.</span></p>
<p><b>Qwen 3.5: </b><span style="font-weight: 400;">This family of models from Alibaba offers a mix of dense and mixture‑of‑experts models that deliver strong reasoning, coding, multimodal understanding and long‑context performance. Jetson Thor delivers optimized performance across Qwen models like the </span><a target="_blank" href="https://www.jetson-ai-lab.com/modelsMOE%20-35B-A3B%20M"><span style="font-weight: 400;">Qwen 3.5-35B-A3B</span></a><span style="font-weight: 400;"> model, which reasons at 35 tokens per second, making real-time interactivity possible. </span></p>
<p><span style="font-weight: 400;">Any developer can fine-tune these models to create specialized physical AI agents and seamlessly deploy them into physical AI systems. The NVIDIA Jetson platform supports popular AI frameworks including NVIDIA TensorRT, llama.cpp, Ollama, vLLM, SGLang and more.</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-90749 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7.png" alt="" width="1830" height="1034" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7.png 1830w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7-960x542.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7-1680x949.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7-1280x723.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7-1536x868.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7-630x356.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7-300x169.png 300w, https://blogs.nvidia.com/wp-content/uploads/2026/03/image-7-400x225.png 400w" sizes="auto, (max-width: 1830px) 100vw, 1830px" /></p>
<h2>Take On Open Models on Jetson</h2>
<p><span style="font-weight: 400;">Developers can dive into Hugging Face tutorials — including </span><a target="_blank" href="https://huggingface.co/blog/nvidia/cosmos-on-jetson"><span style="font-weight: 400;">Deploying Open Source Vision Language Models on Jetson</span></a><span style="font-weight: 400;"> — and catch the latest </span><a target="_blank" href="https://www.youtube.com/watch?v=u4ZA7XH7rN8"><span style="font-weight: 400;">livestream</span></a><span style="font-weight: 400;">. Learn from <a target="_blank" href="https://45.63.86.155/tutorials/openclaw/">this tutorial</a> and run OpenClaw on NVIDIA Jetson. </span></p>
<p><span style="font-weight: 400;">Join </span><a target="_blank" href="https://www.nvidia.com/gtc/"><span style="font-weight: 400;">GTC 2026</span></a><span style="font-weight: 400;"> next month to see it all in action. NVIDIA will show how open models are moving from data centers into machines operating in the physical world, including in a panel on the </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-s81844/"><span style="font-weight: 400;">Future of Industrial Autonomy</span></a><span style="font-weight: 400;">.</span></p>
<p><i><span style="font-weight: 400;">Watch the </span></i><a target="_blank" href="https://www.nvidia.com/gtc/keynote/"><i><span style="font-weight: 400;">GTC keynote</span></i></a><i><span style="font-weight: 400;"> from NVIDIA founder and CEO Jensen Huang and explore </span></i><a target="_blank" href="https://www.nvidia.com/gtc/sessions/physical-ai-days/"><i><span style="font-weight: 400;">physical AI</span></i></a><i><span style="font-weight: 400;">,</span></i><a target="_blank" href="https://www.nvidia.com/gtc/sessions/robotics/"><i><span style="font-weight: 400;"> robotics</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://www.nvidia.com/gtc/sessions/computer-vision-and-video-analytics/"><i><span style="font-weight: 400;">vision AI</span></i></a><i><span style="font-weight: 400;"> sessions.</span></i></p>
<aside style="background-color: #f4f4f4; border-left: 4px solid #d1d1d1; padding: 20px; margin: 20px 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; color: #333; line-height: 1.6; border-radius: 4px;">
<h3 style="margin-top: 0; color: #000; font-size: 1.25rem; border-bottom: 1px solid #d1d1d1; padding-bottom: 10px; margin-bottom: 15px; text-transform: uppercase; letter-spacing: 1px;">Caterpillar Technical Highlights</h3>
<ul style="list-style-type: none; padding-left: 0; margin-bottom: 0;">
<li style="margin-bottom: 12px;"><strong>NVIDIA Jetson Thor:</strong> Edge AI platform for real-time inference in industrial and robotics systems</li>
<li style="margin-bottom: 12px;"><strong>NVIDIA Riva:</strong> Speech AI framework using Parakeet ASR and Magpie TTS</li>
<li style="margin-bottom: 12px;"><strong>Qwen3 4B:</strong> Compact LLM for intent parsing and response generation</li>
<li style="margin-bottom: 12px;"><strong>vLLM:</strong> Efficient runtime for serving LLM inference at the edge</li>
<li style="margin-bottom: 12px;"><strong>CatHelios:</strong> Unified data platform providing trusted machine context</li>
<li style="margin-bottom: 0;"><strong>NVIDIA Omniverse:</strong> Digital twin and simulation frameworks for industrial workflows</li>
</ul>
</aside>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Jetson_OSS_KV_new_v013-scaled.jpg" type="image/jpeg" width="2048" height="1152">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Jetson_OSS_KV_new_v013-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[As Open Models Spark AI Boom, NVIDIA Jetson Brings It to Life at the Edge]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA Virtualizes Game Development With RTX PRO Server</title>
		<link>https://blogs.nvidia.com/blog/gdc-2026-virtual-game-development/</link>
		
		<dc:creator><![CDATA[Paul Logan]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 15:30:07 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[Creators]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[NVIDIA RTX]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<category><![CDATA[Virtualization]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90681</guid>

					<description><![CDATA[Game development teams are working across larger worlds, more complex pipelines and more distributed teams than ever. At the same time, many studios still rely on fixed, desk-bound GPU hardware for critical production work. At the Game Developers Conference (GDC) this week in San Francisco, NVIDIA is showcasing a new approach to bring together disparate [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">Game development teams are working across larger worlds, more complex pipelines and more distributed teams than ever. At the same time, many studios still rely on fixed, desk-bound GPU hardware for critical production work.</span></p>
<p><span style="font-weight: 400;">At the </span><a target="_blank" href="https://www.nvidia.com/en-us/events/gdc/"><span style="font-weight: 400;">Game Developers Conference (GDC)</span></a><span style="font-weight: 400;"> this week in San Francisco, NVIDIA is showcasing a new approach to bring together disparate workflows using virtualized game development on </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/products/rtx-pro-server/"><span style="font-weight: 400;">NVIDIA RTX PRO Servers</span></a><span style="font-weight: 400;">, powered by </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/"><span style="font-weight: 400;">NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/virtual-gpu-technology/"><span style="font-weight: 400;">NVIDIA vGPU</span></a><span style="font-weight: 400;"> software.</span></p>
<p><span style="font-weight: 400;">With the RTX PRO Server, studios can centralize and virtualize core workflows across creative, engineering, AI research and quality assurance (QA) — all on shared GPU infrastructure in the data center. </span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium wp-image-90687" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-960x540.jpg" alt="" width="960" height="540" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-630x355.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization-400x225.jpg 400w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-game-dev-virtualization.jpg 1999w" sizes="auto, (max-width: 960px) 100vw, 960px" /></p>
<p><span style="font-weight: 400;">This enables teams to maintain the responsiveness and visual fidelity they expect from workstation-class systems while improving infrastructure utilization, scalability, data security and operational consistency across teams and locations.</span></p>
<p><iframe loading="lazy" title="NVIDIA RTX PRO Server for Game Development" width="1200" height="675" src="https://www.youtube.com/embed/Yai8kh8rt0U?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>Simplifying Complex Workflows</b></h2>
<p><span style="font-weight: 400;">As game development studios scale, hardware can often sit underutilized in one location while other teams wait to access it for production work. QA capacity is hard to expand quickly. Over time, workstation hardware, drivers and tools diverge, making bugs harder to reproduce. AI workloads are often isolated on separate infrastructure, creating more operational overhead. </span></p>
<p><span style="font-weight: 400;">The NVIDIA RTX PRO Server helps studios move from workstation-by-workstation scaling to centralized GPU infrastructure. Studios can pool resources, allocate performance by workload and support parallel development, testing and AI workflows without expanding physical workstation sprawl.</span></p>
<p><span style="font-weight: 400;">Centralized GPU infrastructure enables studios to run AI training, simulation and game automation workloads overnight, then dynamically reallocate the same resources to interactive development during the day, improving overall utilization and reducing idle capacity.</span></p>
<p><span style="font-weight: 400;">The NVIDIA RTX PRO Server supports virtualized workflows for 3D graphics and AI across the game development lifecycle for:</span></p>
<ul>
<li style="font-weight: 300;" aria-level="1"><b>Artists:</b><span style="font-weight: 400;"> Providing virtual RTX workstations for traditional 3D and generative AI content-creation workflows.</span></li>
<li style="font-weight: 300;" aria-level="1"><b>Developers:</b><span style="font-weight: 400;"> Powering consistent, high-performance engineering environments for coding and 3D development.</span></li>
<li style="font-weight: 300;" aria-level="1"><b>AI researchers:</b><span style="font-weight: 400;"> Offering large-memory GPU profiles for fine-tuning, inference and AI agents.</span></li>
<li style="font-weight: 300;" aria-level="1"><b>QA teams:</b><span style="font-weight: 400;"> Enabling scalable game validation and performance testing using the same NVIDIA Blackwell architecture used by GeForce RTX 50 Series GPUs.</span></li>
</ul>
<p><span style="font-weight: 400;">This allows studios to support multiple teams — including across sites and contractors — on one common GPU platform, improving collaboration and reducing debugging issues that can arise from disparate hardware.</span></p>
<h2><b>Supporting AI and Engineering on Shared Infrastructure</b></h2>
<p><span style="font-weight: 400;">AI is becoming a core part of everyday game development, spanning coding, content creation, testing and live operations. As these workflows expand, studios need infrastructure that can support AI alongside traditional graphics workloads without introducing separate, siloed systems.</span></p>
<p><span style="font-weight: 400;">With the RTX PRO Server, studios can support coding agents, internal model experimentation and AI-assisted production workflows without spinning up a separate AI stack for every team.</span></p>
<p><span style="font-weight: 400;">The </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/"><span style="font-weight: 400;">NVIDIA RTX PRO 6000 Blackwell Server Edition GPU</span></a><span style="font-weight: 400;"> features a massive 96GB memory buffer, enabling developers to run multiple demanding applications simultaneously while supporting AI inference on larger models directly alongside real-time graphics workflows.</span></p>
<p><span style="font-weight: 400;">NVIDIA Multi-Instance GPU (MIG) technology partitions a single GPU into isolated instances with dedicated memory, compute and cache resources. Combined with NVIDIA vGPU software, MIG can help studios securely allocate GPU capacity across users and workloads. In combined MIG and vGPU configurations, a single RTX PRO 6000 Blackwell Server Edition GPU can support up to 48 concurrent users, maximizing utilization while maintaining performance isolation.</span></p>
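<p><span style="font-weight: 400;">As a rough, illustrative sketch (not an official sizing guide), the per-user VRAM implied by the figures above follows from simple division:</span></p>

```python
# Back-of-the-envelope arithmetic using only the figures quoted above:
# 96 GB of VRAM per RTX PRO 6000 Blackwell Server Edition GPU,
# and up to 48 concurrent vGPU users per GPU.
TOTAL_VRAM_GB = 96
MAX_USERS = 48

def vram_per_user(total_gb: float, users: int) -> float:
    """VRAM per concurrent user if the GPU is divided evenly, in GB."""
    return total_gb / users

# At maximum density each user gets a 2 GB slice; lighter densities
# leave larger profiles for heavier workloads.
for users in (MAX_USERS, 24, 8, 4):
    print(f"{users:2d} users -> {vram_per_user(TOTAL_VRAM_GB, users):.1f} GB each")
```

<p><span style="font-weight: 400;">In practice, vGPU profile sizes are fixed tiers set by the NVIDIA vGPU software rather than arbitrary even splits, so this is only the arithmetic ceiling.</span></p>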
<h2><b>Enterprise-Ready Deployment for Game Studios</b></h2>
<p><span style="font-weight: 400;">NVIDIA RTX PRO Servers are designed for enterprise-grade data-center operations. Studios can deploy virtual workstations on RTX PRO Servers via NVIDIA vGPU on supported hypervisor and remote workstation platforms.</span></p>
<p><span style="font-weight: 400;">That means RTX PRO Servers can fit into studios’ existing infrastructure and IT practices, rather than requiring one-off deployments.</span></p>
<p><span style="font-weight: 400;">Major game publishers </span><a target="_blank" href="https://www.nvidia.com/en-us/case-studies/activision/"><span style="font-weight: 400;">already use NVIDIA vGPU technology</span></a><span style="font-weight: 400;"> to scale centralized development infrastructure and improve efficiency at studio scale.</span></p>
<p><i><span style="font-weight: 400;">Learn more about the </span></i><a target="_blank" href="https://www.nvidia.com/en-us/data-center/products/rtx-pro-server/"><i><span style="font-weight: 400;">NVIDIA RTX PRO Server</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
<p><i><span style="font-weight: 400;">See these workflows live by joining </span></i><a target="_blank" href="https://www.nvidia.com/en-us/events/gdc/"><i><span style="font-weight: 400;">NVIDIA’s booth 1426 at GDC</span></i></a><i><span style="font-weight: 400;"> or attending </span></i><a target="_blank" href="https://www.nvidia.com/gtc/"><i><span style="font-weight: 400;">NVIDIA GTC</span></i></a><i><span style="font-weight: 400;">, running March 16-19 in San Jose, California. </span></i></p>
<p><i><span style="font-weight: 400;">See </span></i><a target="_blank" href="https://www.nvidia.com/en-eu/about-nvidia/terms-of-service/"><i><span style="font-weight: 400;">notice</span></i></a><i><span style="font-weight: 400;"> regarding software product information.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-rtx-pro-server-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gdc-rtx-pro-server-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA Virtualizes Game Development With RTX PRO Server]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA and ComfyUI Streamline Local AI Video Generation for Game Developers and Creators at GDC</title>
		<link>https://blogs.nvidia.com/blog/rtx-ai-garage-flux-ltx-video-comfyui-gdc/</link>
		
		<dc:creator><![CDATA[Michael Fukuyama]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 15:30:05 +0000</pubDate>
				<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Creators]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[NVIDIA RTX]]></category>
		<category><![CDATA[RTX AI Garage]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90699</guid>

					<description><![CDATA[Game developers and artists are building cinematic worlds and iconic characters — raising the bar for immersive experiences on NVIDIA RTX AI PCs.  At the Game Developers Conference (GDC) in San Francisco this week, NVIDIA announced a suite of updates that streamline AI video generation for concept development and storyboarding on RTX GPUs and the [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">Game developers and artists are building cinematic worlds and iconic characters — raising the bar for immersive experiences on </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-on-rtx/"><span style="font-weight: 400;">NVIDIA RTX AI PCs</span></a><span style="font-weight: 400;">. </span></p>
<p><span style="font-weight: 400;">At the Game Developers Conference (GDC) in San Francisco this week, NVIDIA announced a suite of updates that streamline AI video generation for concept development and storyboarding on RTX GPUs and the NVIDIA DGX Spark desktop supercomputer.</span></p>
<p><span style="font-weight: 400;">These announcements include:</span></p>
<ul>
<li>ComfyUI’s new App View with a simplified interface, lowering the barrier for entry for the popular generative AI tool.</li>
<li>RTX Video Super Resolution available for ComfyUI, a real-time 4K upscaler ideal for video generation — also available for developers as a Python Wheel.</li>
<li>NVFP4 and FP8 model variants of FLUX.2 Klein, available today (with NVFP4 support for LTX-2.3 coming soon), delivering up to 2.5x performance gains and 60% lower memory usage for both models.</li>
</ul>
<h2><b>Frictionless Local AI: Collaborate, Optimize, Customize</b></h2>
<p><span style="font-weight: 400;">Many of today’s popular AI applications are making it easier for beginners to try state-of-the-art models directly on their laptop or desktop.</span></p>
<p><span style="font-weight: 400;">For artists unfamiliar with node graphs, ComfyUI’s new App View presents workflows in a simplified interface. Users only need to enter a prompt, adjust simple parameters and hit generate. The full node-based experience remains available as Node View, and users can seamlessly switch between the two modes.</span></p>
<p><span style="font-weight: 400;">App View is compatible with the RTX optimizations in ComfyUI. Performance on RTX GPUs has improved by 40% since September, and ComfyUI now natively supports the NVFP4 and FP8 data formats. Combined, these optimizations deliver 2.5x faster performance with 60% lower VRAM usage using the NVFP4 format on NVIDIA GeForce RTX 50 Series GPUs, and 1.7x faster performance with 40% lower VRAM usage using FP8.</span></p>
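<p><span style="font-weight: 400;">To make those figures concrete, here is a quick, illustrative calculation (the 20 GB, 100-second FP16 baseline is hypothetical, chosen only to show the arithmetic):</span></p>

```python
# Projects the quoted gains onto a hypothetical FP16 baseline:
# NVFP4: 2.5x faster, 60% less VRAM; FP8: 1.7x faster, 40% less VRAM.
def with_savings(baseline_vram_gb, baseline_secs, speedup, vram_reduction):
    """Projected (VRAM in GB, generation time in seconds) after the quoted gains."""
    return baseline_vram_gb * (1 - vram_reduction), baseline_secs / speedup

# Hypothetical workflow: 20 GB of VRAM and 100 s per generation at FP16.
print("NVFP4:", with_savings(20, 100, 2.5, 0.60))  # (8.0, 40.0)
print("FP8:  ", with_savings(20, 100, 1.7, 0.40))
```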
<p><span style="font-weight: 400;">At CES in January, NVIDIA announced several models released with NVFP4 and FP8 support. And now more NVFP4 and FP8 models are available — </span><a target="_blank" href="https://huggingface.co/Lightricks/LTX-2.3-fp8"><span style="font-weight: 400;">LTX-2.3,</span><span style="font-weight: 400;"> with </span></a><span style="font-weight: 400;">NVFP4 support coming soon, </span><a target="_blank" href="https://huggingface.co/black-forest-labs/FLUX.2-klein-4b-nvfp4/tree/main"><span style="font-weight: 400;">FLUX.2 Klein 4B</span></a><span style="font-weight: 400;">, and </span><a target="_blank" href="https://huggingface.co/black-forest-labs/FLUX.2-klein-9b-nvfp4/tree/main"><span style="font-weight: 400;">FLUX.2 Klein 9B</span></a><span style="font-weight: 400;"> —</span> <span style="font-weight: 400;">directly in ComfyUI. To get started, download the NVFP4 and FP8 checkpoints directly from Hugging Face, load the default workflows in ComfyUI via the Template Browser and replace the default model checkpoint with the newly downloaded checkpoint. </span></p>
<p><span style="font-weight: 400;">App View mode is available today. Learn more on </span><a target="_blank" href="https://www.comfy.org/"><span style="font-weight: 400;">ComfyUI</span></a><span style="font-weight: 400;">. </span></p>
<h2><b>Faster 4K Video Generation </b></h2>
<p><span style="font-weight: 400;">Getting high-quality video outputs often means juggling three constraints: speed, VRAM and control. While many artists ultimately want 4K quality, most prefer to generate smaller, faster previews first, and then upscale them. Today’s upscalers take minutes to upscale a 10‑second clip to 4K resolution.</span></p>
<p><span style="font-weight: 400;">Now, users can quickly upscale generated video to 4K with NVIDIA RTX Video Super Resolution, available as a node for ComfyUI. RTX Video can be accessed as a <a target="_blank" href="https://github.com/Comfy-Org/Nvidia_RTX_Nodes_ComfyUI">standalone node</a> for building video workflows from scratch.</span></p>
<p><span style="font-weight: 400;">For AI developers, NVIDIA released a free Python package available via </span><a target="_blank" href="https://pypi.org/project/nvidia-vfx/"><span style="font-weight: 400;">the PyPI repository</span></a><span style="font-weight: 400;">, along with </span><a target="_blank" href="https://github.com/NVIDIA-Maxine/nvidia-vfx-python-samples"><span style="font-weight: 400;">sample code on GitHub</span></a><span style="font-weight: 400;"> and a </span><a target="_blank" href="https://docs.nvidia.com/maxine/vfx-python/latest/index.html"><span style="font-weight: 400;">VFX Python bindings guide</span></a><span style="font-weight: 400;">, to get started quickly. The package provides programmatic access to the same AI upscaling technology that powers RTX Video, running directly on RTX GPU Tensor Cores to deliver 4K upscaling 30x faster than other popular local upscalers, and at a fraction of the VRAM cost. The package is powered by the </span><a target="_blank" href="https://catalog.ngc.nvidia.com/orgs/nvidia/teams/maxine/collections/maxine_vfx_sdk"><span style="font-weight: 400;">NVIDIA Video Effects software development kit</span></a><span style="font-weight: 400;">.</span></p>
<figure id="attachment_90700" aria-describedby="caption-attachment-90700" style="width: 1200px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2026/03/RTX-AI-generative-AI-model-performance.png"><img loading="lazy" decoding="async" class="size-large wp-image-90700" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/RTX-AI-generative-AI-model-performance-1680x632.png" alt="" width="1200" height="451" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/RTX-AI-generative-AI-model-performance-1680x632.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/RTX-AI-generative-AI-model-performance-960x361.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/RTX-AI-generative-AI-model-performance-1280x482.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/RTX-AI-generative-AI-model-performance-1536x578.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/RTX-AI-generative-AI-model-performance-630x237.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/RTX-AI-generative-AI-model-performance.png 1879w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></a><figcaption id="caption-attachment-90700" class="wp-caption-text"><em>Generative AI model performance for LTX-2 and FLUX.2 Klein 9B on an NVIDIA RTX 5090 GPU. Performance testing done on RTX 5090. LTX-2: 512&#215;768 resolution, 100 frames, 20 steps. FLUX.2 Klein 9B (base): 1024&#215;1024 resolution, 20 steps.</em></figcaption></figure>
<p><span style="font-weight: 400;">Ready to get started with ComfyUI? Check out the latest NVIDIA Studio Sessions tutorial hosted by visual effects artist </span><a target="_blank" href="https://www.youtube.com/@MaxNovakTutorials"><span style="font-weight: 400;">Max Novak</span></a><span style="font-weight: 400;"> for a guided walkthrough:</span></p>
<p><iframe loading="lazy" title="Get Started in ComfyUI w/ Max Novak: Beginner Tutorial/Guide" width="1200" height="675" src="https://www.youtube.com/embed/tuXveeHJpDA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>#ICYMI: The Latest Updates for RTX AI PCs at GDC</b></h2>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f389.png" alt="🎉" class="wp-smiley" style="height: 1em; max-height: 1em;" />Join NVIDIA at </span><a target="_blank" href="https://www.nvidia.com/gtc/"><span style="font-weight: 400;">GTC</span></a><span style="font-weight: 400;">, March 16-19 in San Jose! Check out </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-dlit81948/"><span style="font-weight: 400;">“Create Generative AI Workflow for Design and Visualization in ComfyUI”</span></a><span style="font-weight: 400;"> on March 17 for a training session led by NVIDIA 3D workflow specialists focused on building RTX-accelerated generative workflows for images, video, 3D and PBR materials. </span><a target="_blank" href="https://www.nvidia.com/gtc/pricing/"><span style="font-weight: 400;">Register</span></a><span style="font-weight: 400;"> today and explore the </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/"><span style="font-weight: 400;">session catalog</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4a1.png" alt="💡" class="wp-smiley" style="height: 1em; max-height: 1em;" /></span><a target="_blank" href="https://youtu.be/nL3Efak3FnI?si=IMzV9QcbcJZeYnOq"><span style="font-weight: 400;">LTX Desktop</span></a><span style="font-weight: 400;"> is a fully local, open-source video editor running directly on the LTX engine, optimized for NVIDIA GPUs and compatible hardware.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f9a5.png" alt="🦥" class="wp-smiley" style="height: 1em; max-height: 1em;" /> </span><a target="_blank" href="https://lmstudio.ai/link"><span style="font-weight: 400;">LM Link</span></a><span style="font-weight: 400;"> connects separate devices running LM Studio, allowing models to run on remote machines as if they were local. It’s ideal for users wanting to run an agent on their laptop while still accessing free and private AI, powered by their DGX Spark or RTX desktop. Learn how to run </span><a target="_blank" href="https://build.nvidia.com/spark/lm-studio"><span style="font-weight: 400;">LM Studio on DGX Spark</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f3ae.png" alt="🎮" class="wp-smiley" style="height: 1em; max-height: 1em;" />On Tuesday, March 31, as part of the next opt-in NVIDIA App beta, overrides for NVIDIA DLSS 4.5 Dynamic Multi Frame Generation and DLSS 4.5 Multi Frame Generation 6x Mode will be released for GeForce RTX 50 Series owners. Learn about </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/news/gdc-2026-nvidia-geforce-rtx-announcements"><span style="font-weight: 400;">NVIDIA news at GDC</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f916.png" alt="🤖" class="wp-smiley" style="height: 1em; max-height: 1em;" />Next month, </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/news/rtx-remix-advanced-particle-vfx"><span style="font-weight: 400;">a new NVIDIA RTX Remix update will introduce Advanced Particle VFX</span></a><span style="font-weight: 400;">, enabling modders to create a wide array of particle effects that further improve image quality, detail and immersion.</span></p>
<p><b><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f984.png" alt="🦄" class="wp-smiley" style="height: 1em; max-height: 1em;" /></b><span style="font-weight: 400;">Topaz Labs</span> <span style="font-weight: 400;">has collaborated with NVIDIA to optimize NeuroStream, a proprietary VRAM optimization that allows complex AI models to run on consumer hardware, for NVIDIA GPUs.</span></p>
<p><b><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4c3.png" alt="📃" class="wp-smiley" style="height: 1em; max-height: 1em;" /></b><span style="font-weight: 400;">Microsoft has introduced support for VoiceMod, one of the first apps to enable Windows ML for GPU inference, significantly improving performance and voice quality compared with CPUs. </span></p>
<p><i><span style="font-weight: 400;">Plug in to NVIDIA AI PC on </span></i><a target="_blank" href="https://www.facebook.com/NVIDIA.AI.PC/"><i><span style="font-weight: 400;">Facebook</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://www.instagram.com/nvidia.ai.pc/"><i><span style="font-weight: 400;">Instagram</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://www.tiktok.com/@nvidia_ai_pc"><i><span style="font-weight: 400;">TikTok</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://x.com/NVIDIA_AI_PC"><i><span style="font-weight: 400;">X</span></i></a><i><span style="font-weight: 400;"> — and stay informed by subscribing to the </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-on-rtx/?modal=subscribe-ai"><i><span style="font-weight: 400;">RTX AI PC newsletter</span></i></a><i><span style="font-weight: 400;">. Follow NVIDIA Workstation on </span></i><a target="_blank" href="https://www.linkedin.com/showcase/3761136/"><i><span style="font-weight: 400;">LinkedIn</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://x.com/NVIDIAworkstatn"><i><span style="font-weight: 400;">X</span></i></a><i><span style="font-weight: 400;">. </span></i></p>
<p><i><span style="font-weight: 400;">See </span></i><a target="_blank" href="https://www.nvidia.com/en-eu/about-nvidia/terms-of-service/"><i><span style="font-weight: 400;">notice</span></i></a><i><span style="font-weight: 400;"> regarding software product information.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtc-2026-nv-blog-1280x680-1.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtc-2026-nv-blog-1280x680-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA and ComfyUI Streamline Local AI Video Generation for Game Developers and Creators at GDC]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA and Thinking Machines Lab Announce Long-Term Gigawatt-Scale Strategic Partnership</title>
		<link>https://blogs.nvidia.com/blog/nvidia-thinking-machines-lab/</link>
		
		<dc:creator><![CDATA[NVIDIA Newsroom]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 13:00:11 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90692</guid>

					<description><![CDATA[NVIDIA and Thinking Machines Lab announced today a multiyear strategic partnership to deploy at least one gigawatt of next-generation NVIDIA Vera Rubin systems to support Thinking Machines’ frontier model training and platforms delivering customizable AI at scale. Deployment on the NVIDIA Vera Rubin platform is targeted for early next year. The partnership also includes an [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p class="Body">NVIDIA and Thinking Machines Lab announced today a multiyear strategic partnership to deploy at least one gigawatt of next-generation NVIDIA Vera Rubin systems to support Thinking Machines’ frontier model training and platforms delivering customizable AI at scale. Deployment on the NVIDIA Vera Rubin platform is targeted for early next year. The partnership also includes an effort to design training and serving systems for NVIDIA architectures and broaden access to frontier AI and open models for enterprises, research institutions and the scientific community.</p>
<p class="Body">NVIDIA has also made a significant investment in Thinking Machines Lab to support the company’s long-term growth.</p>
<p class="Body">“AI is the most powerful knowledge discovery instrument in human history,” said Jensen Huang, founder and CEO of NVIDIA. “Thinking Machines has brought together a world-class team to advance the frontier of AI. We are thrilled to partner with Thinking Machines to realize their exciting vision for the future of AI.”</p>
<p class="Body">“NVIDIA’s technology is the foundation on which the entire field is built,” said Mira Murati, cofounder and CEO of Thinking Machines. “This partnership accelerates our capacity to build AI that people can shape and make their own, as it shapes human potential in turn.”</p>
<p class="Body">Building powerful AI systems that are understandable, customizable and collaborative demands advances in research, design and infrastructure at scale. This partnership provides that foundation, with the shared aim of ensuring that the most transformative technology of our time expands human capability.</p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/jensen-huang-mira-murati-nvidia-debs-3127-2-scaled.jpg" type="image/jpeg" width="2048" height="1554">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/jensen-huang-mira-murati-nvidia-debs-3127-2-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA and Thinking Machines Lab Announce Long-Term Gigawatt-Scale Strategic Partnership]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>AI Is a 5-Layer Cake</title>
		<link>https://blogs.nvidia.com/blog/ai-5-layer-cake/</link>
		
		<dc:creator><![CDATA[Jensen Huang]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 10:00:03 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[AI Factory]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=89841</guid>

					<description><![CDATA[AI is one of the most powerful forces shaping the world today. It is not a clever app or a single model; it is essential infrastructure, like electricity and the internet.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/02/5-layer-cake-hero-tablet-static.jpg" type="image/jpeg" width="1280" height="720">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/02/5-layer-cake-hero-tablet-static-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[AI Is a 5-Layer Cake]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>ABB Robotics Taps NVIDIA Omniverse to Deliver Industrial‑Grade Physical AI at Scale</title>
		<link>https://blogs.nvidia.com/blog/abb-robotics-omniverse/</link>
		
		<dc:creator><![CDATA[Scott Martin]]></dc:creator>
		<pubDate>Mon, 09 Mar 2026 15:00:34 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Industrial and Manufacturing]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Physical AI]]></category>
		<category><![CDATA[Synthetic Data Generation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90573</guid>

					<description><![CDATA[ABB Robotics and NVIDIA today announced a breakthrough partnership that brings industrial‑grade physical AI to the factory floor.  By integrating NVIDIA Omniverse libraries directly into its RobotStudio programming and simulation suite, ABB Robotics will now deliver physically accurate simulation capabilities in its platform, dramatically cutting engineering time, reducing deployment costs by up to 40% and [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">ABB Robotics and NVIDIA today announced a <a target="_blank" href="https://www.abb.com/global/en/news/134030">breakthrough partnership</a> that brings industrial‑grade </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/generative-physical-ai/"><span style="font-weight: 400;">physical AI</span></a><span style="font-weight: 400;"> to the factory floor. </span></p>
<p><span style="font-weight: 400;">By integrating </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse libraries</span></a> directly into its RobotStudio programming and simulation suite, ABB Robotics will now deliver physically accurate simulation capabilities in its platform, dramatically cutting engineering time, reducing deployment costs by up to 40% and accelerating time to market by as much as 50%.</p>
<p><span style="font-weight: 400;">The new product — called RobotStudio HyperReality — will be available in the second half of 2026 and is already drawing strong interest from ABB Robotics’ global customer base. Early pilots include Foxconn, the world’s largest electronics manufacturer, and Workr, a U.S.‑based robotic workforce company bringing advanced automation to small and medium-size manufacturers.</span></p>
<p><span style="font-weight: 400;">The partnership marks a major milestone for the industrial sector, which has long sought a reliable way to bring AI-powered intelligence to <a target="_blank" href="https://www.nvidia.com/en-us/industries/robotics/">robots</a>, bridging the sim‑to‑real gap that separates virtual robot training from real‑world performance.  </span></p>
<p><span style="font-weight: 400;">“Combining RobotStudio with the physically accurate simulation power of NVIDIA Omniverse libraries, we have closed technology&#8217;s long-standing &#8216;sim-to-real’ gap – a huge milestone to deploying physical AI with industrial-grade precision, for real-world customer applications,” said Marc Segura, president of ABB Robotics.</span></p>
<h2><b>A Breakthrough in Physical AI for Industry</b></h2>
<p><span style="font-weight: 400;">ABB’s integration of NVIDIA Omniverse libraries into RobotStudio brings physically accurate, photorealistic simulation directly into the tool used by more than 60,000 robotics engineers worldwide. The result is a unified workflow where manufacturers can design, program, test and validate entire automation cells before deploying a single robot.</span></p>
<p>RobotStudio HyperReality exports a fully parameterized robot station — robots, sensors, lighting, kinematics and parts — as a USD file into NVIDIA Omniverse. There, ABB Robotics’ virtual controller runs the same firmware as the physical robot, enabling 99% correlation between simulation and real‑world behavior. Synthetic images generated in Omniverse feed directly into AI training pipelines, allowing vision models to be trained entirely in simulation.</p>
<p><span style="font-weight: 400;">This combination of physics‑rich simulation, </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/synthetic-data-physical-ai/"><span style="font-weight: 400;">synthetic data generation</span></a><span style="font-weight: 400;"> and ABB’s Absolute Accuracy technology — which reduces positioning errors from 8-15 mm to around 0.5 mm — delivers unmatched precision for industrial‑grade applications.</span></p>
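<p><em>The export step described above — a parameterized robot station written out as a USD file — can be pictured with a toy sketch. The prim names and attributes below (Robot_IRB1300, VisionSensor) are hypothetical placeholders, not actual RobotStudio HyperReality output; because USD’s ASCII form (.usda) is plain text, a minimal station layer can be written with the Python standard library alone.</em></p>

```python
# Illustrative sketch only: a minimal OpenUSD ASCII (.usda) layer describing
# a robot station. Real exports carry far more (sensors, lighting, kinematics,
# materials); the prim names here are invented for the example.
import os
import tempfile

USDA = """#usda 1.0
(
    defaultPrim = "RobotStation"
    metersPerUnit = 1.0
    upAxis = "Z"
)

def Xform "RobotStation"
{
    def Xform "Robot_IRB1300"
    {
        double3 xformOp:translate = (0.0, 0.0, 0.0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }

    def Camera "VisionSensor"
    {
        double3 xformOp:translate = (1.2, 0.0, 1.5)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
"""

def write_station(path: str) -> str:
    # Write the ASCII USD layer; any OpenUSD-aware tool can then open it.
    with open(path, "w") as f:
        f.write(USDA)
    return path

path = write_station(os.path.join(tempfile.gettempdir(), "robot_station.usda"))
print(open(path).read().splitlines()[0])  # prints: #usda 1.0
```

<p><em>Omniverse and other OpenUSD tools consume layers of exactly this kind; in the production workflow the layer additionally carries the data that lets the virtual controller drive the same station it will meet on the factory floor.</em></p>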
<h2><b>Closing the Sim‑to‑Real Gap</b></h2>
<p>For decades, manufacturers have struggled with the limitations of simulation: lighting that doesn’t match reality, materials that behave differently on the factory floor and models that fail when exposed to real‑world variation. ABB Robotics’ integration of NVIDIA Omniverse directly addresses these challenges.</p>
<p>“The industrial sector needs high‑fidelity simulation to bridge the gap between virtual training and real‑world deployment of AI‑driven robotics at scale,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Integrating NVIDIA Omniverse libraries into RobotStudio brings advanced simulation and accelerated computing to ABB’s virtual controller technology, accelerating how thousands of manufacturers bring complex products to market.”</p>
<p>With RobotStudio HyperReality, manufacturers can design and validate production lines virtually, cutting setup and commissioning times by up to 80% and eliminating the need for physical prototypes. The result is faster product ramps, lower cost and greater reliability — especially for industries like consumer electronics where precision is paramount.</p>
<p>ABB Robotics is also exploring the integration of the <a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/"><span style="font-weight: 400;">NVIDIA Jetson edge AI platform</span></a> into its Omnicore controller to enable real‑time inference across its robot portfolio.</p>
<h2><b>Real‑World Pilots: Foxconn and Workr</b></h2>
<p><span style="font-weight: 400;">Several customers are already testing RobotStudio HyperReality ahead of its full release.</span></p>
<p>Foxconn is piloting the technology in consumer electronics assembly, where delicate metal components and frequent product variations make automation challenging. Using HyperReality, Foxconn trains robots virtually with synthetic data, achieving unparalleled accuracy when deployed on the production line. The company expects to reduce setup time and eliminate costly physical testing.</p>
<p><span style="font-weight: 400;">Workr, a California‑based robotic workforce company, is integrating its own physical AI platform, WorkrCore, with ABB industrial robots </span><span style="font-weight: 400;">trained with synthetic data generated using NVIDIA Omniverse libraries</span><span style="font-weight: 400;"> to deploy advanced automation to small and medium-size manufacturers. At <a target="_blank" href="https://www.nvidia.com/gtc/">NVIDIA </a></span><span style="font-weight: 400;"><a target="_blank" href="https://www.nvidia.com/gtc/">GTC 2026</a> in San Jose</span><span style="font-weight: 400;">, Workr plans to demonstrate AI‑powered robotic systems that </span><span style="font-weight: 400;">can onboard new parts in minutes </span><span style="font-weight: 400;">and deploy without programming expertise.</span></p>
<p><i><span style="font-weight: 400;">Don’t miss NVIDIA founder and CEO </span></i><a target="_blank" href="https://www.nvidia.com/gtc/keynote/?regcode=pa-srch-goog-858585&amp;ncid=pa-srch-goog-858585"><i><span style="font-weight: 400;">Jensen Huang’s GTC keynote</span></i></a><i><span style="font-weight: 400;"> at the SAP Center on March 16 at 11:00 a.m. PT, where he’ll share the latest breakthroughs in AI and accelerated computing.</span></i></p>
<p><i><span style="font-weight: 400;">Explore </span></i><a target="_blank" href="https://www.nvidia.com/gtc/sessions/robotics/"><i><span style="font-weight: 400;">GTC robotics sessions</span></i></a><i><span style="font-weight: 400;"> and catch ABB Robotics on the ‘</span></i><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-s81573/"><i><span style="font-weight: 400;">Building the Future of Manufacturing</span></i></a><i><span style="font-weight: 400;">&#8216;</span></i><i><span style="font-weight: 400;"> panel as they share how these technologies are shaping the future of intelligent automation.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/2.24-scaled.jpg" type="image/jpeg" width="2048" height="1152">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/2.24-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[ABB Robotics Taps NVIDIA Omniverse to Deliver Industrial‑Grade Physical AI at Scale]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>How AI Is Driving Revenue, Cutting Costs and Boosting Productivity for Every Industry in 2026</title>
		<link>https://blogs.nvidia.com/blog/state-of-ai-report-2026/</link>
		
		<dc:creator><![CDATA[Dan Rowinski]]></dc:creator>
		<pubDate>Mon, 09 Mar 2026 15:00:23 +0000</pubDate>
				<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Financial Services]]></category>
		<category><![CDATA[Industrial and Manufacturing]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Retail]]></category>
		<category><![CDATA[Telecommunications]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90586</guid>

					<description><![CDATA[AI is everywhere and accelerating everything — becoming essential infrastructure to create the intelligence that will advance every industry. That’s why companies are increasingly focusing on the technology’s return on investment (ROI), as well as how to best apply AI to their own use cases. NVIDIA’s annual “State of AI” reports show how AI is [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">AI is everywhere and accelerating everything — becoming essential infrastructure to create the intelligence that will advance every industry.</span></p>
<p><span style="font-weight: 400;">That’s why companies are increasingly focusing on the technology’s return on investment (ROI), as well as how to best apply AI to their own use cases.</span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/industries/#state-of-ai-survey"><span style="font-weight: 400;">NVIDIA’s annual “State of AI” reports</span></a> <span style="font-weight: 400;">show how AI is being adopted across industries, what it’s being used for and how companies are achieving ROI, as well as their challenges and goals with the technology. </span></p>
<p><span style="font-weight: 400;">This year’s reports garnered over 3,200 responses from around the world, providing a pulse check on the state of AI in financial services, retail and consumer packaged goods (CPG), healthcare and life sciences, telecommunications and manufacturing.</span></p>
<p><span style="font-weight: 400;">One takeaway is clear: The state of AI is strong. AI adoption is continuing to rise. Companies are building and deploying specialized AI programs with open source tools to tackle their specific challenges. Across every industry, AI is helping increase annual revenue and drive down annual costs while boosting productivity. </span></p>
<p><span style="font-weight: 400;">Read about the broad themes from this year’s reports:</span></p>
<ul>
<li><a href="https://blogs.nvidia.com/blog/state-of-ai-report-2026/#enterprise-adoption" target="_self"><span style="font-weight: 400;">Growing enterprise AI adoption</span></a></li>
<li><a href="https://blogs.nvidia.com/blog/state-of-ai-report-2026/#productivity" target="_self"><span style="font-weight: 400;">AI drives productivity gains across every industry</span></a></li>
<li><a href="https://blogs.nvidia.com/blog/state-of-ai-report-2026/#revenue-cost" target="_self"><span style="font-weight: 400;">AI increases annual revenue and drives down costs </span></a></li>
<li><a href="https://blogs.nvidia.com/blog/state-of-ai-report-2026/#agentic-ai" target="_self"><span style="font-weight: 400;">The rise of agentic AI</span></a></li>
<li><a href="https://blogs.nvidia.com/blog/state-of-ai-report-2026/#open-source" target="_self"><span style="font-weight: 400;">Open source drives AI strategy</span></a></li>
<li><a href="https://blogs.nvidia.com/blog/state-of-ai-report-2026/#investment" target="_self"><span style="font-weight: 400;">AI success leads to growing investment</span></a></li>
<li><a href="https://blogs.nvidia.com/blog/state-of-ai-report-2026/#challenge" target="_self"><span style="font-weight: 400;">The biggest challenges to AI adoption: lack of AI experts</span></a></li>
</ul>
<h2 id="enterprise-adoption"><b>Enterprise AI Adoption Matures </b></h2>
<p><span style="font-weight: 400;">Enterprise AI is continuing to scale.</span></p>
<p><span style="font-weight: 400;">Companies are moving from AI pilots and assessment to scaled deployment. In nearly every industry survey, the percentage of respondents who said their organizations are actively using AI increased, while the percentage of respondents who said they’re in the assessment phase declined.</span></p>
<p><span style="font-weight: 400;">Overall, 64% of respondents to the surveys said their organizations are actively using AI in their operations. A little over a quarter (28%) said they’re still in the assessment phase, while 8% said they’re not using AI and have no plans to start.</span></p>
<p><span style="font-weight: 400;">North America leads in AI adoption, with 70% actively using the technology, 27% assessing AI projects and just 3% saying they’re not using AI. Nearly two-thirds (65%) of respondents in the EMEA region report they’re actively using AI. AI adoption in the APAC region registers at 63%, with a higher percentage (15%) saying they’re not using AI.</span></p>
<p><span style="font-weight: 400;">A distinct throughline was clear across all surveys: Larger companies (with more than 1,000 employees) demonstrate broader adoption, deploy more use cases and report greater ROI. More than three-quarters (76%) of respondents from large companies report active AI usage, with just 2% saying they don’t use AI at all. These trends can be attributed to the fact that large companies have more capital to invest in AI infrastructure, data scientists and experts, leading executives to drive projects from pilot to production on highly specific and impactful use cases.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90593 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-vertical-1680x627.png" alt="" width="1680" height="627" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-vertical-1680x627.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-vertical-960x358.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-vertical-1280x478.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-vertical-1536x573.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-vertical-630x235.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-vertical.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /><img loading="lazy" decoding="async" class="alignnone wp-image-90620 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-region.png" alt="" width="1999" height="655" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-region.png 1999w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-region-960x315.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-region-1680x550.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-region-1280x419.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-region-1536x503.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-region-630x206.png 630w" sizes="auto, (max-width: 1999px) 100vw, 1999px" /><img loading="lazy" decoding="async" class="alignnone wp-image-90629 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-company-size-1680x475.png" alt="" width="1680" height="475" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-company-size-1680x475.png 1680w, 
https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-company-size-960x271.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-company-size-1280x362.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-company-size-1536x434.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-company-size-630x178.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-usage-by-company-size.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<p><span style="font-weight: 400;">The financial services industry churns massive amounts of text, numbers, documents and analysis. Nasdaq, one of the world’s premier stock exchanges and leading financial technology platforms, has </span><a target="_blank" href="https://www.nvidia.com/en-us/case-studies/nasdaq/"><span style="font-weight: 400;">built an AI platform</span></a><span style="font-weight: 400;"> to optimize its internal operations and enhance its external products, helping</span><span style="font-weight: 400;"> improve functionality and user experience while streamlining internal work processes.</span></p>
<p><span style="font-weight: 400;">“At Nasdaq, we are a technology platform company, and AI has the ability for us to unite all the different businesses and products,” said Michael O’Rourke, senior vice president and head of AI and emerging technology at Nasdaq. “AI will help bring together data from all our businesses and technologies, and help us build better products and services.”</span></p>
<p><span style="font-weight: 400;">Read more in this year’s </span><a target="_blank" href="https://www.nvidia.com/en-us/industries/finance/ai-financial-services-report/"><span style="font-weight: 400;">State of AI in Financial Services</span></a><span style="font-weight: 400;"> report.</span></p>
<h2 id="productivity"><strong>AI Is Boosting Productivity</strong></h2>
<p><span style="font-weight: 400;">This year’s surveys revealed that the top three AI goals are creating operational efficiencies (34%), improving employee productivity (33%) and opening new business opportunities and revenue streams (23%). </span></p>
<p><span style="font-weight: 400;">More than half of respondents (53%) said improved employee productivity was one of the biggest impacts AI had on business operations, from speeding financial market analysis to boosting efficiency on factory floors with digital twins.</span></p>
<p><span style="font-weight: 400;">For example, in this year’s </span><a target="_blank" href="https://www.nvidia.com/en-us/lp/industries/telecommunications/state-of-ai-in-telecom-survey-report/"><span style="font-weight: 400;">NVIDIA State of AI in Telecommunications</span></a><span style="font-weight: 400;"> report, 99% of respondents said AI helped improve employee productivity, with a quarter saying the technology provided a major or significant improvement.</span></p>
<p><span style="font-weight: 400;">Productivity increases have cascading effects throughout the business. For instance, 42% of overall respondents said AI created operational efficiencies, and 34% said the technology helped open up new business and revenue opportunities.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90608 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-goals-by-vertical-1680x590.png" alt="" width="1680" height="590" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-goals-by-vertical-1680x590.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-goals-by-vertical-960x337.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-goals-by-vertical-1280x450.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-goals-by-vertical-1536x539.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-goals-by-vertical-630x221.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-goals-by-vertical.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /><img loading="lazy" decoding="async" class="alignnone wp-image-90614 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-business-operations-top-three-by-vertical-1680x590.png" alt="" width="1680" height="590" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-business-operations-top-three-by-vertical-1680x590.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-business-operations-top-three-by-vertical-960x337.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-business-operations-top-three-by-vertical-1280x450.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-business-operations-top-three-by-vertical-1536x539.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-business-operations-top-three-by-vertical-630x221.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-business-operations-top-three-by-vertical.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<p><span style="font-weight: 400;">Manufacturing is one such industry benefiting from AI integration. </span></p>
<p><span style="font-weight: 400;">Siemens, for example, is helping manufacturers </span><a target="_blank" href="https://www.nvidia.com/en-us/case-studies/siemens-accelerates-product-development-and-innovation-with-industrial-ai/"><span style="font-weight: 400;">realize productivity gains</span></a><span style="font-weight: 400;"> and optimize workflows by integrating AI into its tools and applications. </span></p>
<p><span style="font-weight: 400;">PepsiCo is an early adopter, working with Siemens and NVIDIA to convert selected U.S. manufacturing and warehouse facilities into high‑fidelity 3D digital twins that simulate end‑to‑end plant operations and supply chains. Using Siemens’ </span><a target="_blank" href="https://news.siemens.com/en-us/digital-twin-composer-ces-2026/"><span style="font-weight: 400;">Digital Twin Composer</span></a><span style="font-weight: 400;">, PepsiCo can recreate every machine, conveyor, pallet route and operator path with physics‑level accuracy, enabling AI agents to simulate and refine system changes and identify up to 90% of potential issues before any physical modifications occur. </span></p>
<p><span style="font-weight: 400;">This has already delivered a 20% increase in throughput on initial deployments, driven faster design cycles with nearly 100% design validation and produced 10-15% reductions in capital expenditure.</span></p>
<h2 id="revenue-cost"><strong>AI Is Boosting Revenue and Reducing Costs</strong></h2>
<p><span style="font-weight: 400;">A major concern surrounding AI adoption is whether investment in the technology actually results in revenue gains, decreased costs, increased productivity and enterprise efficiency.</span></p>
<p><span style="font-weight: 400;">According to survey respondents, the answer is a definitive yes.</span></p>
<p><span style="font-weight: 400;">Overall, 88% of respondents said AI has had an impact on increasing annual revenue, in some or all parts of the business. Nearly a third (30%) said the increase has been significant (greater than 10%), with 33% reporting a 5-10% increase and 25% saying the increase has been less than 5%. A little over 40% of executives (C-suite or vice president level) said they saw an annual revenue increase of more than 10%.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90623 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-vertical-1680x627.png" alt="" width="1680" height="627" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-vertical-1680x627.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-vertical-960x358.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-vertical-1280x478.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-vertical-1536x573.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-vertical-630x235.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-vertical.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90602 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-role-1680x475.png" alt="" width="1680" height="475" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-role-1680x475.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-role-960x271.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-role-1280x362.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-role-1536x434.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-role-630x178.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-increasing-annual-revenue-by-role.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<p><span style="font-weight: 400;">The story’s the same for AI’s role in reducing annual costs. Overall, 87% said AI helped reduce annual costs, with 25% saying the decrease was greater than 10%. Among the industry verticals, retail and CPG shone through with 37% saying costs had been reduced by more than 10%.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90599 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-vertical-1680x627.png" alt="" width="1680" height="627" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-vertical-1680x627.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-vertical-960x358.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-vertical-1280x478.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-vertical-1536x573.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-vertical-630x235.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-vertical.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90617 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-role-1680x475.png" alt="" width="1680" height="475" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-role-1680x475.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-role-960x271.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-role-1280x362.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-role-1536x434.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-role-630x178.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-impact-on-reducing-annual-cost-by-role.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<p><span style="font-weight: 400;">Fortune 100 retailer </span><a target="_blank" href="https://www.nvidia.com/en-us/case-studies/lowes/"><span style="font-weight: 400;">Lowe’s</span></a><span style="font-weight: 400;"> has built AI-powered, physically accurate digital twins of 1,750+ stores to speed operations. The company also used AI to streamline asset discovery and enable 3D model generation — transforming 2D product images into precise, high-quality 3D models within minutes — at a cost of less than $1 per model.</span></p>
<p><span style="font-weight: 400;">Read more in this year’s </span><a target="_blank" href="https://www.nvidia.com/en-us/lp/industries/state-of-ai-in-retail-and-cpg/"><span style="font-weight: 400;">State of AI in Retail and CPG</span></a><span style="font-weight: 400;"> report.</span></p>
<h2 id="agentic-ai"><strong>The Dawn for Agentic AI in the Enterprise </strong></h2>
<p><span style="font-weight: 400;">In 2025, companies began to experiment with AI agents — advanced AI systems designed to autonomously reason, plan and execute complex tasks based on high-level goals. The survey data, which was collected from August through December, captures the experimentation phase well, with 44% of companies either deploying or assessing agents last year. Enterprises have seen those experiments become full-fledged deployments in early 2026, touching everything from code development to legal and financial tasks, administrative support and more.</span></p>
<p><span style="font-weight: 400;">Telecommunications had the highest rate of adoption of agentic AI at 48%, followed by retail and CPG at 47%.</span></p>
<p><span style="font-weight: 400;">AI agents are coming into action in every industry, in enterprises large and small. For example, </span><a target="_blank" href="https://www.nvidia.com/en-us/case-studies/clinomic/"><span style="font-weight: 400;">Mona by Clinomic</span></a><span style="font-weight: 400;">, a medical onsite assistant that helps doctors and nurses manage patients in intensive-care units, consolidates, analyzes and visualizes patient data in real time. Mona has produced a 68% reduction in documentation errors, enhancing the accuracy of patient records and improving overall care quality while helping clinical-care professionals realize a 33% reduction in perceived workload.</span></p>
<p><span style="font-weight: 400;">Read more in this year’s </span><a target="_blank" href="https://www.nvidia.com/en-us/lp/industries/healthcare-life-sciences/ai-survey-report/"><span style="font-weight: 400;">State of AI in Healthcare and Life Sciences</span></a><span style="font-weight: 400;"> report.</span></p>
<p><span style="font-weight: 400;">Generative AI is proving to be a powerful, flexible tool in the hands of motivated enterprises, rivaling data and predictive analytics as the top AI workload. </span></p>
<p><span style="font-weight: 400;">Overall, 62% of respondents cited data analytics among their top AI workloads. Generative AI was a close second at 61%, and even surpassed data analytics in industries including healthcare and life sciences and telecommunications. In addition, generative AI was the top workload among North American and EMEA respondents.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90626 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-workloads-by-region-1680x551.png" alt="" width="1680" height="551" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-workloads-by-region-1680x551.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-workloads-by-region-960x315.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-workloads-by-region-1280x420.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-workloads-by-region-1536x504.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-workloads-by-region-630x207.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-three-workloads-by-region.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<h2 id="open-source"><strong>Open Source Drives AI Strategy</strong></h2>
<p><span style="font-weight: 400;">Companies are seeing significant ROI when deploying and scaling highly specific applications that target a distinct business opportunity. </span></p>
<p><span style="font-weight: 400;">The key to building highly specific and profitable AI applications is using open source and open weight models and software, which let organizations bring the right tools to bear on specific problems and fine-tune models with their own data for deployment in generative and agentic applications.</span></p>
<p><span style="font-weight: 400;">Overall, 85% of respondents said open source is moderately to extremely important to their organization’s AI strategy. That includes nearly half (48%) who said open source is very to extremely important.</span></p>
<p><span style="font-weight: 400;">Small companies, which are often resource-constrained and prefer to build solutions rather than pay for commercial off-the-shelf products, were especially keen on open source, with 58% saying open source is very to extremely important. More than half of executives (51%) throughout the surveys also cited the high importance of open source.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90632 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-company-size-1680x475.png" alt="" width="1680" height="475" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-company-size-1680x475.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-company-size-960x271.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-company-size-1280x362.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-company-size-1536x434.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-company-size-630x178.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-company-size.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90611 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-role-1680x475.png" alt="" width="1680" height="475" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-role-1680x475.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-role-960x271.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-role-1280x362.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-role-1536x434.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-role-630x178.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/open-source-importance-by-role.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<h2 id="investment"><strong>AI Success Leads to Increased Budgets — and More AI</strong></h2>
<p><span style="font-weight: 400;">Nearly all the respondents in this year’s surveys said their AI budgets will increase or at least stay the same in 2026.</span></p>
<p><span style="font-weight: 400;">Overall, 86% of respondents said their AI budget will increase this year. Another 12% said budgets will stay the same. And nearly 40% of respondents said budgets will increase by 10% or more. North American organizations are especially keen on increasing their AI budgets, with 48% stating their budgets would increase by 10% or more, as did 45% of executive-level respondents.</span></p>
<p><span style="font-weight: 400;">The surveys revealed that the financial services, retail and CPG, and healthcare and life sciences industries showed the strongest adoption and ROI results.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90605 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-region-1680x550.png" alt="" width="1680" height="550" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-region-1680x550.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-region-960x315.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-region-1280x419.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-region-1536x503.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-region-630x206.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-region.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /><img loading="lazy" decoding="async" class="alignnone wp-image-90596 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-vertical-1680x627.png" alt="" width="1680" height="627" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-vertical-1680x627.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-vertical-960x358.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-vertical-1280x478.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-vertical-1536x573.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-vertical-630x235.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/ai-budget-change-in-2026-by-vertical.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<p><span style="font-weight: 400;">The spending will go toward optimizing current AI solutions and finding more use cases across the enterprise. Overall, 42% of respondents said optimizing AI workflows and production cycles was the top spending priority in 2026, followed by 31% who said they’d spend on finding additional use cases. Another 31% said spending would go toward building and providing access to AI infrastructure, such as on-premises data centers or the cloud.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90635 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/top-3-ai-spend-priorities-by-role-1680x551.png" alt="" width="1680" height="551" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/top-3-ai-spend-priorities-by-role-1680x551.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-3-ai-spend-priorities-by-role-960x315.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-3-ai-spend-priorities-by-role-1280x420.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-3-ai-spend-priorities-by-role-1536x504.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-3-ai-spend-priorities-by-role-630x207.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/top-3-ai-spend-priorities-by-role.png 1999w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /></p>
<h2 id="challenge"><strong>The Challenge: Finding AI Experts</strong></h2>
<p><span style="font-weight: 400;">AI has strong momentum in the enterprise, but it’s still fairly early in the adoption cycle. Nearly a third of respondents in the surveys are still in the pilot and assessment stage. Challenges persist in workflows and operations, as well as in finding the right expertise to scale impactful solutions. </span></p>
<p><span style="font-weight: 400;">Organizations are also still grappling with their data. Building specialized AI applications requires enterprises to have a handle on their data to fine-tune models for their needs. Obtaining sufficient data, along with other data-related issues, was cited as the top challenge in the surveys, according to 48% of respondents.</span></p>
<p><span style="font-weight: 400;">A lack of AI experts and data scientists to put that data to work and scale AI projects from pilot to production was the next most prominent challenge, cited by 38% of respondents. </span></p>
<p><span style="font-weight: 400;">The benefits of AI can also be difficult to quantify. For example, improved productivity can be a subjective measurement for the everyday office worker. As such, 30% of respondents cited lack of clarity on AI’s ROI as one of their top challenges. </span></p>
<h2><strong>Methodology</strong></h2>
<p><span style="font-weight: 400;">Respondents to NVIDIA’s “State of AI” surveys comprise people who’ve opted in to receive communications from NVIDIA and have invested in or are curious about AI for their business. </span></p>
<p><span style="font-weight: 400;">Fielded from August to December 2025, the “State of AI” surveys garnered data from over 3,200 respondents across financial services, retail, healthcare, telecommunications and manufacturing. Respondents included a variety of roles, such as C-suite and vice presidents (27%), directors and managers (33%) and AI practitioners (40%). </span></p>
<p><span style="font-weight: 400;">Respondents represented organizations of varying scale, with 39% from large enterprises employing more than 1,000 people, 27% from mid-sized companies with 100-1,000 employees and 34% from smaller organizations with fewer than 100 employees.</span></p>
<p><span style="font-weight: 400;">Geographic distribution spanned four major regions: APAC (32%), North America (26%), EMEA (21%) and the rest of the world (20%).</span></p>
<p><span style="font-weight: 400;">The online surveys were sourced from NVIDIA’s distribution lists and through social media globally, and in China and Japan through a third-party agency.</span></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/industry-press-state-of-the-ai-report-overreaching-blog-1920x1080-2.png" type="image/png" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/industry-press-state-of-the-ai-report-overreaching-blog-1920x1080-2-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[How AI Is Driving Revenue, Cutting Costs and Boosting Productivity for Every Industry in 2026]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>March Into the Cloud With 15 New Games Coming to GeForce NOW</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-march-2026-games-list/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 05 Mar 2026 14:00:30 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90493</guid>

					<description><![CDATA[March is in full bloom, and that means a fresh wave of games heading to the cloud. 15 new titles are joining the GeForce NOW library this month. Leading the March lineup is Pearl Abyss’ Crimson Desert, an open‑world action‑adventure set in a war‑torn fantasy land, alongside plenty of other games to explore. Whether looking [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>March is in full bloom, and that means a fresh wave of games heading to the cloud. 15 new titles are joining the <a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/">GeForce NOW</a> library this month.</p>
<p>Leading the March lineup is Pearl Abyss’ <i>Crimson Desert</i>, an open‑world action‑adventure set in a war‑torn fantasy land, alongside plenty of other games to explore. Whether looking to shake off the winter blues or jump into some bracket‑worthy gaming action, there’s something for everyone in the cloud.</p>
<p>March into the cloud and see what’s new — and keep an eye on GFN Thursdays all month for more updates. This week kicks off the month with eight new games.</p>
<h2><b>March Gaming Madness</b></h2>
<figure id="attachment_90499" aria-describedby="caption-attachment-90499" style="width: 1680px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-90499" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-LORT-1680x840.jpg" alt="LORT on GeForce NOW" width="1680" height="840" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-LORT-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-LORT-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-LORT-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-LORT-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-LORT-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-LORT.jpg 2048w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /><figcaption id="caption-attachment-90499" class="wp-caption-text"><em>In LORT we trust.</em></figcaption></figure>
<p><i>LORT </i>dials chaos up to 11 and snaps the knob clean off. Big Distraction’s off‑the‑rails adventure hurls players into a world where every corner hides a bad idea waiting to become a great story, powered by wild weapons, weirder characters and “Did that just happen?” moments. Catch every glorious disaster in full fidelity and play it on GeForce NOW, available this week.</p>
<p>Here are this week’s eight new additions:</p>
<ul>
<li><i>Kingdom Come: Deliverance II</i> (New release on <a target="_blank" href="https://www.xbox.com/games/store/kingdom-come-deliverance-ii/9n4w31hsmvvd?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on Game Pass, March 3, GeForce RTX 5080-ready)</li>
<li><i>Legacy of Kain: Defiance Remastered </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/3747730?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, available March 3)</li>
<li><i>Esoteric Ebb </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2057760?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, available March 3)</li>
<li><i>The Legend of Khiimori </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2697000?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, available March 3, GeForce RTX 5080-ready)</li>
<li><i>Slay the Spire 2 </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2868840?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, available March 5)</li>
<li><i>Docked </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2487300?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, available March 5)</li>
<li><i>Death Stranding Director’s Cut </i>(<a target="_blank" href="https://store.steampowered.com/app/1850570?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, GeForce RTX 5080-ready)</li>
<li><i>LORT </i>(<a target="_blank" href="https://store.steampowered.com/app/2956680?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>)</li>
</ul>
<p>And look forward to the games coming throughout the rest of the month:</p>
<ul>
<li style="font-weight: 400"><i>John Carpenter’s Toxic Commando</i> (New release on <a target="_blank" href="https://store.steampowered.com/app/2157830?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, March 12, GeForce RTX 5080-ready)</li>
<li style="font-weight: 400"><i>Everwind </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2253100?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, March 17)</li>
<li style="font-weight: 400"><i>Crimson Desert </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/3321460?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, March 19)</li>
<li style="font-weight: 400"><i>Screamer </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2814990?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, March 23)</li>
<li style="font-weight: 400"><i>Nova Roma </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2426530?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a> and <a target="_blank" href="https://www.xbox.com/games/store/nova-roma-game-preview/9nbnfbq546dt?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on Game Pass, March 26)</li>
<li style="font-weight: 400"><i>Legacy of Kain: Ascendance </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/4233530?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, March 31)</li>
<li style="font-weight: 400"><i>Subliminal </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2300840?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, March 31)</li>
</ul>
<h2><b>February in the Books</b></h2>
<p>In addition to the 24 games announced last month, 18 more joined the <a target="_blank" href="http://play.geforcenow.com">GeForce NOW library</a>:</p>
<ul>
<li><i>Anno: Mutationem </i>(<a target="_blank" href="https://www.xbox.com/games/store/anno-mutationem/9pmhghxlgljm?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on Game Pass)</li>
<li><i>Blizzard Arcade Collection </i>(<a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi=">Ubisoft Connect</a>)</li>
<li><i>Capcom Beat ‘Em Up Bundle </i>(<a target="_blank" href="https://store.steampowered.com/app/885150?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>)</li>
<li><i>Capcom Fighting Collection </i>(<a target="_blank" href="https://store.steampowered.com/app/1685750?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>)</li>
<li><i>Diablo </i>(Ubisoft Connect)</li>
<li><i>Diablo + Hellfire Expansion  </i>(Ubisoft Connect)</li>
<li><i>Diablo II: Resurrected </i>(Ubisoft Connect)</li>
<li><i>Galactic Civilizations 3 </i>(<a target="_blank" href="https://www.xbox.com/games/store/galactic-civilizations-iii/9NKBH79N7HVW?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on the Microsoft Store)</li>
<li><i>KILLER INN </i>(<a target="_blank" href="https://store.steampowered.com/app/1598230?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>)</li>
<li><i>Mega Man 11 </i>(<a target="_blank" href="https://store.steampowered.com/app/742300?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>)</li>
<li><i>MotoGP22 </i>(<a target="_blank" href="https://www.xbox.com/games/store/motogp22---windows-edition/9NV1V176PTC2?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on the Microsoft Store)</li>
<li><i>Spellcasters Chronicles </i>(<a target="_blank" href="https://store.steampowered.com/app/2458470?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, GeForce RTX 5080-ready)</li>
<li><i>STALCRAFT: X </i>(<a target="_blank" href="https://www.epicgames.com/store/p/stalcraft-x-stalcraft-x-starter-edition-0b06d4?utm_source=nvidia&amp;utm_campaign=geforce_now">Epic Games Store</a>)</li>
<li><i>Street Fighter 30th Anniversary Collection </i>(<a target="_blank" href="https://store.steampowered.com/app/586200?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>)</li>
<li><i>Torment: Tides of Numenera </i>(<a target="_blank" href="https://store.steampowered.com/app/272270?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a> and <a target="_blank" href="https://www.xbox.com/games/store/torment-tides-of-numenera/bqnd4nqxffv2?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on Game Pass)</li>
<li><i>TCG Card Shop Simulator </i>(<a target="_blank" href="https://www.xbox.com/games/store/tcg-card-shop-simulator-game-preview/9p9gx5d8h9dw?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on Game Pass)</li>
<li><i>Trine Enchanted Edition </i>(<a target="_blank" href="https://www.epicgames.com/store/p/trine-enchanted-edition-d1d59d?utm_source=nvidia&amp;utm_campaign=geforce_now">Epic Games Store</a>)</li>
<li><i>Trine 2: Complete Story </i>(<a target="_blank" href="https://www.epicgames.com/store/p/trine-2-complete-story-d7aa74?utm_source=nvidia&amp;utm_campaign=geforce_now">Epic Games Store</a>)</li>
</ul>
<p>What are you planning to play this weekend? Let us know on <a target="_blank" href="https://www.twitter.com/nvidiagfn">X</a> or in the comments below, then see what Blue Thunder Gaming thinks of GeForce NOW.</p>
<p><iframe loading="lazy" title="NVIDIA GeForce NOW is now indistinguishable from native gameplay &#x1f92f;" width="563" height="1000" src="https://www.youtube.com/embed/3tOgTx6QC1k?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-3-5-nv-blog-1280x680-logo.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-3-5-nv-blog-1280x680-logo-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[March Into the Cloud With 15 New Games Coming to GeForce NOW]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA Advances Autonomous Networks With Agentic AI Blueprints and Telco Reasoning Models</title>
		<link>https://blogs.nvidia.com/blog/nvidia-agentic-ai-blueprints-telco-reasoning-models/</link>
		
		<dc:creator><![CDATA[Amogh Dendukuri]]></dc:creator>
		<pubDate>Sun, 01 Mar 2026 07:00:45 +0000</pubDate>
				<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[Nemotron]]></category>
		<category><![CDATA[NVIDIA Blueprints]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Telecommunications]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90401</guid>

					<description><![CDATA[Autonomous networks — intelligent, self-managing telecommunications operations — are moving from a future vision to a current priority for telecom operators. In the latest NVIDIA State of AI in Telecommunications report, network automation emerged as the top AI use case for investment and return on investment. Automation is different from autonomy. Beyond executing predefined workflows, [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><a target="_blank" href="https://www.nvidia.com/en-us/glossary/autonomous-networks/">Autonomous networks</a> — intelligent, self-managing telecommunications operations — are moving from a future vision to a current priority for telecom operators. In the latest NVIDIA <a target="_blank" href="https://www.nvidia.com/en-us/lp/industries/telecommunications/state-of-ai-in-telecom-survey-report/">State of AI in Telecommunications report</a>, network automation emerged as the top AI use case for investment and return on investment.</p>
<p>Automation is different from autonomy. Beyond executing predefined workflows, autonomous networks must understand operator intent, reason over tradeoffs and decide what actions to take. Reasoning models and AI agents fine-tuned on telecom data are key to enabling this shift.</p>
<p>For networks to become autonomous, operators need an end-to-end agentic system with key components such as telco network models and AI agents that communicate with one another and use network simulation tools to validate actions.</p>
<p>Ahead of Mobile World Congress Barcelona, NVIDIA unveiled an open NVIDIA Nemotron-based large telco model (LTM), a comprehensive guide for building reasoning agents for network operations, and new NVIDIA Blueprints for energy saving and network configuration with multi-agent orchestration to help operators advance toward autonomy.</p>
<p>And as part of GSMA’s new Open Telco AI initiative — launching tomorrow — NVIDIA is releasing the new open source LTM, implementation guide and agentic AI blueprints as open resources through GSMA, an organization for the mobile communications industry.</p>
<h2><b>Open Nemotron 3 Large Telco Model Brings Reasoning to Telecom </b></h2>
<p>For telcos to successfully operationalize generative and agentic AI across their operations, AI models must have the ability to understand the language of telecom and reason through complex workflows. NVIDIA has collaborated with <a target="_blank" href="https://adaptkey.ai/blog.html">AdaptKey AI</a> to release a new open source, 30-billion-parameter NVIDIA Nemotron LTM that operators around the world can use to build autonomous networks.</p>
<p>Built on the NVIDIA Nemotron 3 family of foundation models and fine-tuned by AdaptKey AI using open telecom datasets including industry standards and synthetic logs, the LTM is optimized to understand telecom industry terminology and reason through workflows such as fault isolation, remediation planning and change validation.</p>
<p>As an open model, the Nemotron LTM gives telcos full transparency into how it was trained and what data was used, enabling secure and fast on‑premises deployment within their networks, where they can build and run agents directly. It also lets telcos safely adapt and extend telecom‑tuned reasoning with their own network and operational data, so they can move toward autonomous operations without sacrificing control over data or security.</p>
<h2><b>Teaching AI Agents to Reason Like Network Engineers</b></h2>
<p>NVIDIA and Tech Mahindra have published an open source <a target="_blank" href="https://developer.nvidia.com/blog/building-telco-reasoning-models-for-autonomous-networks-with-nvidia-nemo/">guide</a> that shows telecom operators how to fine-tune domain-specific reasoning models and build agents that can safely execute network operations center (NOC) workflows.</p>
<p>The guide outlines a framework for teaching models to reason like NOC engineers: focus on high‑impact, high‑frequency incident categories, translate expert resolutions into step‑by‑step procedures and turn those into structured reasoning traces that capture each action, tool call, outcome and decision. These traces become the “thinking examples” the model learns from, so it understands not just what to do, but why a particular sequence of checks and fixes is safe and effective.</p>
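<p>A structured reasoning trace like the one described above can be sketched as a simple record of steps. The following is a minimal illustration in Python; the field names, tool names and incident category are invented for the example, not the guide’s actual schema:</p>

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    action: str      # what the NOC engineer (or agent) did
    tool_call: str   # tool or command invoked (hypothetical names)
    outcome: str     # observed result of the action
    decision: str    # why the next step follows from this outcome

@dataclass
class ReasoningTrace:
    incident_category: str
    steps: list = field(default_factory=list)

    def add(self, action: str, tool_call: str, outcome: str, decision: str) -> None:
        self.steps.append(ReasoningStep(action, tool_call, outcome, decision))

# Example: a fault-isolation trace for a high-frequency incident type
trace = ReasoningTrace("cell_outage")
trace.add("Check cell status", "get_cell_state",
          "cell reported down", "confirm the outage before acting")
trace.add("Restart baseband unit", "restart_bbu",
          "cell back up", "the restart resolved the fault, so close the incident")
```

<p>Traces in this shape capture not just the sequence of fixes but the rationale linking them, which is what a reasoning model learns from during fine-tuning.</p>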
<p>Using the NVIDIA NeMo-Skills pipeline, operators can fine-tune a reasoning model on these traces, laying the foundation for telco-specialized AI agents that can reason and solve problems like a network engineer.</p>
<h2><b>Maximizing Energy Efficiency With New Intent-Driven Energy Saving Blueprint</b></h2>
<p>Autonomous networks rely on closed‑loop operation: models that understand the network, agents that act on intent and simulation that feeds results back into the system to validate and refine decisions. The new <a target="_blank" href="https://build.nvidia.com/viavi/intent-driven-ran-energy-efficiency">NVIDIA Blueprint for intent-driven RAN energy efficiency</a> brings these pieces together, helping operators systematically reduce power consumption in 5G radio access networks (RAN) while maintaining quality of service.</p>
<p>The blueprint integrates network test and measurement leader <a target="_blank" href="https://blog.viavisolutions.com/2026/03/01/accelerating-ai-native-networks-with-nvidia-ai-ran-platforms/">VIAVI’s</a> TeraVM AI RAN Scenario Generator (AI RSG) platform to generate synthetic network data — including cell utilization, user throughput and other traffic patterns — and convert it into a simple, queryable format.</p>
<p>An energy planning agent then reasons over the synthetic data to generate energy-saving policies that can be simulated in AI RSG, allowing operators to safely validate energy-saving policies in a closed loop to meet their intent without changing live configurations or impacting subscribers.</p>
<h2><b>Telcos Put the NVIDIA Blueprint for Network Configuration to Work</b></h2>
<p>The <a href="https://blogs.nvidia.com/blog/ai-blueprint-telco-network-configuration/">NVIDIA Blueprint for telco network configuration</a> is being adopted by operators around the world.</p>
<p>Cassava Technologies is using the blueprint to build <a target="_blank" href="https://www.cassavatechnologies.com/launch-of-cassava-autonomous-network-agentic-ai-for-4g-and-5g-radio-access-networks/">Cassava Autonomous Network</a>, an agentic platform designed to optimize Africa’s diverse, multi-vendor mobile network environment. The platform implements three agents: one to monitor the network and recommend configuration changes, one to apply changes with documentation and governance, and one to assess the impact of changes made and safely roll them back if they have unintended effects.</p>
<p>NTT DATA is implementing the blueprint to bring intelligence to traffic regulation, helping the network manage surges when users reconnect after an outage, and is deploying it with a tier 1 operator in Japan.</p>
<p>An AI agent looks at real-time demand across the network and then decides when and how to admit new users on specific cells. As conditions stabilize, the agent adapts its decisions, turning what used to be manual configurations into a data-driven optimization cycle for more resilient mobile networks.</p>
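<p>The decision cycle described above can be sketched in a few lines. This is a toy admission policy with an invented threshold rule, not NTT DATA’s actual logic:</p>

```python
def admission_rate(load: float, capacity: float) -> float:
    """Decide what fraction of reconnecting users to admit on a cell,
    based on current load versus capacity (illustrative policy only)."""
    headroom = max(capacity - load, 0.0) / capacity
    # Admit freely when the cell has headroom; throttle as it fills up.
    return round(min(1.0, headroom * 2), 2)

# As conditions stabilize after an outage, load drops and admission opens up:
for load in (95.0, 70.0, 40.0):
    rate = admission_rate(load, capacity=100.0)
    print(f"load={load} -> admit {rate:.0%} of new users")
```

<p>A production agent would replace the static formula with decisions driven by real-time, network-wide demand data, re-evaluated continuously as cells recover.</p>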
<h2><b>Evolving Network Configuration With Multi-Agent Orchestration</b></h2>
<p>To help telcos design, observe and optimize complex agentic workflows across the RAN, NVIDIA and <a target="_blank" href="https://bubbleran.com/">BubbleRAN</a> are enhancing the <a target="_blank" href="https://build.nvidia.com/nvidia/telco-network-configuration">NVIDIA Blueprint for telco network configuration</a> with <a target="_blank" href="https://developer.nvidia.com/nemo-agent-toolkit">NVIDIA NeMo Agent Toolkit</a> (NAT) and <a target="_blank" href="https://github.com/bubbleran/bat">BubbleRAN Agentic Toolkit</a> (BAT), complementary frameworks for multi-agent orchestration.</p>
<p>BubbleRAN is integrating NAT and BAT into its <a target="_blank" href="https://bubbleran.com/news/opti-sphere/">Opti-Sphere</a> platform to manage network monitoring, configuration and validation agents more flexibly across containers and workloads, and connect them to tools that report network metrics and traffic status so they can continuously propose and validate configuration changes.</p>
<p>Telenor Group will be the first telco to adopt the blueprint with BubbleRAN to enhance its 5G network for Telenor Maritime, the group’s global connectivity provider at sea.</p>
<p><i>Learn more about the latest advancements in agentic AI for telecommunications at </i><a target="_blank" href="https://www.nvidia.com/en-us/events/mobile-world-congress/"><i>Mobile World Congress</i></a><i>, taking place in Barcelona from March 2-5. </i></p>
<p><i>See </i><a target="_blank" href="https://www.nvidia.com/en-eu/about-nvidia/terms-of-service/"><i>notice</i></a><i> regarding software product information.</i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Autonomous-Networks-Blog-Image.png" type="image/png" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Autonomous-Networks-Blog-Image-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA Advances Autonomous Networks With Agentic AI Blueprints and Telco Reasoning Models]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA and Partners Show That Software-Defined AI-RAN Is the Next Wireless Generation</title>
		<link>https://blogs.nvidia.com/blog/software-defined-ai-ran/</link>
		
		<dc:creator><![CDATA[Kanika Atri]]></dc:creator>
		<pubDate>Sun, 01 Mar 2026 07:00:03 +0000</pubDate>
				<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[AI-RAN]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Telecommunications]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90420</guid>

					<description><![CDATA[AI-RAN is moving from lab to field, showing that a software-defined approach is the only viable way to build future AI-native wireless networks. Ahead of Mobile World Congress (MWC), running March 2-5 in Barcelona, NVIDIA and Nokia announced new AI-RAN collaborations with top telecom operators across Europe, Asia and North America, powered by NVIDIA AI-RAN [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>AI-RAN is moving from lab to field, showing that a software-defined approach is the only viable way to build future AI-native wireless networks.</p>
<p>Ahead of Mobile World Congress (MWC), running March 2-5 in Barcelona, NVIDIA and Nokia announced new AI-RAN collaborations with top telecom operators across Europe, Asia and North America, powered by NVIDIA AI-RAN platforms. Industry pioneers T-Mobile U.S., SoftBank and Indosat Ooredoo Hutchison (IOH) passed implementation milestones, taking NVIDIA-powered AI-RAN outdoors and over the air.</p>
<p>New benchmarking results from partners like <a target="_blank" href="https://www.synaxg.com/synaxg-nvidia-aerial/">SynaXG</a> showed that AI-RAN running on NVIDIA platforms delivers high-speed, carrier-grade performance — meaning extreme reliability — across multiple 5G spectrum bands. And over 20 <a target="_blank" href="https://ai-ran.org/press-releases/mwc-2026-momentum">AI-RAN Alliance</a> demos built on NVIDIA platforms will be showcased at MWC, highlighting how AI is boosting 5G performance and efficiency, and unlocking new edge AI applications.</p>
<p>All of this represents momentum and convergence toward a common, software-defined foundation that will set the stage for <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-and-global-telecom-leaders-commit-to-build-6g-on-open-and-secure-ai-native-platforms">secure, open and AI-native 6G systems</a>.</p>
<h2><b>AI-RAN Goes From Lab to Live</b></h2>
<p>Top telecom operators and partners are using NVIDIA platforms to bring AI-RAN to commercial deployment.</p>
<p>T-Mobile U.S. demonstrated concurrent AI and RAN processing on the NVIDIA AI-RAN platform using Nokia’s CUDA-accelerated RAN software. In T-Mobile’s over-the-air field environment, Nokia’s AirScale massive multiple-input and multiple-output (MIMO) radio in the 3.7GHz band supported commercial devices running applications like video streaming, generative AI and AI-powered video captioning, alongside 5G.</p>
<p>SoftBank’s AITRAS live field trial achieved an industry first: 16-layer massive MIMO using fully software-defined 5G running on NVIDIA’s AI-RAN platform, marking an important technical milestone toward AI-RAN commercialization.</p>
<p>IOH has implemented software-defined 5G with Nokia’s vRAN software on NVIDIA AI-RAN platforms, moving from proof of concept to pre-commercial field validation. This milestone was showcased at MWC through Southeast Asia’s first AI-powered 5G call, where AI and network intelligence operated seamlessly to enable secure, real-time cross-border connectivity, including responsive remote control of a robotic dog over the live 5G network. This achievement demonstrates IOH’s readiness to scale AI-native network capabilities and bring intelligent connectivity to communities across Indonesia.</p>
<p>SynaXG demonstrated fully software-defined AI-RAN using <a target="_blank" href="https://developer.nvidia.com/industries/telecommunications/ai-aerial">NVIDIA AI Aerial</a> — a suite of accelerated computing platforms, software libraries and tools to build, train, simulate and deploy AI-native wireless networks — running 4G and 5G in both sub-6GHz (FR1) and millimeter wave (FR2) spectrum bands, alongside agentic AI workloads, on a single NVIDIA GH200 server. This marks the world’s first implementation of AI-RAN on FR2 bands.</p>
<p>SynaXG’s setup activated 20 component carriers with both a centralized unit (CU) and distributed unit (DU) on one platform, achieving a throughput of <a target="_blank" href="https://www.youtube.com/watch?v=V7Wm4GN80c0">36 Gbps</a> with under 10 milliseconds of latency. These breakthrough results highlight AI-RAN-based 5G performance as well as seamless orchestration between AI and RAN workloads.</p>
<h2><b>Tripled Pace of AI-RAN Innovation</b></h2>
<p>This year’s MWC will see triple the number of AI-RAN innovations over last year, with 26 out of 33 AI-RAN Alliance demos built using NVIDIA AI Aerial and a software-defined architecture.</p>
<p>Highlights include:</p>
<ul>
<li><a target="_blank" href="https://www.deepsig.ai/ai-native-open-ran-and-spectrum-awareness-integration-at-mwc26/">DeepSig</a> is reinventing how devices “speak” to networks by letting AI learn a smarter signal format at both ends of the link — the communications channel that connects two devices. An AI‑native air interface jointly learns how to best encode and decode signals using neural techniques at the device and base station, removing pilot overheads and adapting to site‑specific channels. Early results on NVIDIA platforms show up to about 2x higher throughput and better spectral and energy efficiency from the same spectrum.</li>
<li>SUTD, NVIDIA and partners will show how robots and autonomous vehicles can distribute their “thinking” across the device, edge and cloud — bringing split-inferencing from concept to implementation. By deciding in real time where each AI task runs, the demos show how AI-RAN can meet tight latency, privacy and coverage service-level agreements to scale physical AI and vision language models through the network edge.</li>
<li>zTouch Networks and partners built an AI-RAN orchestration blueprint showing how operators can safely share GPUs across AI and RAN workloads. By using <a target="_blank" href="https://www.nvidia.com/en-us/technologies/multi-instance-gpu/">NVIDIA Multi-Instance GPU</a> technology, the blueprint steers resources in real time, maximizing GPU utilization and improving energy management while ensuring RAN quality of service. This is a key step for making multi-tenant AI-RAN solutions ready for commercial use, so operators can turn GPU capacity into revenue.</li>
<li>Northeastern University and SoftBank will demonstrate an AI switching solution for NVIDIA AI Aerial that flips seamlessly, without data loss, between AI and classical algorithms for channel estimation. The solution selects the best processing approach in real time based on current conditions, improving stability and throughput while showing that AI can coexist with classical approaches.</li>
</ul>
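<p>To illustrate the last demo&#8217;s switching idea, here is a minimal Python sketch: a router that picks between a learned estimator and a classical one per processing interval. The SNR threshold and both estimators are illustrative stand-ins, not the Northeastern University and SoftBank implementation:</p>

```python
# Hypothetical runtime switch between an AI and a classical channel
# estimator; neither estimator here is a real model, just a stand-in.

def classical_estimate(pilots):
    # e.g., simple least-squares-style averaging over pilot symbols
    return sum(pilots) / len(pilots)

def neural_estimate(pilots):
    # stand-in for a learned model's inference output
    return sum(pilots) / len(pilots)

def estimate_channel(pilots, snr_db, snr_threshold=10.0):
    """Route to the AI estimator in favorable conditions; fall back to
    the classical algorithm otherwise. No samples are dropped either way."""
    if snr_db >= snr_threshold:
        return "ai", neural_estimate(pilots)
    return "classical", classical_estimate(pilots)

mode, _ = estimate_channel([0.9, 1.1, 1.0], snr_db=15.0)
print(mode)  # ai
```

<p>Because both paths consume the same pilot buffer, switching per interval is lossless: the choice only changes which estimator processes the data.</p>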
<p>“AI-RAN is emerging as a unifying architecture for future radio networks,” said Alex Choi, chair of the AI-RAN Alliance. “By aligning operators, vendors and researchers around software-defined, GPU-accelerated architectures, we are boosting innovation, validating new concepts quickly and building the foundation for AI-native 6G, now.”</p>
<p>As intelligence moves into the physical world, autonomous systems such as robots and cars depend on AI-RAN networks to see, sense, reason and act.</p>
<p><a target="_blank" href="https://www.capgemini.com/insights/expert-perspectives/ai-ran-in-action-turning-5g-infrastructure-into-an-intelligent-growth-platform/">Capgemini</a> is working within Project ULTIMO, a Horizon Europe-funded initiative, to show how AI-RAN can support large-scale autonomous mobility services across European cities. Autonomous shuttles equipped with the <a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/">NVIDIA Jetson Orin</a> module process sensor data locally, while select video and telemetry streams are sent over 5G to agentic AI applications on NVIDIA AI-RAN servers. These workloads handle scene understanding, incident and safety detection, and accessibility insights at scale, while mission-critical 5G gets priority access to GPU resources.</p>
<h2><b>A Growing Ecosystem</b></h2>
<p>A growing ecosystem of partners is forming around NVIDIA-powered AI-RAN platforms, enabling operators to choose from a range of deployment solutions. NVIDIA Aerial RAN Computer (ARC) platforms harness the NVIDIA Grace CPU and a variety of GPUs, providing a high-performance, energy-efficient compute foundation for AI-native RAN infrastructure.</p>
<ul>
<li><a target="_blank" href="https://www.qct.io/Press-Releases/index/PR/Server/QCT-Unveils-QuantaEdge-EGN77C-2U-New-AI-RAN-Server-Supporting-Nokia-anyRAN-and-NVIDIA-ARC-Pro">Quanta Cloud Technology (QCT)</a> is announcing commercial off-the-shelf AI-RAN products that support NVIDIA ARC platforms and Nokia software, giving operators standardized building blocks for AI-RAN.</li>
<li>Supermicro is extending support across the full NVIDIA AI-RAN portfolio, including NVIDIA ARC-Pro and <a target="_blank" href="https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/">NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs</a>, as well as ARC-Compact systems with Nokia software.</li>
<li><a target="_blank" href="https://www.wnc.com.tw/en/news/mwc2026/detail">WNC</a> has introduced a new AI-optimized indoor and outdoor open radio unit, integrated with NVIDIA AI Aerial Testbed and NVIDIA ARC platforms, that supports 5GA and 6G use cases.</li>
<li><a target="_blank" href="https://eridan.io/eridan-introduces-4t4r-radio-at-mwc-demonstrates-ai-ran-integration-with-nvidia">Eridan</a> has launched a 4T4R O-RU alongside its 2T2R O-RU, which has been integrated with NVIDIA AI Aerial and a DU running on the NVIDIA DGX Spark desktop supercomputer, combining spectrally efficient radios with GPU-based baseband processing in a powerful, portable outdoor base station.</li>
<li><a target="_blank" href="https://www.liteon.com/en/news/press-center/content/liteon-mwc-2026">LITEON</a> has completed integration of its sub-6 GHz and millimeter wave radio units with NVIDIA AI Aerial, and has expanded its collaboration with ecosystem partners like Supermicro and SynaXG to accelerate AI-RAN commercialization.</li>
</ul>
<h2><b>Laying the Foundation for Open, Secure, AI-Native 6G</b></h2>
<p>NVIDIA’s latest <a href="https://blogs.nvidia.com/blog/ai-in-telco-survey-2026/">State of AI in Telecom</a> report showed that the industry is stepping up AI-native RAN and 6G investments — signaling a major acceleration ahead of the traditional 6G deployment cycle, with 77% of respondents anticipating a much faster time to deployment for this new AI-native wireless network architecture.</p>
<p>This latest progress on software-defined AI-RAN is setting the stage for secure, open and AI-native 6G systems.</p>
<p>NVIDIA has already <a target="_blank" href="https://github.com/NVIDIA/aerial-cuda-accelerated-ran">open sourced NVIDIA Aerial CUDA-accelerated RAN</a> libraries, fueling the pace of AI-RAN innovation. NVIDIA has also now joined the <a target="_blank" href="https://ocudu.org/news/linux-foundation-announces-ocudu-ecosystem-foundation-to-accelerate-open-source-ai-ran-innovation">OCUDU</a> (Open CU DU) Ecosystem Foundation, hosted by the Linux Foundation, contributing to open source RAN software development to accelerate research and commercialization for next-generation wireless networks.</p>
<p><i>Learn more by meeting NVIDIA and partners at </i><a target="_blank" href="https://www.nvidia.com/en-us/events/mobile-world-congress/"><i>Mobile World Congress</i></a><i>. Explore key insights from the </i><a target="_blank" href="https://www.nvidia.com/en-us/lp/industries/telecommunications/state-of-ai-in-telecom-survey-report/"><i>State of AI in Telecom</i></a><i> survey.</i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/02/AI-RAN-Commercialization-Blog-Image.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/02/AI-RAN-Commercialization-Blog-Image-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA and Partners Show That Software-Defined AI-RAN Is the Next Wireless Generation]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Now Live: The World’s Most Powerful AI Factory for Pharmaceutical Discovery and Development</title>
		<link>https://blogs.nvidia.com/blog/lilly-ai-factory-live/</link>
		
		<dc:creator><![CDATA[Rory Kelleher]]></dc:creator>
		<pubDate>Thu, 26 Feb 2026 19:00:58 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[AI Factory]]></category>
		<category><![CDATA[AI for Good]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Healthcare and Life Sciences]]></category>
		<category><![CDATA[High-Performance Computing]]></category>
		<category><![CDATA[NVIDIA DGX]]></category>
		<category><![CDATA[Scientific Visualization]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90297</guid>

					<description><![CDATA[Lilly this week launched the most powerful AI factory wholly owned and operated by a pharmaceutical company to help its teams make meaningful medical advancements faster, more accurately and at unprecedented scale. Dubbed LillyPod, it’s the world’s first NVIDIA DGX SuperPOD with DGX B300 systems. ]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Lilly-AI-Factory_Poster-Image-scaled.png" type="image/png" width="2048" height="1152">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Lilly-AI-Factory_Poster-Image-scaled-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[Now Live: The World’s Most Powerful AI Factory for Pharmaceutical Discovery and Development]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>The Nightmare Returns in the Cloud: GeForce NOW Unleashes Capcom’s ‘Resident Evil Requiem’</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-resident-evil-requiem/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 26 Feb 2026 14:00:48 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90198</guid>

					<description><![CDATA[GeForce NOW’s anniversary celebration reaches a chilling crescendo as Capcom’s Resident Evil Requiem creeps into the cloud — and the horrors look better than ever on a GeForce NOW Ultimate membership. To mark the occasion, a special launch bundle rises from the shadows, pairing the game with a yearlong Ultimate membership for a limited time. [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/">GeForce NOW</a>’s anniversary celebration reaches a chilling crescendo as Capcom’s <i>Resident Evil Requiem </i>creeps into the cloud — and the horrors look better than ever on a GeForce NOW Ultimate membership.</p>
<p>To mark the occasion, a special launch bundle rises from the shadows, pairing the game with a yearlong Ultimate membership for a limited time.</p>
<p>It’s not a celebration party without treats. GeForce NOW is also offering members a new reward to use in <i>Delta Force</i>.</p>
<p>Suit up and grab it alongside 11 new games joining the cloud this week.</p>
<h2><b>The Nightmare Returns in the Cloud</b></h2>
<p>A new era of survival horror arrives with <em>Resident Evil Requiem</em>, the latest and most immersive entry yet in the iconic <em>Resident Evil</em> series. Experience terrifying survival horror with FBI analyst Grace Ashcroft and dive into pulse-pounding action with legendary agent Leon S. Kennedy. Their journeys and unique gameplay styles intertwine into a heart-stopping, emotional experience that will chill gamers to their core.</p>
<figure id="attachment_90211" aria-describedby="caption-attachment-90211" style="width: 1680px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-90211" src="https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-social-rtx-5080-spotlights-resident-evil-requiem-2048x1024-1-1680x840.jpg" alt="" width="1680" height="840" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-social-rtx-5080-spotlights-resident-evil-requiem-2048x1024-1-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-social-rtx-5080-spotlights-resident-evil-requiem-2048x1024-1-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-social-rtx-5080-spotlights-resident-evil-requiem-2048x1024-1-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-social-rtx-5080-spotlights-resident-evil-requiem-2048x1024-1-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-social-rtx-5080-spotlights-resident-evil-requiem-2048x1024-1-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-social-rtx-5080-spotlights-resident-evil-requiem-2048x1024-1.jpg 2048w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /><figcaption id="caption-attachment-90211" class="wp-caption-text"><em><i>Requiem for the dead. Nightmare for the living</i>.</em></figcaption></figure>
<p>With GeForce RTX 5080-class power in the cloud, experience <i>Requiem </i>with lifelike lighting, full path tracing, ray‑traced reflections and cinematic realism at up to 5K resolution with high dynamic range, plus <a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/dlss/">NVIDIA DLSS 4</a> with Multi Frame Generation for maximum performance. The Ultimate membership keeps every encounter smooth and immersive when streaming from powerful GPUs in the cloud.</p>
<p>To celebrate GeForce NOW’s sixth anniversary, a special launch offer emerges from the fog: For a limited time, <i>Resident Evil Requiem</i> is included with the purchase of a 12‑month Ultimate membership. It’s the perfect way to return to the city of disaster and despair, now more haunting and beautiful than ever.</p>
<h2><b>Priority Package</b></h2>
<p>GeForce NOW marks six epic years in the cloud, and the party lands on the <i>Delta Force</i> frontline with a new reward drop.</p>
<figure id="attachment_90203" aria-describedby="caption-attachment-90203" style="width: 1680px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-90203 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/02/GFN_Thursday_Delta_Force-1680x840.jpg" alt="Delta Force reward on GeForce NOW" width="1680" height="840" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/02/GFN_Thursday_Delta_Force-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/02/GFN_Thursday_Delta_Force-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/02/GFN_Thursday_Delta_Force-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/02/GFN_Thursday_Delta_Force-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/02/GFN_Thursday_Delta_Force-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/02/GFN_Thursday_Delta_Force.jpg 2048w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /><figcaption id="caption-attachment-90203" class="wp-caption-text"><em>What a drop.</em></figcaption></figure>
<p>Being a GeForce NOW member is rewarding. All members can get an edge in the <i>Delta Force</i> extraction and warfare game modes with a reward bundle packed with standard gear tickets, premium weapon XP tokens and armament vouchers to fine-tune loadouts and push every op further.</p>
<p>Performance and Ultimate members gain even more battlefield muscle with an early unlock of the PP‑19 Bizon, a weapon that brings close-quarters stopping power to every mission.</p>
<p>This special sixth-anniversary reward is available through Thursday, March 26, while supplies last. Redeem now and deploy.</p>
<h2><b>Ready, Set, Stream</b></h2>
<p>This week, members can look for the following:</p>
<ul>
<li><i>TCG Card Shop Simulator </i>(New release on <a target="_blank" href="https://www.xbox.com/games/store/tcg-card-shop-simulator-game-preview/9p9gx5d8h9dw?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on Game Pass, Feb. 24)</li>
<li><i>Blizzard Arcade Collection </i>(New release on <a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi=">Ubisoft Connect</a>, Feb. 25)</li>
<li><i>Diablo II: Resurrected </i>(New release on <a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi=">Ubisoft Connect</a>, Feb. 25)</li>
<li><i>Spellcasters Chronicles </i>(New release on <a target="_blank" href="https://store.steampowered.com/app/2458470?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, Feb. 26, GeForce RTX 5080-ready)</li>
<li><i>Resident Evil Requiem</i> (New release on <a target="_blank" href="https://store.steampowered.com/app/3764200?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>, Feb. 27, GeForce RTX 5080-ready)</li>
<li><i>Anno: Mutationem </i>(<a target="_blank" href="https://www.xbox.com/games/store/anno-mutationem/9pmhghxlgljm?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on Game Pass)</li>
<li><i>ARC Raiders </i>(<a target="_blank" href="https://www.xbox.com/games/store/arc-raiders/9ndf1f263rz4?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on the Microsoft Store, GeForce RTX 5080-ready)</li>
<li><i>DEVOUR </i>(<a target="_blank" href="https://store.steampowered.com/app/1274570?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>)</li>
<li><i>Galactic Civilizations 3 </i>(<a target="_blank" href="https://www.xbox.com/games/store/galactic-civilizations-iii/9NKBH79N7HVW?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on the Microsoft Store)</li>
<li><i>MotoGP22 </i>(<a target="_blank" href="https://www.xbox.com/games/store/motogp22---windows-edition/9NV1V176PTC2?utm_source=nvidia&amp;utm_campaign=geforce_now">Xbox</a>, available on the Microsoft Store)</li>
<li><i>Torque Drift 2 </i>(<a target="_blank" href="https://store.steampowered.com/app/3116640?utm_source=nvidia&amp;utm_campaign=geforce_now">Steam</a>)</li>
</ul>
<p>What are you planning to play this weekend? Let us know on <a target="_blank" href="https://www.twitter.com/nvidiagfn">X</a> or in the comments below.</p>
<blockquote class="twitter-tweet" data-width="550" data-dnt="true">
<p lang="en" dir="ltr">What&#39;s your bragging rights? <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f3c5.png" alt="🏅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Share the latest achievement you got in a game.</p>
<p>&mdash; <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f329.png" alt="🌩" class="wp-smiley" style="height: 1em; max-height: 1em;" /> NVIDIA GeForce NOW (@NVIDIAGFN) <a target="_blank" href="https://twitter.com/NVIDIAGFN/status/2025978963395305880?ref_src=twsrc%5Etfw">February 23, 2026</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-thursday-2-26-resident-evil-requiem-nv-blog-1280x680-logo.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/02/gfn-thursday-2-26-resident-evil-requiem-nv-blog-1280x680-logo-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[The Nightmare Returns in the Cloud: GeForce NOW Unleashes Capcom’s ‘Resident Evil Requiem’]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>From Radiology to Drug Discovery, Survey Reveals AI Is Delivering Clear Return on Investment in Healthcare</title>
		<link>https://blogs.nvidia.com/blog/ai-in-healthcare-survey-2026/</link>
		
		<dc:creator><![CDATA[Kathy Benemann]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 14:00:34 +0000</pubDate>
				<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Healthcare and Life Sciences]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90175</guid>

					<description><![CDATA[AI is accelerating every aspect of healthcare — from radiology and drug discovery to medical device manufacturing and new treatment methods enabled by digital twins of the human body. NVIDIA&#8217;s second annual “State of AI in Healthcare and Life Sciences” survey report reveals how the industry is moving from AI experimentation to execution, reaping return [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>AI is accelerating every aspect of healthcare — from radiology and drug discovery to medical device manufacturing and new treatment methods enabled by digital twins of the human body.</p>
<p>NVIDIA&#8217;s second annual “<a target="_blank" href="https://www.nvidia.com/en-us/lp/industries/healthcare-life-sciences/ai-survey-report/">State of AI in Healthcare and Life Sciences</a>” survey report reveals how the industry is moving from AI experimentation to execution, reaping return on investment (ROI) on core applications like medical imaging and drug discovery.</p>
<p>The industry is also embracing open source software and AI models to tackle specific use cases, and exploring agentic AI to speed knowledge retrieval and research paper analysis.</p>
<p>Highlights from this year’s report include:</p>
<ul>
<li>70% of respondents said their organizations are actively using AI, up from 63% in 2024.</li>
<li>69% said they’re using generative AI and large language models, up from 54%.</li>
<li>82% said open source software and models are moderately to extremely important to their organizations’ AI strategy.</li>
<li>47% said they’re using or assessing agentic AI.</li>
<li>85% of executives said AI is helping increase revenue, and 80% said it’s helping reduce costs.</li>
</ul>
<p>“Over the next 12-18 months, the most visible and scalable impact of AI will come from logistics and administrative streamlining,” said John Nosta, president of NostaLab, a healthcare think tank. “That’s where adoption curves are already steep — scheduling, documentation, coding, utilization management and care coordination.”</p>
<p>Read more below on some of the report’s key findings.</p>
<h2><strong>AI Adoption Ramps Up Across Healthcare and Life Sciences</strong></h2>
<p>AI adoption is up across every industry segment in this year’s survey — spanning digital healthcare, pharmaceutical and biotechnology, payers and providers, and medical technology and tools — with digital healthcare leading at 78%, followed by medical technology at 74%.</p>
<p>The top industry workload was generative AI and large language models, according to 69% of respondents. AI for data analytics and data science was the second most-used workload, followed by predictive analytics. New to the survey, agentic AI ranked fourth, with 47% of respondents saying they’re using or assessing AI agents.</p>
<p>“Scaling generative AI in healthcare starts with focusing on real clinical and operational problems, rather than the technology itself,” said Dr. Annabelle Painter, clinical AI strategy lead at Visiba U.K. “The organizations seeing impact are those that embed AI into existing workflows instead of layering AI on top as a separate tool.”</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90179 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-top-5-ai-areas-by-industry-segment.png" alt="" width="1013" height="336" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-top-5-ai-areas-by-industry-segment.png 1013w, https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-top-5-ai-areas-by-industry-segment-960x318.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-top-5-ai-areas-by-industry-segment-630x209.png 630w" sizes="auto, (max-width: 1013px) 100vw, 1013px" /></p>
<p>Healthcare and life sciences organizations are deploying these AI workloads across a variety of use cases, each specific to their primary functions. For example, 61% of respondents from medical technology said they’re using AI for medical imaging, such as radiologists using it to work more quickly and efficiently, while 57% from pharmaceutical and biotechnology said drug discovery is being driven by AI.</p>
<p>For the entire industry, the top AI use cases were clinical decision support (such as highlighting areas of concern on a scan for radiologists), medical imaging and workflow optimization.</p>
<h2><strong>AI Budgets to Increase With Strong ROI</strong></h2>
<p>AI is helping healthcare and life sciences organizations become even better at their core competencies — underscoring strong ROI.</p>
<p>In addition to increasing annual revenue and reducing annual costs, AI is boosting back-office productivity through workflow optimization and is scaling across other key business operations such as patient interaction and administrative tasks.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90185 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-ai-impact-on-revenue-costs.png" alt="" width="996" height="386" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-ai-impact-on-revenue-costs.png 996w, https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-ai-impact-on-revenue-costs-960x372.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-ai-impact-on-revenue-costs-630x244.png 630w" sizes="auto, (max-width: 996px) 100vw, 996px" /></p>
<p>For example, 57% of respondents from the medical technology segment reported seeing ROI from deploying AI for medical imaging. Nearly half (46%) of pharmaceutical and biotechnology respondents said AI for drug discovery and development was among their top ROI use cases.</p>
<p>The top ROI use case for digital healthcare providers was virtual health assistants and chatbots, cited by 37% of respondents, while 39% of respondents from payers and providers (which include hospitals, primary care providers and insurance companies) cited administrative tasks and workflow optimization as their top area of ROI.</p>
<p>As a result of AI’s positive impact, 85% of respondents said their AI budgets would increase this year, with another 12% saying budgets would stay the same. For almost half of respondents (46%), AI spending will increase significantly, by more than 10%.</p>
<p>“Healthcare organizations that successfully integrate AI are those that explicitly fund and prioritize evaluation as a core operational function, ensuring AI delivers measurable improvements in safety, quality and patient care over time,” said Painter.</p>
<h2><strong>Using Open Source for Domain-Specific AI Deployment</strong></h2>
<p>Leaning into open source models and software allows enterprises to build domain-specific applications, lending them greater flexibility and efficiency while boosting business returns.</p>
<p>The healthcare industry has embraced open source, with 82% of survey respondents stating it’s moderately to extremely important to their AI strategy.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-90182 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-open-source-software.png" alt="" width="996" height="222" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-open-source-software.png 996w, https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-open-source-software-960x214.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-2026-open-source-software-630x140.png 630w" sizes="auto, (max-width: 996px) 100vw, 996px" /></p>
<p>“Open models will shape the intellectual field,” said Nosta. “They are essential for exploration and for keeping the field honest. But in clinical environments where safety, liability and accountability are nonnegotiable, proprietary systems will remain necessary for validation, integration and trust. The key insight here is that discovery will be open, and deployment will demand stewardship.”</p>
<p>Download the “<a target="_blank" href="https://www.nvidia.com/en-us/lp/industries/healthcare-life-sciences/ai-survey-report/">State of AI in Healthcare and Life Sciences: 2026 Trends</a>” report for in-depth results and insights.</p>
<p><i>Sign up for </i><a target="_blank" href="https://www.nvidia.com/en-us/industries/healthcare-life-sciences/healthcare-news-sign-up/"><i>NVIDIA’s healthcare and life sciences newsletter</i></a><i>.</i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-promo-pack-state-of-AI-2026-1280x680-1.png" type="image/png" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/02/hc-promo-pack-state-of-AI-2026-1280x680-1-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[From Radiology to Drug Discovery, Survey Reveals AI Is Delivering Clear Return on Investment in Healthcare]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
	</channel>
</rss>
