<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/">

<channel>
	<title>NVIDIA Blog</title>
	<atom:link href="https://blogs.nvidia.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://blogs.nvidia.com/</link>
	<description></description>
	<lastBuildDate>Thu, 30 Apr 2026 21:56:53 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Nemotron Labs: What OpenClaw Agents Mean for Every Organization</title>
		<link>https://blogs.nvidia.com/blog/what-openclaw-agents-mean-for-every-organization/</link>
		
		<dc:creator><![CDATA[Justin Boitano]]></dc:creator>
		<pubDate>Thu, 30 Apr 2026 20:00:39 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Nemotron]]></category>
		<category><![CDATA[Nemotron Labs]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92588</guid>

					<description><![CDATA[By early 2026, the open source project OpenClaw had become a phenomenon. In January, its GitHub star count crossed 100,000 as developer interest surged.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><i><span style="font-weight: 400;">Editor’s note: This post is part of the </span></i><a href="https://blogs.nvidia.com/blog/tag/nemotron-labs/"><i><span style="font-weight: 400;">Nemotron Labs</span></i></a><i><span style="font-weight: 400;"> blog series, which explores how the latest open models, datasets and training techniques help businesses build specialized AI systems and applications on NVIDIA platforms. Each post highlights practical ways to use an open stack to deliver real value in production — from transparent research copilots to scalable AI agents.</span></i></p>
<p><span style="font-weight: 400;">By early 2026, the open source project </span><a target="_blank" href="https://github.com/openclaw/openclaw"><span style="font-weight: 400;">OpenClaw</span></a><span style="font-weight: 400;"> had become a phenomenon. In January, its GitHub star count crossed 100,000 as developer interest surged. Community dashboards and traffic analytics showed more than 2 million visitors in a single week. By March, OpenClaw topped 250,000 stars — overtaking React to become the most-starred software project on GitHub in just 60 days.</span></p>
<p><img fetchpriority="high" decoding="async" class="aligncenter wp-image-92599 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/star-history-chart-nemotron-labs.jpg" alt="" width="433" height="316" /></p>
<p><span style="font-weight: 400;">Created by </span><a target="_blank" href="https://x.com/steipete"><span style="font-weight: 400;">Peter Steinberger</span></a><span style="font-weight: 400;">, OpenClaw is a self-hosted, persistent AI assistant designed to run locally or on private servers. The project drew attention for its accessibility and unbounded autonomy: Users could deploy an AI model locally without depending on cloud infrastructure or external application programming interfaces (APIs).</span></p>
<p><span style="font-weight: 400;">Most </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/ai-agents/"><span style="font-weight: 400;">AI agents</span></a><span style="font-weight: 400;"> today are triggered by a prompt, complete a defined task and then stop running. A long-running autonomous agent, or “claw,” works differently. These agents run persistently in the background, completing tasks on their own and surfacing only what requires a human decision. They operate on a heartbeat: At regular intervals, they check their task list, evaluate what needs action, and either act or wait for the next cycle.</span></p>
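The heartbeat pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not OpenClaw's actual API — every class and task name here is hypothetical:

```python
import time
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    ready: bool = False        # does this task need action yet?
    needs_human: bool = False  # must a person decide?

class HeartbeatAgent:
    """Toy long-running agent: on each beat it checks its task list,
    acts autonomously where it can, and surfaces only the items that
    require a human decision."""

    def __init__(self, tasks, interval_seconds=0.0):
        self.tasks = list(tasks)
        self.interval_seconds = interval_seconds
        self.completed = []   # acted on autonomously
        self.escalated = []   # surfaced for a human

    def beat(self):
        # One heartbeat: evaluate every task, act or wait.
        for task in list(self.tasks):
            if not task.ready:
                continue  # wait for a future cycle
            bucket = self.escalated if task.needs_human else self.completed
            bucket.append(task.name)
            self.tasks.remove(task)

    def run(self, max_beats=10):
        # The persistent loop: beat, sleep, repeat until done.
        for _ in range(max_beats):
            self.beat()
            if not self.tasks:
                break
            time.sleep(self.interval_seconds)
```

For example, an agent given a routine task, an approval-gated task and a not-yet-ready task would complete the first, escalate the second and leave the third waiting for a later cycle.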
<p><span style="font-weight: 400;">OpenClaw’s rapid adoption also sparked debate. Security researchers raised concerns about how self-hosted AI tools manage sensitive data, authentication and model updates. Others questioned whether local deployments could expose users to new risks — from unpatched server instances to malicious contributions in community forks. As contributors and maintainers worked to address these issues, OpenClaw’s rise prompted a broader conversation across the AI ecosystem about the trade-offs between openness, privacy and safety.</span></p>
<p><span style="font-weight: 400;">To help enhance the security and robustness of the </span><a target="_blank" href="https://openclaw.ai/"><span style="font-weight: 400;">OpenClaw</span></a><span style="font-weight: 400;"> project, NVIDIA is collaborating with </span><a target="_blank" href="https://www.ted.com/talks/peter_steinberger_how_i_created_openclaw_the_breakthrough_ai_agent"><span style="font-weight: 400;">Steinberger</span></a><span style="font-weight: 400;"> and the OpenClaw developer community to address potential vulnerabilities, as detailed in a </span><a target="_blank" href="https://openclaw.ai/blog/openclaw-security-in-public"><span style="font-weight: 400;">recent </span><span style="font-weight: 400;">blog post </span><span style="font-weight: 400;">by OpenClaw</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">NVIDIA contributes code and guidance focused on improving model isolation, better managing local data access and strengthening the processes for verifying community code contributions. The goal is to support the project’s momentum by contributing NVIDIA’s security and systems expertise in an open, transparent way that strengthens the community’s work while preserving OpenClaw’s independent governance.</span></p>
<p><span style="font-weight: 400;">To help make long-running agents safer for enterprises, NVIDIA also introduced NVIDIA NemoClaw, a reference implementation that uses a single command to install OpenClaw, the NVIDIA OpenShell secure runtime and NVIDIA Nemotron open models with hardened defaults for networking, data access and security. NemoClaw serves as a blueprint for organizations to deploy claws more securely.</span></p>
<p><iframe title="OpenClaw: The ChatGPT Moment for Long-Running, Autonomous Agents" width="1200" height="675" src="https://www.youtube.com/embed/-2nCxItGNvE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><strong>Inference Demand Multiplies With Each AI Wave</strong></h2>
<p><span style="font-weight: 400;">AI has moved through four phases, and the time between each is shortening. Predictive AI took years to become mainstream. </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/generative-ai/"><span style="font-weight: 400;">Generative AI</span></a><span style="font-weight: 400;"> moved faster. </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/ai-reasoning/"><span style="font-weight: 400;">Reasoning AI</span></a><span style="font-weight: 400;"> arrived faster still. Autonomous AI — the wave OpenClaw represents — is setting an even faster pace.</span></p>
<p><span style="font-weight: 400;">What compounds with each wave is </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/ai-inference/"><span style="font-weight: 400;">inference</span></a><span style="font-weight: 400;"> demand. Generative AI increased </span><a href="https://blogs.nvidia.com/blog/ai-tokens-explained/"><span style="font-weight: 400;">token</span></a><span style="font-weight: 400;"> usage over predictive AI. Reasoning AI increased it another 100x. Autonomous agents, which run continuously and act across long time horizons, drive inference demand up by another 1,000x over reasoning AI. Each wave multiplies the compute required.</span></p>
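To make the compounding concrete, here is a back-of-envelope sketch using the multipliers cited above. The per-request baseline is a made-up placeholder, not a figure from this post:

```python
# Illustrative arithmetic only: compound the inference-demand
# multipliers cited above. The 1,000-token generative baseline
# is a hypothetical placeholder, not a measured figure.
generative_tokens = 1_000                        # assumed per-request baseline
reasoning_tokens = generative_tokens * 100       # reasoning AI: ~100x generative
autonomous_tokens = reasoning_tokens * 1_000     # autonomous agents: ~1,000x reasoning

print(f"{autonomous_tokens:,} tokens")
```

Under these assumptions, a single autonomous workload consumes on the order of 100,000x the tokens of its generative-AI equivalent, which is why each wave multiplies the compute required.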
<p><img decoding="async" class="aligncenter size-medium wp-image-92602" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-demand-graphic-nemotron-labs-960x367.jpg" alt="" width="960" height="367" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-demand-graphic-nemotron-labs-960x367.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-demand-graphic-nemotron-labs-630x241.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-demand-graphic-nemotron-labs.jpg 1210w" sizes="(max-width: 960px) 100vw, 960px" /></p>
<p><span style="font-weight: 400;">This increase in token usage is enabling organizations to boost their productivity by orders of magnitude. For example, long-running agents can help researchers work through a problem overnight, iterate on a design across thousands of configurations, or monitor systems and surface only the anomalies that require human judgment — freeing up researchers’ work days for higher-value tasks.</span></p>
<h2><strong>Choosing the Tool: When to Deploy a ‘Claw’</strong></h2>
<p><span style="font-weight: 400;">While generative AI has become a staple for on-demand tasks, there are specific scenarios where the persistent “heartbeat” of a claw offers distinct advantages. Determining when to move from a standard prompt-based AI to a long-running agent often comes down to the nature of the workflow:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>From “On-Demand” to “Always-On”:</b><span style="font-weight: 400;"> While standard models are excellent for immediate, human-triggered queries, claws are often better suited for tasks that require continuous background monitoring or periodic system checks without a manual start.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Managing High-Iteration Loops: </b><span style="font-weight: 400;">For complex problems, like testing thousands of chemical combinations or simulating infrastructure stress tests, a claw can manage the sheer volume of iterations that might otherwise be bottlenecked by human intervention.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Shifting from Suggestions to Actions</b><span style="font-weight: 400;">: In many workflows, standard AI is used to provide information or drafts. A claw is often considered when the goal is for the AI to move into the execution phase — interacting with APIs, updating databases or managing files across a long time horizon.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Resource Optimization:</b><span style="font-weight: 400;"> For massive, token-heavy reasoning tasks, deploying a local claw on dedicated hardware like an </span><a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/dgx-spark/"><span style="font-weight: 400;">NVIDIA DGX Spark</span></a><span style="font-weight: 400;"> personal AI supercomputer allows for more predictable costs and data privacy compared with high-frequency cloud API calls.</span></li>
</ul>
<h2><strong>How Are Organizations Using Long-Running Autonomous Agents?</strong></h2>
<p><span style="font-weight: 400;">The practical applications of long-running autonomous agents span every function and sector.</span></p>
<p><span style="font-weight: 400;">In financial services, agents continuously monitor trading systems and regulatory feeds, flagging material events before the morning review. In drug discovery, agents sweep new scientific literature, extracting relevant findings and updating internal databases in real time without researcher intervention — a process that previously took weeks.</span></p>
<p><span style="font-weight: 400;">In engineering and manufacturing, agents speed problem analysis by testing thousands of parameter combinations, ranking results and flagging the configurations worth examining — and all this can happen overnight. </span></p>
<p><span style="font-weight: 400;">In IT operations, agents diagnose infrastructure incidents, apply known remediations and escalate only the novel problems — compressing average time to resolution from hours to minutes. At </span><span style="font-weight: 400;">ServiceNow</span><span style="font-weight: 400;">, AI agents leveraging Apriel and NVIDIA Nemotron models can resolve 90% of tickets autonomously. </span></p>
<p><iframe loading="lazy" title="How ServiceNow’s AI Agents Resolve 90% of Tickets Autonomously" width="1200" height="675" src="https://www.youtube.com/embed/ZPC6cfr1RVk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><strong>How Can Companies Deploy Autonomous Agents Responsibly? </strong></h2>
<p><span style="font-weight: 400;">Autonomous agents are hands-on. They can send communications, write files, call APIs and update live systems. When an agent produces a wrong action, there are real consequences. Getting the accountability framework right from the start is essential, and organizations deploying autonomous agents in production must treat governance as a first-order requirement.</span></p>
<p><span style="font-weight: 400;">Organizations need to see what their agents are doing, inspect their reasoning at each step, audit their actions and intervene when needed. </span></p>
<p><span style="font-weight: 400;">Organizations deploying autonomous agents responsibly are focused on three priorities: </span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>An open, auditable framework:</b><span style="font-weight: 400;"> NemoClaw is built on OpenClaw’s MIT-licensed codebase, which means organizations own the full agent harness. They can read, fork and modify every layer of how their agents are built and deployed. That transparency enables teams to understand and control the system at the code level. Running open source models like </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400;">NVIDIA Nemotron</span></a><span style="font-weight: 400;"> locally keeps sensitive workloads, including patient records, legal documents, financial transactions and proprietary research, within the organization’s own environment, ensuring that trace data stays under organizational control.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Securing the runtime environment:</b> <a target="_blank" href="https://www.nvidia.com/en-us/ai/nemoclaw/"><span style="font-weight: 400;">NemoClaw</span></a><span style="font-weight: 400;"> runs agents inside </span><a href="https://blogs.nvidia.com/blog/secure-autonomous-ai-agents-openshell/"><span style="font-weight: 400;">OpenShell</span></a><span style="font-weight: 400;">, a sandboxed environment that defines precisely what the agent can and cannot do, enforcing clear permission boundaries from the start. </span></li>
<li style="font-weight: 400;" aria-level="1"><b>Local compute:</b><span style="font-weight: 400;"> NVIDIA DGX Spark supercomputers deliver data-center-class GPU performance in a deskside form factor built for continuous local inference that’s always on, with local model hosting and data that stays within the organization’s environment. </span><a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/dgx-station/"><span style="font-weight: 400;">NVIDIA DGX Station</span></a><span style="font-weight: 400;"> systems scale that capability for teams running multiple agents simultaneously across complex, sustained workloads. </span></li>
</ul>
<p><span style="font-weight: 400;">The organizations defining what autonomous agents do in practice are accumulating something valuable: months of live operational learning, governance frameworks developed through real workloads and agents that have absorbed the institutional context that makes them genuinely useful. This foundation will only deepen over time.</span></p>
<h2><b>Get Started With NVIDIA NemoClaw</b></h2>
<p><span style="font-weight: 400;">Access a step-by-step tutorial on </span><a target="_blank" href="https://developer.nvidia.com/blog/build-a-secure-always-on-local-ai-agent-with-nvidia-nemoclaw-and-openclaw/"><span style="font-weight: 400;">how to build a more secure AI agent with NemoClaw on NVIDIA DGX Spark</span></a><span style="font-weight: 400;">. Explore how NemoClaw helps deploy more secure, always-on AI assistants with a single command.</span></p>
<p><iframe loading="lazy" title="Build an Always-On AI Assistant with OpenClaw and NemoClaw on DGX Spark" width="1200" height="675" src="https://www.youtube.com/embed/nCy5Hpg-ozU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">Experiment with NemoClaw, available on </span><a target="_blank" href="https://github.com/NVIDIA/NemoClaw"><span style="font-weight: 400;">GitHub</span></a><span style="font-weight: 400;">, </span><span style="font-weight: 400;">and j</span><span style="font-weight: 400;">oin the community of developers on </span><a target="_blank" href="https://discord.com/channels/1019361803752456192/1482072289511211200"><span style="font-weight: 400;">Discord</span></a><span style="font-weight: 400;"> building with </span><a target="_blank" href="https://build.nvidia.com/spark/nemoclaw/overview"><span style="font-weight: 400;">NemoClaw using NVIDIA Nemotron 3 Super and Telegram on DGX Spark</span></a><span style="font-weight: 400;">.</span></p>
<p><i><span style="font-weight: 400;">Stay up to date on agentic AI, </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><i><span style="font-weight: 400;">NVIDIA Nemotron</span></i></a><i><span style="font-weight: 400;"> and more by subscribing to </span></i><a target="_blank" href="https://www.nvidia.com/en-us/executive-insights/generative-ai-tools/?modal=stay-inf"><i><span style="font-weight: 400;">NVIDIA AI news</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://developer.nvidia.com/community"><i><span style="font-weight: 400;">joining the community</span></i></a><i><span style="font-weight: 400;"> and following NVIDIA AI on </span></i><a target="_blank" href="https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all"><i><span style="font-weight: 400;">LinkedIn</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://www.instagram.com/nvidiaai/?hl=en"><i><span style="font-weight: 400;">Instagram</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://x.com/NVIDIAAIDev"><i><span style="font-weight: 400;">X</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://www.facebook.com/NVIDIAAI"><i><span style="font-weight: 400;">Facebook</span></i></a><i><span style="font-weight: 400;">.  </span></i></p>
<p><i><span style="font-weight: 400;">Explore </span></i><a target="_blank" href="https://youtube.com/playlist?list=PL5B692fm6--vdRKB14FImVi7MTJ77zjn4&amp;feature=shared"><i><span style="font-weight: 400;">self-paced video tutorials and livestreams</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemotron-labs-openclaw-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemotron-labs-openclaw-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Nemotron Labs: What OpenClaw Agents Mean for Every Organization]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>It’s Gonna Be May: 16 Games Hit the Cloud This Month, With More NVIDIA GeForce RTX 5080 Power</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-may-2026-games-list/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 30 Apr 2026 13:00:47 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92775</guid>

					<description><![CDATA[[Editor&#8217;s note] The blog has been updated to note that GeForce RTX 5080-power expansion also extends to the Install-to-Play library. It’s gonna be May — and the cloud’s in full festival mode.  16 games are joining GeForce NOW this month, including new AAA titles arriving on launch day from Steam, Xbox, PC Game Pass and [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><em>[Editor&#8217;s note] The blog has been updated to note that GeForce RTX 5080-power expansion also extends to the Install-to-Play library.</em></p>
<p><span style="font-weight: 400">It’s gonna be May — and the cloud’s in full festival mode. </span></p>
<p><span style="font-weight: 400">Sixteen games are joining </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/"><span style="font-weight: 400">GeForce NOW</span></a><span style="font-weight: 400"> this month, including new AAA titles arriving on launch day from Steam, Xbox, PC Game Pass and GOG, so members can stream their PC libraries instantly across almost any device. Headline launches include the highly anticipated </span><i><span style="font-weight: 400">Forza Horizon 6</span></i><span style="font-weight: 400"> and </span><i><span style="font-weight: 400">007 First Light,</span></i><span style="font-weight: 400"> both ready to hit the cloud on day one.</span></p>
<p><span style="font-weight: 400">Meanwhile, the GeForce NOW Ultimate membership dials things up with expanded RTX 5080‑class performance across the cloud gaming library, bringing higher frame rates, richer visuals and more responsive gameplay to even more of the games members already love. Ultimate members now get priority access to RTX 5080‑class rigs, making it easier than ever to tap into next‑generation PC power from almost any device.</span></p>
<p><span style="font-weight: 400">GeForce NOW is also celebrating 30 years of Firaxis in style, with more of the studio’s classics coming as Install‑to‑Play titles for instant access — just in time for a Steam celebration sale on select Firaxis games supported in the cloud.</span></p>
<p><span style="font-weight: 400">And this week kicks things off with </span><span style="font-weight: 400">six </span><span style="font-weight: 400">new games dropping into the cloud, setting the stage for a May packed with new adventures.</span></p>
<h2><b>The Cloud Keeps Getting Bigger</b></h2>
<p><span style="font-weight: 400">AAA titles are landing throughout the month on their launch dates. Members can pick them up from their favorite PC stores — Steam, Xbox, PC Game Pass and GOG — and stream their PC gaming libraries instantly on GeForce NOW without having to repurchase titles to play across devices.</span></p>
<p><span style="font-weight: 400">The highly anticipated titles hitting the cloud this month include: </span></p>
<figure id="attachment_92790" aria-describedby="caption-attachment-92790" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92790" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-1680x945.jpg" alt="forza horizon 6 on GFN" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ForzaHorizon6-PreOrder-06-Cover_Car_Mt_Fuji-16x9_WM-1-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92790" class="wp-caption-text"><em>Stream it on GeForce NOW.</em></figcaption></figure>
<p><b><i>Forza Horizon 6</i></b><b> from Playground Games:</b><span style="font-weight: 400"> The Horizon festival is back and bigger than ever. This is Horizon Japan — stretching from the iconic downtown streets of Tokyo City all the way to the snowy Japanese Alps — introducing the most dense and vertical map yet. A massive new open world, a stacked car roster and that legendary “one more race” energy are all heading to GeForce NOW on the title’s launch date. Members can pick it up on PC via Steam and Xbox, including Game Pass, then hit full throttle from the cloud on almost any device.</span></p>
<p><b><i>007 First Light</i></b><b> from IO Interactive:</b><span style="font-weight: 400"> The game brings a modern James Bond origin story to the cloud when it launches Wednesday, May 27, for PC, inviting members to step into the shadows as Bond begins his journey. Experience a mix of stealth and action built around a “breathing” gameplay loop with IOI’s cinematic, set piece-driven storytelling. Approach encounters through stealth, direct action or creative improvisation, with multiple viable paths to each objective that let players define Bond’s style. It’s ready on day one for players grabbing it on Steam, Xbox on PC or GOG.</span></p>
<h2><b>More Power</b></h2>
<p><span style="font-weight: 400">GeForce NOW Ultimate is expanding what premium cloud gaming looks like. Starting today, Ultimate members can stream even more of their games on RTX 5080 virtual gaming rigs — bringing the power of the </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/"><span style="font-weight: 400">NVIDIA Blackwell RTX</span></a><span style="font-weight: 400"> architecture to a wide range of titles.</span></p>
<p><span style="font-weight: 400">This update significantly broadens access to 5080 performance beyond the list of GeForce RTX 5080-optimized titles. Now, across nearly the entire GeForce NOW library, members can experience higher frame rates, richer visuals and more responsive gameplay by default at up to 5K resolution and 120 frames per second, or 360 fps at 1080p.</span></p>
<p><span style="font-weight: 400">With RTX 5080 in the cloud, Ultimate members unlock the same cutting-edge features available to GeForce RTX 50 Series GPU owners. That includes </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/dlss/"><span style="font-weight: 400">NVIDIA DLSS 4</span></a><span style="font-weight: 400"> technology for sharper image quality and higher performance, </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/reflex/"><span style="font-weight: 400">NVIDIA Reflex</span></a><span style="font-weight: 400"> for reduced system latency and faster input response, and advanced ray tracing for more lifelike lighting and reflections. </span></p>
<p><span style="font-weight: 400">With RTX 5080 powering the default Ultimate experience, GeForce NOW delivers next-generation performance to more games, extending the value of a GeForce NOW membership. See more details in the </span><a target="_blank" href="https://nvidia.custhelp.com/app/answers/detail/a_id/5702"><span style="font-weight: 400">knowledgebase article</span></a><span style="font-weight: 400">.</span></p>
<h2><b>30 Years of ‘One More Turn’</b></h2>
<figure id="attachment_92787" aria-describedby="caption-attachment-92787" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92787" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-spotlight-civilization-v-tw-li-2048x1024-1-1680x840.jpg" alt="Firaxis games on GFN" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-spotlight-civilization-v-tw-li-2048x1024-1-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-spotlight-civilization-v-tw-li-2048x1024-1-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-spotlight-civilization-v-tw-li-2048x1024-1-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-spotlight-civilization-v-tw-li-2048x1024-1-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-spotlight-civilization-v-tw-li-2048x1024-1-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-spotlight-civilization-v-tw-li-2048x1024-1.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92787" class="wp-caption-text"><em>Happy 30th, O mighty Firaxis Games.</em></figcaption></figure>
<p><span style="font-weight: 400">GeForce NOW is celebrating three decades of Firaxis — the legendary studio behind </span><i><span style="font-weight: 400">Civilization </span></i><span style="font-weight: 400">and </span><i><span style="font-weight: 400">XCOM </span></i><span style="font-weight: 400">— with a growing lineup of some of the best strategy games on the cloud.</span></p>
<p><span style="font-weight: 400">Throughout May, GeForce NOW is rolling out a collection of classic Firaxis games, available as </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/how-to-play/#install-to-play"><span style="font-weight: 400">Install-to-Play titles</span></a><span style="font-weight: 400"> for instant access. Expanding beyond </span><i><span style="font-weight: 400">Civilization V</span></i><span style="font-weight: 400"> and </span><i><span style="font-weight: 400">Civilization VI </span></i><span style="font-weight: 400">and the series’ signature “one more turn” gameplay, GeForce NOW will showcase the studio’s broader legacy. </span></p>
<p><span style="font-weight: 400">Take to the stars in </span><i><span style="font-weight: 400">Sid Meier’s Civilization: Beyond Earth</span></i><span style="font-weight: 400">, lay tracks in </span><i><span style="font-weight: 400">Sid Meier’s Railroads!</span></i><span style="font-weight: 400">, defend the globe in </span><i><span style="font-weight: 400">XCOM 2</span></i><span style="font-weight: 400">, and command the skies in</span><i><span style="font-weight: 400"> Ace Patrol</span></i><span style="font-weight: 400"> and </span><i><span style="font-weight: 400">Ace Patrol: Pacific Skies</span></i><span style="font-weight: 400">.</span></p>
<p><span style="font-weight: 400">Add to that the anniversary celebration on Steam with discounts of up to 90% on games in the Firaxis catalog for a limited time. It’s the perfect time for members to build their Firaxis collection in the cloud.</span></p>
<h2><b>May-jor New Games</b></h2>
<figure id="attachment_92784" aria-describedby="caption-attachment-92784" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92784" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-1680x945.jpg" alt="heroes of Might and magic: olden era" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/ss_fd4133c98cd56fac2cf1f45893ae62dc0037078a-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92784" class="wp-caption-text"><em>One wrong turn from glory to graveyard.</em></figcaption></figure>
<p><i><span style="font-weight: 400">Heroes of Might and Magic: Olden Era</span></i><span style="font-weight: 400"> is the official prequel that returns to the roots of the genre-defining, critically acclaimed turn-based strategy series. Raise grand armies and wield devastating spells to overcome foes in solo and multiplayer, set against a grim, war-torn world of uneasy alliances and hard choices. Now available to stream on GeForce NOW, the game lets every decision shape a path defined by power, strategy and hard-earned victories.</span></p>
<figure id="attachment_92781" aria-describedby="caption-attachment-92781" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92781" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Anno_117_DLC-1680x840.jpg" alt="Anno 117 DLC on GeForce NOW" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Anno_117_DLC-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Anno_117_DLC-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Anno_117_DLC-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Anno_117_DLC-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Anno_117_DLC-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Anno_117_DLC.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92781" class="wp-caption-text"><em>All roads lead to Rome. Some also lead directly into a volcano.</em></figcaption></figure>
<p><i><span style="font-weight: 400">Anno 117: PAX Romana</span></i><span style="font-weight: 400"> is turning up the heat with its first downloadable content (DLC), Prophecies of Ash, dropping a massive new volcanic island into the heart of the Roman Empire. Build a thriving city on Cinis, juggle new production chains like obsidian crafting along with new items, and try to stay on the good side of Vulcan — a new deity to worship — while a very active volcano looms in the background. Additionally, the DLC introduces a new resource; a new trader, Ceacilia; and several new specialists. Available to stream on GeForce NOW, the DLC is ready to play without waiting on big downloads or patches — just jump in, claim the new land and see whether the empire can handle a little fire and brimstone.</span></p>
<p><span style="font-weight: 400">Check out what else is available this week:</span><i></i></p>
<ul>
<li><i><span style="font-weight: 400">Global Rescue </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2873660?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 27)</span></li>
<li><i><span style="font-weight: 400">s&amp;box </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/590830?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 28)</span></li>
<li><i><span style="font-weight: 400">Far Far West </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3124540?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 28)</span></li>
<li><i><span style="font-weight: 400">INDUSTRIA 2 </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2154070?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 29)</span></li>
<li><i><span style="font-weight: 400">Heroes of Might and Magic: Olden Era</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/3105440?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/heroes-of-might-and-magic-olden-era-game-preview/9p2tfccrkmrd?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass, April 30)</span></li>
<li><i><span style="font-weight: 400">Bus Bound </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2095420?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 30)</span><i></i></li>
</ul>
<p><span style="font-weight: 400">And look forward to the games coming throughout the month:</span></p>
<ul>
<li><i><span style="font-weight: 400">Conan Exiles </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/440900?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.epicgames.com/store/p/conan-exiles?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Epic Games Store</span></a><span style="font-weight: 400">, May 5)</span></li>
<li><i><span style="font-weight: 400">Dead as Disco </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3404260?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, May 5)</span></li>
<li><i><span style="font-weight: 400">HUNTDOWN: OVERTIME </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2473350?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, May 7)</span></li>
<li><i><span style="font-weight: 400">Outbound </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2681030?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, May 14)</span></li>
<li><i><span style="font-weight: 400">Deep Rock Galactic: Rogue Core </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2605790?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, May 18)</span></li>
<li><i><span style="font-weight: 400">Forza Horizon 6</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/2483190?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/forza-horizon-6/9nr1r1xwlcnb?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass, May 19)</span></li>
<li><i><span style="font-weight: 400">Luna Abyss </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/1933000?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, May 21)</span></li>
<li><i><span style="font-weight: 400">ZERO PARADES</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/2863680?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, May 21)</span></li>
<li><i><span style="font-weight: 400">Starminer </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/1116050?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, May 27)</span></li>
<li><i><span style="font-weight: 400">007 First Light</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/3768760?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://www.epicgames.com/store/p/007-first-light-182cea?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Epic Games Store</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/007-first-light/9pj34m93zv7z?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on the Microsoft store, May 27)</span></li>
<li><i><span style="font-weight: 400">Hotel Architect </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/1602000?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Kiln</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/1165990?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/kiln/9mw53zkzh168?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass)</span></li>
<li><i><span style="font-weight: 400">Nuclear Option </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2168680?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Sintopia </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2213700?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Sudden Strike 5 </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2808550?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Super Battle Golf </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/4069520?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
</ul>
<h2><b>Adding to April</b></h2>
<p><span style="font-weight: 400">In addition to the 10 games announced last month, </span><span style="font-weight: 400">11 </span><span style="font-weight: 400">more joined the </span><a target="_blank" href="https://play.geforcenow.com"><span style="font-weight: 400">GeForce NOW library</span></a><span style="font-weight: 400">:</span></p>
<ul>
<li><i><span style="font-weight: 400">‘83</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/1059220?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Crimson Desert </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://www.xbox.com/games/store/crimson-desert/9ndmb2smq5q7?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available to play via </span><a target="_blank" href="https://www.xbox.com/games/xbox-play-anywhere"><span style="font-weight: 400">Xbox Play Anywhere</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">DayZ</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://www.xbox.com/games/store/dayz/bsr9nlhvf1kl?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass) </span></li>
<li><i><span style="font-weight: 400">Diablo III </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi="><span style="font-weight: 400">Ubisoft Connect</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">MapleStory M</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/3969080?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Morbid Metal</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/1866130?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">PRAGMATA SKETCHBOOK &#8211; Demo</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/4003800?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Rayman: 30th Anniversary Edition</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/4094670?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://store.ubi.com/69683af797044c480eb79e03.html?ucid=AFL-ID_152062&amp;maltcode=geforcenow_convst_AFL_geforcenow_vg__STORE____&amp;addinfo="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Tides of Tomorrow </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2678080?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Vampire Crawlers: The Turbo Wildcard from Vampire Survivors </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://www.xbox.com/games/store/vampire-crawlers-the-turbo-wildcard-from-vampire-survivors/9nmpvj7tcfd0?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Windrose </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/3041230/Windrose/"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
</ul>
<p><i><span style="font-weight: 400">Outbound</span></i><span style="font-weight: 400"> didn’t make it to the cloud this month because its launch date was moved. Stay tuned to GFN Thursday for the latest.</span></p>
<p><span style="font-weight: 400">What are you planning to play this weekend? </span></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-30-nv-blog-1280x680-logo.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-30-nv-blog-1280x680-logo-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[It’s Gonna Be May: 16 Games Hit the Cloud This Month, With More NVIDIA GeForce RTX 5080 Power]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA Launches Nemotron 3 Nano Omni Model, Unifying Vision, Audio and Language for up to 9x More Efficient AI Agents</title>
		<link>https://blogs.nvidia.com/blog/nemotron-3-nano-omni-multimodal-ai-agents/</link>
		
		<dc:creator><![CDATA[Kari Briski]]></dc:creator>
		<pubDate>Tue, 28 Apr 2026 16:00:28 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Nemotron]]></category>
		<category><![CDATA[NVIDIA NeMo]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92732</guid>

					<description><![CDATA[AI agent systems today juggle separate models for vision, speech and language — losing time and context as they pass data from one model to the other. Unveiled today, NVIDIA Nemotron 3 Nano Omni is an open multimodal model that brings these capabilities together into one system, enabling agents to deliver faster, smarter responses with [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>AI agent systems today juggle separate models for vision, speech and language — losing time and context as they pass data from one model to the other.</p>
<p><span style="font-weight: 400;">Unveiled today, NVIDIA Nemotron 3 Nano Omni is an open multimodal model that brings these capabilities together into one system, </span><span style="font-weight: 400;">enabling agents to deliver faster, smarter responses with advanced reasoning across video, audio, image and text. </span><span style="font-weight: 400;">This best-in-class model gives enterprises and developers a production path for more efficient and accurate multimodal AI agents with full deployment flexibility and control. </span></p>
<p><span style="font-weight: 400;">Nemotron 3 Nano Omni sets a new efficiency frontier for open multimodal models with leading accuracy and low cost, <a target="_blank" href="https://developer.nvidia.com/blog/nvidia-nemotron-3-nano-omni-powers-multimodal-agent-reasoning-in-a-single-efficient-open-model">topping six leaderboards</a> for complex document intelligence, and video and audio understanding.</span></p>
<aside style="float: right; width: 320px; max-width: 100%; margin: 0.25rem 0 1.25rem 1.75rem; padding: 1.25rem 1.5rem 1.5rem; background: #fafafa; border-top: 3px solid #76b900; font-family: -apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,'Helvetica Neue',Arial,sans-serif; color: #1a1a1a; font-size: 15px; line-height: 1.55; box-sizing: border-box;">
<p style="margin: 0 0 0.875rem 0; font-size: 11px; letter-spacing: 0.12em; text-transform: uppercase; font-weight: bold; color: #76b900;">At a Glance</p>
<div style="padding: 0 0 0.75rem 0;">
<p style="margin: 0 0 0.2rem 0; font-size: 11px; letter-spacing: 0.06em; text-transform: uppercase; font-weight: 600; color: #6b6b6b;">What it is</p>
<p style="margin: 0; color: #1a1a1a;">An open, omni-modal reasoning model — the highest-efficiency open multimodal model of its kind with leading accuracy</p>
</div>
<div style="padding: 0.75rem 0; border-top: 1px solid #e5e5e5;">
<p style="margin: 0 0 0.2rem 0; font-size: 11px; letter-spacing: 0.06em; text-transform: uppercase; font-weight: 600; color: #6b6b6b;">What it handles</p>
<p style="margin: 0; color: #1a1a1a;">Text, images, audio, video, documents, charts and graphical interfaces (input); text (output)</p>
</div>
<div style="padding: 0.75rem 0; border-top: 1px solid #e5e5e5;">
<p style="margin: 0 0 0.2rem 0; font-size: 11px; letter-spacing: 0.06em; text-transform: uppercase; font-weight: 600; color: #6b6b6b;">Who it’s for</p>
<p style="margin: 0; color: #1a1a1a;">Enterprises and developers building fast, reliable agentic systems that need a multimodal perception sub-agent</p>
</div>
<div style="padding: 0.75rem 0; border-top: 1px solid #e5e5e5;">
<p style="margin: 0 0 0.2rem 0; font-size: 11px; letter-spacing: 0.06em; text-transform: uppercase; font-weight: 600; color: #6b6b6b;">How it works</p>
<p style="margin: 0; color: #1a1a1a;">Functions as the “eyes and ears” in a system of agents, working alongside models like Nemotron 3 Super and Ultra or other proprietary models</p>
</div>
<div style="padding: 0.75rem 0; border-top: 1px solid #e5e5e5;">
<p style="margin: 0 0 0.2rem 0; font-size: 11px; letter-spacing: 0.06em; text-transform: uppercase; font-weight: 600; color: #6b6b6b;">Why it matters</p>
<p style="margin: 0; color: #1a1a1a;">Leading multimodal accuracy and 9x higher throughput than other open omni models with the same interactivity, resulting in lower cost and better scalability without sacrificing responsiveness.</p>
</div>
<div style="padding: 0.75rem 0; border-top: 1px solid #e5e5e5;">
<p style="margin: 0 0 0.2rem 0; font-size: 11px; letter-spacing: 0.06em; text-transform: uppercase; font-weight: 600; color: #6b6b6b;">Architecture</p>
<p style="margin: 0; color: #1a1a1a;">30B-A3B hybrid MoE with Conv3D, EVS, 256K context</p>
</div>
<div style="padding: 0.75rem 0 0 0; border-top: 1px solid #e5e5e5;">
<p style="margin: 0 0 0.2rem 0; font-size: 11px; letter-spacing: 0.06em; text-transform: uppercase; font-weight: 600; color: #6b6b6b;">Availability</p>
<p style="margin: 0; color: #1a1a1a;">April 28, 2026, via Hugging Face, OpenRouter, build.nvidia.com and 25+ partner platforms</p>
</div>
</aside>
<p>AI and<span style="font-weight: 400;"> software companies already adopting Nemotron 3 Nano Omni include </span><a target="_blank" href="https://www.aible.com/nemotron3-nano-omni-ai-agent"><span style="font-weight: 400;">Aible</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://appliedscientific.ai/research/scientific-ai-literature-agent-nvidia-nemotron-nano-omni?utm_source=nvidia-blog"><span style="font-weight: 400;">Applied Scientific Intelligence (ASI)</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://info.eka.care/services/how-ekacare-is-building-agentic-multimodal-healthcare-for-india-scale-patient-care-with-nvidia-nemotron-3-nano-omni"><span style="font-weight: 400;">Eka Care</span></a><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Foxconn</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;"><a target="_blank" href="https://hcompany.ai/holotron3">H Company</a>, Palantir and </span><a target="_blank" href="https://pyler.tech/articles/scaling-trustworthy-video-safety-with-nvidia-nemotron-3-nano-omni"><span style="font-weight: 400;">Pyler</span></a><span style="font-weight: 400;">,</span><span style="font-weight: 400;"> with </span><span style="font-weight: 400;">Dell Technologies</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Docusign, Infosys, <a target="_blank" href="https://www.k-dense.ai/blog/nvidia-nemotron-nano-omni-multimodal-agentic-science">K-Dense</a>, Lila, Oracle </span><span style="font-weight: 400;">and </span><a target="_blank" href="https://zefr.com/press/zefr-evaluates-nvidia-nemotron-3-nano-omni-to-power-cognition-ai"><span style="font-weight: 400;">Zefr</span></a><span style="font-weight: 400;"> evaluating the model. </span></p>
<p><span style="font-weight: 400;">“To build useful agents, you can’t wait seconds for a model to interpret a screen,”</span> <span style="font-weight: 400;">said Gautier Cloix, CEO of H Company.</span> <span style="font-weight: 400;">“By building on Nemotron 3 Nano Omni, our agents can rapidly interpret full HD screen recordings — something that wasn’t practical before. This isn’t just a speed boost: It’s a fundamental shift in how our agents perceive and interact with digital environments in real time.”</span></p>
<h2><b>Nemotron 3 Nano Omni Enables Faster, Leaner Multimodal Agents</b></h2>
<p><span style="font-weight: 400;">Consider an AI agent for customer support processing a screen recording while analyzing uploaded call audio and checking data logs — or an agent for finance tasked with parsing PDFs, spreadsheets, charts and voice notes. Today, most agentic systems accomplish these tasks with separate models for vision, speech and language. </span></p>
<p><span style="font-weight: 400;">This approach increases latency through repeated inference passes, fragments context across modalities, and adds cost and inaccuracies over time.</span></p>
<p><span style="font-weight: 400;">By combining vision and audio encoders within its 30B-A3B hybrid </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/mixture-of-experts/"><span style="font-weight: 400;">mixture-of-experts</span></a><span style="font-weight: 400;"> architecture, Nemotron 3 Nano Omni eliminates the need for separate perception models, driving inference efficiency at scale. It pairs this efficiency with strong multimodal perception accuracy, enabling <a target="_blank" href="https://huggingface.co/blog/nvidia/nemotron-3-nano-omni-multimodal-inteligence">AI systems to achieve 9x higher throughput</a> than other open omni models with the same interactivity. The result is lower costs and better scalability without sacrificing responsiveness or quality.</span></p>
<p><span style="font-weight: 400;">In agentic systems, Nemotron 3 Nano Omni can work alongside proprietary cloud models or other NVIDIA Nemotron open models — such as Nemotron 3 Super for high-frequency execution or Nemotron 3 Ultra for complex planning — as well as proprietary models from other providers, to power sub-agents for agentic workflows such as computer use, document intelligence and audio-video reasoning.</span></p>
<ul>
<li><b>Computer use agents —</b><span style="font-weight: 400;"> Nemotron 3 Nano Omni powers the perception loop for agents navigating graphical user interfaces, reasoning over onscreen content and understanding user interface state over time. </span><span style="font-weight: 400;">H Company’s latest </span><a target="_blank" href="https://www.youtube.com/watch?v=kSi9JS2l0Ww"><span style="font-weight: 400;">computer-use agent</span></a><span style="font-weight: 400;">, powered by Nemotron 3 Nano Omni, uses a native input resolution of 1920&#215;1080 pixels to achieve high-fidelity visual reasoning. In preliminary evaluations on the OSWorld benchmark, this integration showed a significant leap in navigating complex graphical interfaces, drawing on Nemotron 3 Nano Omni’s ability to process very high-resolution images.</span></li>
<li><b>Document intelligence</b><span style="font-weight: 400;"> — Interprets documents, charts, tables, screenshots and mixed-media inputs, enabling agents to reason across visual structure and text content coherently. Critical for enterprise analysis and compliance workflows.</span></li>
<li><b>Audio and video understanding</b><span style="font-weight: 400;"> — For customer service, research and monitoring workflows, Nemotron 3 Nano Omni maintains audio-video context, tying what was said, shown and documented into a single reasoning stream instead of disconnected summaries.</span></li>
</ul>
<p><span style="font-weight: 400;"><img loading="lazy" decoding="async" class="aligncenter size-medium wp-image-92736" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemotron-3-nano-omni-graphic-960x260.jpg" alt="" width="960" height="260" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemotron-3-nano-omni-graphic-960x260.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/nemotron-3-nano-omni-graphic-630x171.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/nemotron-3-nano-omni-graphic.jpg 1026w" sizes="auto, (max-width: 960px) 100vw, 960px" /></span></p>
<h2><b>Open and Customizable, Deployable Anywhere</b></h2>
<p><span style="font-weight: 400;">Nemotron 3 Nano Omni is released with open weights, datasets and training techniques — giving organizations full transparency and control over how the model is customized and deployed. </span></p>
<p><span style="font-weight: 400;">Developers can use tools like </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/products/nemo/"><span style="font-weight: 400;">NVIDIA NeMo</span></a><span style="font-weight: 400;"> for customization, evaluation and optimization for domain-specific use cases. Because the Nemotron family of models is open, organizations can deploy them in environments that meet regulatory, sovereignty or data localization requirements.</span></p>
<p><span style="font-weight: 400;">The Nemotron 3 family — including Nano, Super and Ultra models — has seen over 50 million downloads in the past year. Omni extends the family’s capabilities into multimodal and agentic domains.</span></p>
<p><span style="font-weight: 400;">The model is available on </span><a target="_blank" href="https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16"><span style="font-weight: 400;">Hugging Face</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://openrouter.ai/nvidia/nemotron-3-nano-omni-30b-a3b-reasoning:free"><span style="font-weight: 400;">OpenRouter</span></a> <span style="font-weight: 400;">and </span><a target="_blank" href="https://build.nvidia.com/nvidia/nemotron-3-nano-omni-30b-a3b-reasoning"><span style="font-weight: 400;">build.nvidia.com</span></a><span style="font-weight: 400;"> as an NVIDIA NIM microservice and through a broad ecosystem of </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/gpu-cloud-computing/partners/"><span style="font-weight: 400;">NVIDIA Cloud Partners</span></a><span style="font-weight: 400;">, inference platforms</span><span style="font-weight: 400;"> and cloud service providers. </span></p>
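<p><span style="font-weight: 400;">For developers who want a feel for calling the hosted model, below is a minimal sketch of a multimodal request. It assumes the OpenAI-compatible chat interface that NIM microservices on build.nvidia.com typically expose; the model ID, endpoint URL and image URL shown are illustrative guesses inferred from the links above, so confirm the exact values in the catalog before use.</span></p>

```python
import os

# Assumed model ID and endpoint, inferred from the build.nvidia.com listing;
# verify both in the NIM catalog before relying on them.
MODEL_ID = "nvidia/nemotron-3-nano-omni-30b-a3b-reasoning"
BASE_URL = "https://integrate.api.nvidia.com/v1"


def build_messages(question: str, image_url: str) -> list[dict]:
    """Assemble an OpenAI-style multimodal chat payload:
    one user turn mixing a text prompt with an image reference."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]


# Only attempt the network call when an API key is configured.
if __name__ == "__main__" and os.environ.get("NVIDIA_API_KEY"):
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url=BASE_URL, api_key=os.environ["NVIDIA_API_KEY"])
    resp = client.chat.completions.create(
        model=MODEL_ID,
        messages=build_messages(
            "Summarize the chart in this screenshot.",
            "https://example.com/dashboard.png",  # hypothetical input image
        ),
    )
    print(resp.choices[0].message.content)
```

<p><span style="font-weight: 400;">The same message shape extends to audio and video inputs where the serving stack supports them, which is what lets a single endpoint stand in for separate vision, speech and language services.</span></p>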
<p><span style="font-weight: 400;">Its open, lightweight architecture supports consistent deployment from local systems like <a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/">NVIDIA Jetson</a> hardware, </span><a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/dgx-spark/"><span style="font-weight: 400;">NVIDIA DGX Spark</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/dgx-station/"><span style="font-weight: 400;">DGX Station</span></a><span style="font-weight: 400;"> to data center and cloud environments. </span></p>
<p><i><span style="font-weight: 400;">Visit the NVIDIA technical blog for </span></i><a target="_blank" href="https://developer.nvidia.com/blog/nvidia-nemotron-3-nano-omni-powers-multimodal-agent-reasoning-in-a-single-efficient-open-model"><i><span style="font-weight: 400;">tutorials, cookbooks and deployment guides</span></i></a><i><span style="font-weight: 400;"> for Nemotron 3 Nano Omni use cases. Stay up to date on agentic AI, </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><i><span style="font-weight: 400;">NVIDIA Nemotron</span></i></a><i><span style="font-weight: 400;"> and more by subscribing to </span></i><a target="_blank" href="https://www.nvidia.com/en-us/executive-insights/generative-ai-tools/?modal=stay-inf"><i><span style="font-weight: 400;">NVIDIA news</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://developer.nvidia.com/community"><i><span style="font-weight: 400;">joining the community</span></i></a><i><span style="font-weight: 400;"> and following NVIDIA AI on </span></i><a target="_blank" href="https://www.linkedin.com/showcase/nvidia-ai/posts/?feedView=all"><i><span style="font-weight: 400;">LinkedIn</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://www.instagram.com/nvidiaai/?hl=en"><i><span style="font-weight: 400;">Instagram</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://x.com/NVIDIAAIDev"><i><span style="font-weight: 400;">X</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://www.facebook.com/NVIDIAAI"><i><span style="font-weight: 400;">Facebook</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
<p><i><span style="font-weight: 400;">Explore </span></i><a target="_blank" href="https://youtube.com/playlist?list=PL5B692fm6--vdRKB14FImVi7MTJ77zjn4&amp;feature=shared"><i><span style="font-weight: 400;">self-paced video tutorials and livestreams</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemotron-3-nano-omni-featured-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemotron-3-nano-omni-featured-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA Launches Nemotron 3 Nano Omni Model, Unifying Vision, Audio and Language for up to 9x More Efficient AI Agents]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Into the Omniverse: Manufacturing’s Simulation-First Era Has Arrived</title>
		<link>https://blogs.nvidia.com/blog/manufacturing-simulation-first/</link>
		
		<dc:creator><![CDATA[Bhoomi Gadhia]]></dc:creator>
		<pubDate>Tue, 28 Apr 2026 13:00:33 +0000</pubDate>
				<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cosmos]]></category>
		<category><![CDATA[Digital Twin]]></category>
		<category><![CDATA[Industrial and Manufacturing]]></category>
		<category><![CDATA[Into the Omniverse]]></category>
		<category><![CDATA[Metropolis]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Physical AI]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<category><![CDATA[Smart Spaces]]></category>
		<category><![CDATA[Universal Scene Description]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92723</guid>

					<description><![CDATA[Manufacturing’s traditional design-build-test cycle rested on a single assumption: Real-world testing was the only reliable test environment. ]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><i><span style="font-weight: 400;">Editor’s note: This post is part of </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/news/"><i><span style="font-weight: 400;">Into the Omniverse</span></i></a><i><span style="font-weight: 400;">, a series focused on how developers, 3D practitioners, and enterprises can transform their workflows using the latest advances in </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/"><i><span style="font-weight: 400;">OpenUSD</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/"><i><span style="font-weight: 400;">NVIDIA Omniverse</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
<p><span style="font-weight: 400;">Manufacturing’s traditional design-build-test cycle rested on a single assumption: Real-world testing was the only reliable test environment. </span></p>
<p><span style="font-weight: 400;">That assumption is now shifting. </span></p>
<p><span style="font-weight: 400;">Today, high-fidelity simulation produces synthetic training data accurate enough for production-grade AI. This is enabling perception systems, reasoning models and agentic workflows to excel in live factory environments.</span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/"><span style="font-weight: 400;">OpenUSD</span></a><span style="font-weight: 400;"> has emerged as the connective standard that makes this practical, and the manufacturers building on it are already experiencing measurable results. </span></p>
<h2><strong>SimReady: The Content Standard for Physical AI </strong></h2>
<p><span style="font-weight: 400;">As </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/generative-physical-ai/"><span style="font-weight: 400;">physical AI</span></a><span style="font-weight: 400;"> becomes integral to industrial operations, manufacturers face a foundational challenge: Assets don’t travel reliably between 3D pipelines. Every time an asset moves from a computer-aided design tool to a simulation platform, physics properties, geometry and metadata are lost — forcing teams to rebuild from scratch.</span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/glossary/simready/"><span style="font-weight: 400;">SimReady</span></a><span style="font-weight: 400;"> is the content standard, built on OpenUSD, that defines what physically accurate 3D assets must contain to work reliably across rendering, simulation and AI training pipelines. </span></p>
<p><span style="font-weight: 400;">In addition, </span><a target="_blank" href="https://developer.nvidia.com/omniverse?size=n_12_n&amp;sort-field=featured&amp;sort-direction=desc"><span style="font-weight: 400;">NVIDIA Omniverse libraries</span></a><span style="font-weight: 400;"> provide the physics-accurate, photorealistic simulation layer where AI models are trained and validated before deployment. </span></p>
<h2><strong>Four Ways Manufacturers Are Putting the NVIDIA Physical AI Stack to Work</strong></h2>
<h3><b>ABB Robotics Closes the Sim-to-Real Gap at 99% Accuracy</b></h3>
<p><span style="font-weight: 400;">ABB Robotics has integrated NVIDIA Omniverse libraries directly into RobotStudio HyperReality, its simulation platform used by more than 60,000 engineers globally. </span></p>
<p><span style="font-weight: 400;">The platform represents robot stations as USD files running the same firmware as their physical counterparts, making it possible to train robots, test part tolerances and validate AI models before a production line exists. </span></p>
<p><span style="font-weight: 400;">Synthetic training variations — such as lighting conditions and geometry differences — can be generated at scale, covering scenarios that would be impractical to replicate manually. </span></p>
<p><span style="font-weight: 400;">“We’ve managed to vertically integrate the complete technology stack and optimize it to a point where we’re now achieving 99% accuracy on the simulated version,” said Craig McDonnell, managing director of business line industries at ABB Robotics.</span></p>
<p><span style="font-weight: 400;">The downstream outcomes: up to 50% reduction in product introduction cycles, up to 80% reduction in commissioning time and a 30-40% reduction in total equipment lifecycle cost.</span></p>
<h3><b>JLR Compresses Four Hours of Aerodynamic Simulation to One Minute</b></h3>
<p><span style="font-weight: 400;">JLR applied the same simulation-first principle to vehicle aerodynamics. Engineers trained neural surrogate models on more than 20,000 wind-tunnel-correlated </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/computational-fluid-dynamics-simulation/"><span style="font-weight: 400;">computational fluid dynamics simulations</span></a><span style="font-weight: 400;"> across the vehicle portfolio — with 95% of aero-thermal workloads now running on NVIDIA GPUs. </span></p>
<p><span style="font-weight: 400;">The Neural Concept Design Lab — built on Omniverse and deployed at JLR — visualizes aerodynamic changes in real time as designers adjust vehicle geometry, collapsing what was a sequential design-then-simulate cycle into a continuous loop. A result that once took four hours now takes one minute. </span></p>
<p><iframe loading="lazy" title="How AI is Transforming Manufacturing End-to-End" width="1200" height="675" src="https://www.youtube.com/embed/D8wSXABcW-A?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h3><b>Tulip Brings Real-Time Factory Intelligence to Terex for Operational Gains</b></h3>
<p><span style="font-weight: 400;">Once a factory goes into production, a different intelligence challenge begins — one that simulation alone can’t address. </span></p>
<p><span style="font-weight: 400;">Tulip Interfaces’ </span><a target="_blank" href="https://tulip.co/press/tulip-announces-factory-playback-nvidia/"><span style="font-weight: 400;">Factory Playback</span></a><span style="font-weight: 400;"> platform demonstrates how existing infrastructure can become an intelligence layer, turning operations records into something users can actually learn from. Tulip built Factory Playback on the </span><a target="_blank" href="https://build.nvidia.com/nvidia/video-search-and-summarization/blueprintcard"><span style="font-weight: 400;">NVIDIA Metropolis VSS Blueprint</span></a><span style="font-weight: 400;"> — a reference architecture for extracting structured intelligence from factory camera feeds — connecting camera streams, machine sensor data and operational context into a unified timeline of what actually happened. </span></p>
<p><span style="font-weight: 400;">In addition, Factory Playback uses the </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400;">NVIDIA Cosmos Reason</span></a> <a target="_blank" href="https://www.nvidia.com/en-us/glossary/vision-language-models/"><span style="font-weight: 400;">vision language model</span></a><span style="font-weight: 400;"> to interpret camera streams and operator behaviors in real time, running on premises on NVIDIA GPUs.</span></p>
<p><span style="font-weight: 400;">Deployed at Terex, a global industrial equipment manufacturer with over 40 plants, the system is expected to deliver a 3% increase in yield and 10% reduction in rework. </span></p>
<p><span style="font-weight: 400;">“I am excited to see what manufacturers will do with the power of AI to augment their daily capabilities,” said Rony Kubat, cofounder and chief information officer of Tulip Interfaces. </span></p>
<h2><b>Getting Started</b></h2>
<p><span style="font-weight: 400;">SimReady assets, Omniverse libraries and NVIDIA’s physical AI stack provide a foundation developers can adopt, extend and combine across any industrial application. Here’s how to get started:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">See how NVIDIA and partners put physical AI to work on the factory floor at </span><a href="https://blogs.nvidia.com/blog/ai-manufacturing-hannover-messe"><span style="font-weight: 400;">Hannover Messe</span></a><span style="font-weight: 400;">.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Start building autonomous robots, digital twins and AI-powered systems with these </span><a target="_blank" href="https://docs.nvidia.com/learning/physical-ai/"><span style="font-weight: 400;">free, self-paced courses</span></a><span style="font-weight: 400;">.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Explore NVIDIA Isaac Sim and Omniverse libraries on the </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA developer portal</span></a><span style="font-weight: 400;">.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Deploy the </span><a href="https://blogs.nvidia.com/blog/ai-blueprint-video-search-and-summarization/"><span style="font-weight: 400;">NVIDIA Metropolis VSS Blueprint on existing camera infrastructure</span></a><span style="font-weight: 400;"> to gain new insights from the shop floor. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Explore the SimReady Foundation specification framework on </span><a target="_blank" href="https://github.com/nvidia/simready-foundation"><span style="font-weight: 400;">GitHub</span></a><span style="font-weight: 400;">.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Browse </span><a href="https://blogs.nvidia.com/blog/cosmos-world-foundation-models/"><span style="font-weight: 400;">NVIDIA Cosmos Cookbook recipes</span></a><span style="font-weight: 400;"> for domain-specific physical AI applications across robotics, simulation and autonomous systems.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Access the full </span><a target="_blank" href="https://developer.nvidia.com/omniverse"><span style="font-weight: 400;">Omniverse developer hub</span></a><span style="font-weight: 400;">.</span></li>
<li style="font-weight: 400;" aria-level="1"><a target="_blank" href="https://discord.com/invite/nvidiaomniverse"><span style="font-weight: 400;">Join the community</span></a><span style="font-weight: 400;"> to connect with fellow developers and innovators who are building the future with NVIDIA technologies.</span></li>
</ul>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/april-ito-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/april-ito-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Into the Omniverse: Manufacturing’s Simulation-First Era Has Arrived]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>OpenAI’s New GPT-5.5 Powers Codex on NVIDIA Infrastructure — and NVIDIA Is Already Putting It to Work</title>
		<link>https://blogs.nvidia.com/blog/openai-codex-gpt-5-5-ai-agents/</link>
		
		<dc:creator><![CDATA[Justin Boitano]]></dc:creator>
		<pubDate>Thu, 23 Apr 2026 18:57:55 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[NVIDIA Blackwell]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92703</guid>

					<description><![CDATA[AI agents have revolutionized developer workflows, and their next frontier is knowledge work: processing information, solving complex problems, coming up with new ideas and driving innovation.  Codex, OpenAI’s agentic coding application, is enabling this new frontier. It’s now powered by GPT-5.5, OpenAI’s latest frontier model, which runs on NVIDIA GB200 NVL72 rack-scale systems.  Over 10,000 [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">AI agents have revolutionized developer workflows, and their next frontier is knowledge work: processing information, solving complex problems, coming up with new ideas and driving innovation. </span></p>
<p><span style="font-weight: 400;">Codex, OpenAI’s agentic coding application, is enabling this new frontier. It’s now powered by GPT-5.5, OpenAI’s latest frontier model, which runs on NVIDIA GB200 NVL72 rack-scale systems. </span></p>
<p><span style="font-weight: 400;">Over 10,000 NVIDIANs — across engineering, product, legal, marketing, finance, sales, HR, operations and developer programs — are already using GPT-5.5-powered Codex to achieve, in their words, “mind-blowing” and “life-changing” results. </span></p>
<p><span style="font-weight: 400;">NVIDIA engineers have had access to GPT-5.5 through the Codex app for a few weeks, and the gains are measurable. The model is served on GB200 NVL72, which is capable of delivering 35x lower cost per million tokens and 50x higher token output per second per megawatt compared with prior-generation systems — </span><a href="https://blogs.nvidia.com/blog/lowest-token-cost-ai-factories/"><span style="font-weight: 400;">economics</span></a><span style="font-weight: 400;"> that make frontier-model inference viable at enterprise scale.</span></p>
<p><span style="font-weight: 400;">Debugging cycles that once stretched across days are closing in hours. Experimentation that previously required weeks is turning into overnight progress in complex, multi-file codebases. Teams are shipping end-to-end features from natural-language prompts, with stronger reliability and fewer wasted cycles than earlier models. </span></p>
<p><span style="font-weight: 400;">OpenAI’s stunning progress is just the latest example of NVIDIA’s work with every frontier model company — not just to accelerate the use of AI agents inside NVIDIA, but to help its partners build the world’s best, lowest-cost and most power-efficient models for everyone.</span></p>
<p><span style="font-weight: 400;">As NVIDIA founder and CEO Jensen Huang told employees in a company-wide email urging everyone to use Codex: “Let’s jump to lightspeed. Welcome to the age of AI.”</span></p>
<h2><b>A Deployment Built for Enterprise Security </b></h2>
<p><span style="font-weight: 400;">Just like humans, every agent needs its own dedicated computer. </span></p>
<p><span style="font-weight: 400;">To ensure seamless operation within secure enterprise environments, </span><span style="font-weight: 400;">the Codex app supports remote Secure Shell (SSH) connections to approved cloud virtual machines, allowing agents to work with real company data without exposing it externally. </span></p>
<p><span style="font-weight: 400;">To provide that security and auditability, NVIDIA IT rolled out cloud virtual machines (VMs) for every employee to run their agent safely. Each VM gives the agent a dedicated sandbox where it can operate at maximum capability while remaining fully auditable. Users control the Codex agent running in the cloud VM from a user interface that every employee is familiar with.</span></p>
<p><span style="font-weight: 400;">A zero-data retention policy governs NVIDIA’s deployment, and agents access production systems with read-only permissions through command-line interfaces and Skills — the same agentic toolkit NVIDIA uses to run automation workflows across the company.</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-92707" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-1680x945.jpg" alt="" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-630x355.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GPT55-Codex-Launch_v1-1-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p>
<h2><b>A Decade of Full-Stack Collaboration</b></h2>
<p><span style="font-weight: 400;">The GPT-5.5 launch and the Codex rollout reflect more than 10 years of collaboration between NVIDIA and OpenAI. The partnership began in 2016, when Huang hand-delivered the first NVIDIA DGX-1 AI supercomputer to OpenAI’s San Francisco headquarters.</span></p>
<p><span style="font-weight: 400;">Since then, the two companies have worked closely across the full AI stack. </span></p>
<p><span style="font-weight: 400;">NVIDIA was a day-zero partner for OpenAI’s gpt-oss open-weight model launch, optimizing model weights for NVIDIA TensorRT-LLM and ecosystem frameworks including vLLM and Ollama. </span></p>
<p><span style="font-weight: 400;">OpenAI has committed to deploying more than 10 gigawatts of NVIDIA systems for its next-generation AI infrastructure — a buildout that will put millions of NVIDIA GPUs at the foundation of OpenAI’s model training and inference for years ahead.</span></p>
<p><span style="font-weight: 400;">And OpenAI and NVIDIA are early silicon and codesign partners: OpenAI provides feedback that informs NVIDIA’s hardware roadmap, and in turn gains early access to new architectures. That relationship produced a concrete milestone — the joint bring-up of the first GB200 NVL72 100,000-GPU cluster. The cluster completed multiple large-scale training runs and set a new benchmark for system-level reliability at frontier scale.</span></p>
<p><span style="font-weight: 400;">GPT-5.5 is the product of that infrastructure running at full strength. </span></p>
<p><i><span style="font-weight: 400;">Learn more in </span></i><a target="_blank" href="https://openai.com/index/introducing-gpt-5-5/"><i><span style="font-weight: 400;">OpenAI’s announcement</span></i></a><i><span style="font-weight: 400;">. </span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/logo-lockup-codex-tech-blog-v-1920x1080-5175350.png" type="image/png" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/logo-lockup-codex-tech-blog-v-1920x1080-5175350-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[OpenAI’s New GPT-5.5 Powers Codex on NVIDIA Infrastructure — and NVIDIA Is Already Putting It to Work]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Tag, You’re It: GeForce NOW Levels Up Game Discovery With Xbox Game Pass and Ubisoft+ Labels</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-in-app-labels/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 23 Apr 2026 13:00:51 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92667</guid>

					<description><![CDATA[GeForce NOW is doubling down on what matters most: gamers. This week’s upgrades bring smarter libraries, making it easier than ever for gamers to turn a PC collection into a cloud-powered flex. It starts with giving existing libraries time to shine. Gamers can bring the games they love to the cloud, stream them with high [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/"><span style="font-weight: 400">GeForce NOW</span></a><span style="font-weight: 400"> is doubling down on what matters most: gamers. This week’s upgrades bring smarter libraries, making it easier than ever for gamers to turn a PC collection into a cloud-powered flex.</span></p>
<p><span style="font-weight: 400">It starts with giving existing libraries time to shine. Gamers can bring the games they love to the cloud, stream them with high performance and see the value of a GeForce NOW membership grow with new games, rewards and features.</span></p>
<p><span style="font-weight: 400">First up, finding something to play gets an upgrade. The new in-app labels, first </span><a href="https://blogs.nvidia.com/blog/geforce-now-thursday-gdc-2026/"><span style="font-weight: 400">announced at GDC</span></a><span style="font-weight: 400">, are now live — making it simple to spot titles and new releases from connected subscription services like Xbox Game Pass and Ubisoft+.</span></p>
<p><span style="font-weight: 400">All that sets the stage for </span><span style="font-weight: 400">six </span><span style="font-weight: 400">new games rolling onto the cloud this week. Leading the charge is </span><i><span style="font-weight: 400">Vampire Crawlers: The Turbo Wildcard</span></i><span style="font-weight: 400">, a chaotic new spin on the fast, unpredictable and packed-with-personality</span><i><span style="font-weight: 400"> Vampire Survivors</span></i><span style="font-weight: 400"> universe. Even the vampires can’t avoid the spotlight this time.</span></p>
<p><span style="font-weight: 400">Round it out with a thunderous new </span><i><span style="font-weight: 400">Marvel Rivals</span></i><span style="font-weight: 400"> skin for Thor, and it’s a lineup fitted to perfection.</span></p>
<h2><b>Just Play It</b></h2>
<p><span style="font-weight: 400">GeForce NOW lets members connect their existing gaming subscriptions to bring more of their games with them on the go, across devices. Today, keeping track of which games can be streamed just got even easier. New in‑app game labels on GeForce NOW make it easy to see which titles are part of Xbox Game Pass or Ubisoft+ game libraries once accounts are connected.</span></p>
<figure id="attachment_92672" aria-describedby="caption-attachment-92672" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92672" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/choose_gamestore94-1680x1003.png" alt="in-app labels on geforce now" width="1200" height="716" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/choose_gamestore94-1680x1003.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/choose_gamestore94-960x573.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/choose_gamestore94-1280x764.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/choose_gamestore94-1536x917.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/choose_gamestore94-scaled.png 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/choose_gamestore94-630x376.png 630w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92672" class="wp-caption-text"><em>Tagged and ready.</em></figcaption></figure>
<p><span style="font-weight: 400">Clear labels now appear directly on each game’s details — eliminating guesswork and making it simple to see exactly what’s available to play instantly from connected subscriptions. Stream hundreds of NVIDIA RTX-powered titles with a single click.</span></p>
<p><span style="font-weight: 400">More clarity, less searching — and one more reason the cloud gaming library has never looked better.</span></p>
<h2><b>Feeding Time</b></h2>
<figure id="attachment_92675" aria-describedby="caption-attachment-92675" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92675" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Vampire_Crawlers_The_Turbo_Wildcard-1680x840.jpg" alt="Vampire Survivors on GeForce NOW" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Vampire_Crawlers_The_Turbo_Wildcard-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Vampire_Crawlers_The_Turbo_Wildcard-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Vampire_Crawlers_The_Turbo_Wildcard-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Vampire_Crawlers_The_Turbo_Wildcard-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Vampire_Crawlers_The_Turbo_Wildcard-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Vampire_Crawlers_The_Turbo_Wildcard.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92675" class="wp-caption-text"><em>Make sure to bite hard.</em></figcaption></figure>
<p><span style="font-weight: 400">Get fired up for the feeding frenzy — </span><i><span style="font-weight: 400">Vampire Crawlers: The Turbo Wildcard</span></i><span style="font-weight: 400"> is a roguelite horde escape game that stacks absurd power‑ups and turns every run into a highlight reel of close calls. Stay mobile, grab upgrades and try not to become the snack.</span></p>
<p><span style="font-weight: 400">Every build becomes a delightful walking disaster, with weapons, perks and abilities combining into wild synergies that can turn a run from barely surviving to absolutely steamrolling in just a few upgrades. Dense enemy waves, screen‑filling attacks and unpredictable upgrade paths keep each attempt fresh — and dangerously prone to “one more run.”</span></p>
<p><span style="font-weight: 400">On GeForce NOW, all that chaos stays razor sharp, with smooth performance for dodge‑heavy moments, crisp effects even when the screen is packed and the ability to jump back into the madness from any supported device.</span></p>
<h2><b>Thunder Drip</b></h2>
<figure id="attachment_92678" aria-describedby="caption-attachment-92678" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92678" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Marvel_Rivals_Thor_reward-1680x840.jpg" alt="Marvel Rivals reward on GeForce NOW" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Marvel_Rivals_Thor_reward-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Marvel_Rivals_Thor_reward-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Marvel_Rivals_Thor_reward-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Marvel_Rivals_Thor_reward-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Marvel_Rivals_Thor_reward-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Marvel_Rivals_Thor_reward.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92678" class="wp-caption-text"><em>Thor is striking poses again.</em></figcaption></figure>
<p><span style="font-weight: 400">The God of Thunder is trading his classic red cape for an umber‑toned, battle‑ready look that can only mean one thing: a serious style upgrade. GeForce NOW Premium members get first access to the Thor Midgard Umber Skin in </span><i><span style="font-weight: 400">Marvel Rivals </span></i><span style="font-weight: 400">starting today, and those on the free tier can grab it starting Friday, April 24, bringing stormy flair to the battlefield. </span></p>
<p><span style="font-weight: 400">Claim the reward by Saturday, May 23 — or before Odin hoards them all. Redeem through the GeForce NOW account portal and enter the code in game on Steam. Then stride forth, majestic and moody, because this look is eternal — at least until the next mythic crossover.</span></p>
<h2><b>Time for Adventures</b></h2>
<p><span style="font-weight: 400">Members can look for the following new games this week:</span></p>
<ul>
<li><i><span style="font-weight: 400">Vampire Crawlers: The Turbo Wildcard from Vampire Survivors </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3265700?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/vampire-crawlers-the-turbo-wildcard-from-vampire-survivors/9nmpvj7tcfd0?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass, April 21)</span></li>
<li><i><span style="font-weight: 400">Tides of Tomorrow </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2678080?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 22, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">‘83</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/1059220?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 23, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Diablo III </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi="><span style="font-weight: 400">Ubisoft Connect</span></a><span style="font-weight: 400">, April 23)</span></li>
<li><i><span style="font-weight: 400">Crimson Desert </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://www.xbox.com/games/store/crimson-desert/9ndmb2smq5q7?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available to play via </span><a target="_blank" href="https://www.xbox.com/games/xbox-play-anywhere"><span style="font-weight: 400">Xbox Play Anywhere</span></a><span style="font-weight: 400">, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">MapleStory M</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/3969080?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
</ul>
<p><span style="font-weight: 400">What are you planning to play this weekend? Let us know on </span><a target="_blank" href="https://www.twitter.com/nvidiagfn"><span style="font-weight: 400">X</span></a><span style="font-weight: 400"> or in the comments below.</span></p>
<blockquote class="twitter-tweet" data-width="550" data-dnt="true">
<p lang="en" dir="ltr"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f30c.png" alt="🌌" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Settled Systems<br /><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f3dc.png" alt="🏜" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Pandora<br /><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2623.png" alt="☣" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Raccoon City</p>
<p> Thanks for taking us on a trip through the cloud <a target="_blank" href="https://twitter.com/airiesummer?ref_src=twsrc%5Etfw">@airiesummer</a> <a target="_blank" href="https://twitter.com/hashtag/GFNShare?src=hash&amp;ref_src=twsrc%5Etfw">#GFNShare</a> <a target="_blank" href="https://t.co/4XKgnCPl6J">pic.twitter.com/4XKgnCPl6J</a></p>
<p>&mdash; <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f329.png" alt="🌩" class="wp-smiley" style="height: 1em; max-height: 1em;" /> NVIDIA GeForce NOW (@NVIDIAGFN) <a target="_blank" href="https://twitter.com/NVIDIAGFN/status/2046619974408483075?ref_src=twsrc%5Etfw">April 21, 2026</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-feature-launch-library-nv-blog-1280x680-logo.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-feature-launch-library-nv-blog-1280x680-logo-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Tag, You’re It: GeForce NOW Levels Up Game Discovery With Xbox Game Pass and Ubisoft+ Labels]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Making Sense of the Early Universe</title>
		<link>https://blogs.nvidia.com/blog/ai-gpu-early-universe-astronomy/</link>
		
		<dc:creator><![CDATA[Brian Caulfield]]></dc:creator>
		<pubDate>Thu, 23 Apr 2026 13:00:22 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[AI for Good]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Computer Vision]]></category>
		<category><![CDATA[Education]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Scientific Visualization]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92391</guid>

					<description><![CDATA[This Spring Astronomy Day, here’s a look at how AI and GPUs are helping astronomers work through unprecedented volumes of cosmic data.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>text goes here</p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/AstroUCSC_IntroPoster_v2.png" type="image/png" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/AstroUCSC_IntroPoster_v2-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[Making Sense of the Early Universe]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>From Rainforests to Recycling Plants: 5 Ways NVIDIA AI Is Protecting the Planet</title>
		<link>https://blogs.nvidia.com/blog/earth-day-2026-ai-accelerated-computing/</link>
		
		<dc:creator><![CDATA[NVIDIA Writers]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 13:00:42 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI for Good]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Climate]]></category>
		<category><![CDATA[Inception]]></category>
		<category><![CDATA[Isaac]]></category>
		<category><![CDATA[NVIDIA Earth-2]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Physical AI]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Scientific Visualization]]></category>
		<category><![CDATA[TensorRT]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92462</guid>

					<description><![CDATA[Across climate, conservation, disaster monitoring and recycling, NVIDIA AI is enabling applications protecting the planet.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/Earth-2_thumbnail.png" type="image/png" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/Earth-2_thumbnail-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[From Rainforests to Recycling Plants: 5 Ways NVIDIA AI Is Protecting the Planet]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA and Google Cloud Collaborate to Advance Agentic and Physical AI</title>
		<link>https://blogs.nvidia.com/blog/google-cloud-agentic-physical-ai-factories/</link>
		
		<dc:creator><![CDATA[Ian Buck]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 12:00:42 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cloud Services]]></category>
		<category><![CDATA[Cosmos]]></category>
		<category><![CDATA[Inception]]></category>
		<category><![CDATA[Isaac]]></category>
		<category><![CDATA[Nemotron]]></category>
		<category><![CDATA[NVIDIA Blackwell]]></category>
		<category><![CDATA[NVIDIA NeMo]]></category>
		<category><![CDATA[NVIDIA Rubin]]></category>
		<category><![CDATA[NVLink]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Physical AI]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92454</guid>

					<description><![CDATA[NVIDIA and Google Cloud have collaborated for more than a decade, co‑engineering a full‑stack AI platform that spans every technology layer — from performance‑optimized libraries and frameworks to enterprise‑grade cloud services.  This foundation enables developers, startups and enterprises to push agentic and physical AI out of the lab and into production — from agents that [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">NVIDIA and Google Cloud have collaborated for more than a decade, co‑engineering a full‑stack AI platform that spans every technology layer — from performance‑optimized libraries and frameworks to enterprise‑grade cloud services. </span></p>
<p><span style="font-weight: 400;">This foundation enables developers, startups and enterprises to push agentic and physical AI out of the lab and into production — from agents that manage complex workflows to robots and digital twins on the factory floor.</span></p>
<p><span style="font-weight: 400;">At Google Cloud Next this week in Las Vegas, the partnership reaches a new milestone, with advancements to expand Google Cloud AI Hypercomputer for AI factories that will power the next frontier of agentic and physical AI.</span></p>
<p><span style="font-weight: 400;">These include the new </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/technologies/rubin/"><span style="font-weight: 400;">NVIDIA Vera Rubin</span></a><span style="font-weight: 400;">-powered A5X bare-metal instances; a </span><span style="font-weight: 400;">preview </span><span style="font-weight: 400;">of Google Gemini on Google Distributed Cloud running on </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/"><span style="font-weight: 400;">NVIDIA Blackwell</span></a><span style="font-weight: 400;"> and NVIDIA Blackwell Ultra GPUs; confidential VMs with NVIDIA Blackwell GPUs; and agentic AI on Gemini Enterprise Agent Platform with </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400;">NVIDIA Nemotron</span></a><span style="font-weight: 400;"> open models and the </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/products/nemo/"><span style="font-weight: 400;">NVIDIA NeMo</span></a><span style="font-weight: 400;"> framework.</span></p>
<h2><b>Next-Generation Infrastructure: From NVIDIA Blackwell to Vera Rubin</b></h2>
<p><span style="font-weight: 400;">At Google Cloud Next, Google announced A5X powered by </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/vera-rubin-nvl72/"><span style="font-weight: 400;">NVIDIA Vera Rubin NVL72</span></a><span style="font-weight: 400;"> rack-scale systems, which — through extreme codesign across chips, systems and software — deliver up to 10x lower inference cost per token and 10x higher token throughput per megawatt than the prior generation. </span></p>
<p><span style="font-weight: 400;">A5X will use </span><a target="_blank" href="https://www.nvidia.com/en-us/networking/products/ethernet/supernic/"><span style="font-weight: 400;">NVIDIA ConnectX-9 SuperNICs</span></a><span style="font-weight: 400;">, combined with next-generation Google Virgo networking, scaling up to </span><span style="font-weight: 400;">80,000</span><span style="font-weight: 400;"> NVIDIA Rubin GPUs within a single site cluster and up to </span><span style="font-weight: 400;">960,000</span><span style="font-weight: 400;"> NVIDIA Rubin GPUs in a multisite cluster, enabling customers to run their largest AI workloads on NVIDIA‑optimized infrastructure.</span></p>
<p><span style="font-weight: 400;">“At Google Cloud, we believe the next decade of AI will be shaped by customers’ ability to run their most demanding workloads on a truly integrated, AI‑optimized infrastructure stack,” said </span><span style="font-weight: 400;">Mark Lohmeyer, vice president and general manager of AI and computing infrastructure at Google Cloud.</span> <span style="font-weight: 400;">“By combining Google Cloud’s scalable infrastructure and managed AI services with NVIDIA’s industry‑leading platforms, systems and software, we’re giving customers flexibility to train, tune and serve everything from frontier and open models to agentic and physical AI workloads — while optimizing for performance, cost and sustainability.”</span></p>
<p><span style="font-weight: 400;">Google Cloud’s broad NVIDIA Blackwell portfolio ranges from A4 VMs with NVIDIA HGX B200 systems to rack-scale A4X VMs with NVIDIA GB200 NVL72 and A4X Max NVIDIA GB300 NVL72 systems, all the way to </span><a target="_blank" href="https://cloud.google.com/blog/products/compute/google-cloud-ai-infrastructure-at-nvidia-gtc-2026#:~:text=of%20Engineering%2C%20Imgix-,Introducing%20fractional%20G4%20VMs%C2%A0,-We%20are%20excited"><span style="font-weight: 400;">fractional G4 VMs</span></a><span style="font-weight: 400;"> with </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/"><span style="font-weight: 400;">NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs</span></a><span style="font-weight: 400;">. </span></p>
<p><span style="font-weight: 400;">Customers can right-size their acceleration capabilities, whether using multiple interconnected NVL72 racks that scale out to tens of thousands of NVIDIA Blackwell GPUs, a single rack that can scale up to 72 Blackwell GPUs with fifth-generation NVIDIA NVLink and NVLink 5 Switch, or just one-eighth of a GPU. </span></p>
<p><span style="font-weight: 400;">This comprehensive platform helps teams optimize every workload, from mixture-of-experts reasoning, multimodal inference and data processing to complex simulations for the next frontier of physical AI and robotics.</span></p>
<p><span style="font-weight: 400;">Leading frontier AI labs are already putting this infrastructure to work. </span><a target="_blank" href="https://www.googlecloudpresscorner.com/2026-04-22-Thinking-Machines-Expands-Use-of-NVIDIA-GPUs-through-Google-Cloud"><span style="font-weight: 400;">Thinking Machines Lab</span></a><span style="font-weight: 400;"> is scaling its Tinker application programming interface (API) on A4X Max VMs with GB300 NVL72 systems to accelerate training, while </span><a target="_blank" href="https://www.googlecloudevents.com/next-vegas/session-library?session_id=3912935&amp;name=how-openai-builds-kubernetes-gpu-clusters&amp;_gl=1*sm49ye*_up*MQ..&amp;gclid=CjwKCAjwyYPOBhBxEiwAgpT8P0jnAu2Y19tTXTZkTgC8O2I0zSOvzUi_KoaLLGWJmbIWkZShLG4MVhoCjPYQAvD_BwE&amp;gclsrc=aw.ds&amp;gbraid=0AAAAApdQcwelAABlHDeA_C2gSApVjSwSs"><span style="font-weight: 400;">OpenAI</span></a><span style="font-weight: 400;"> is running large‑scale inference on NVIDIA GB300 (A4X Max VMs) and GB200 NVL72 systems (A4X VMs) on Google Cloud for some of its most demanding inference workloads, including for ChatGPT. </span></p>
<h2><b>Secure AI Wherever It Needs to Run: Sovereign and Confidential</b></h2>
<p><span style="font-weight: 400;">Google Gemini models running on NVIDIA Blackwell and Blackwell Ultra GPUs are now </span><a target="_blank" href="https://cloud.google.com/blog/topics/hybrid-cloud/google-distributed-cloud-at-next26"><span style="font-weight: 400;">in preview</span></a> <span style="font-weight: 400;">on Google Distributed Cloud, so customers can bring Google’s frontier models wherever their most sensitive data resides. </span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/data-center/solutions/confidential-computing/"><span style="font-weight: 400;">NVIDIA Confidential Computing</span></a><span style="font-weight: 400;"> with the NVIDIA Blackwell platform enables Gemini models to run in a protected environment where prompts and fine‑tuning data stay encrypted and can’t be seen or altered by unauthorized parties, including the infrastructure operators. </span></p>
<p><span style="font-weight: 400;">In the public cloud, the </span><a target="_blank" href="https://cloud.google.com/blog/products/identity-security/next26-redefining-security-for-the-ai-era-with-google-cloud-and-wiz"><span style="font-weight: 400;">preview</span></a><span style="font-weight: 400;"> of Confidential G4 VMs with NVIDIA RTX PRO 6000 Blackwell GPUs brings these protections to multi‑tenant environments — helping safeguard prompts, AI models and data so customers in regulated industries can access the power of AI without compromising on security or performance. </span></p>
<p><span style="font-weight: 400;">This is the first confidential computing offering of NVIDIA Blackwell GPUs in the cloud, giving Google Cloud customers a new foundation for secure, high‑performance AI.</span></p>
<h2><b>Open Models and APIs for Agentic AI</b></h2>
<p><span style="font-weight: 400;">The NVIDIA platform on Google Cloud is optimized to run every kind of model — from Google’s frontier Gemini and </span><a target="_blank" href="https://developer.nvidia.com/blog/bringing-ai-closer-to-the-edge-and-on-device-with-gemma-4/"><span style="font-weight: 400;">Gemma</span></a><span style="font-weight: 400;"> families to NVIDIA Nemotron open models and the broader open weight ecosystem — equipping developers to build agentic AI systems that reason, plan and act. </span></p>
<p><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-debuts-nemotron-3-family-of-open-models"><span style="font-weight: 400;">NVIDIA Nemotron 3</span></a><span style="font-weight: 400;"> Super is available on Gemini Enterprise Agent Platform, giving developers a direct path to discovering, customizing and deploying NVIDIA‑optimized reasoning and multimodal models for agentic workflows. </span></p>
<p><span style="font-weight: 400;">Google Cloud and NVIDIA are also making it easier to train and customize open models at scale. Managed Training Clusters on Gemini Enterprise Agent Platform introduced a new managed reinforcement learning (RL) API built with </span><a target="_blank" href="https://github.com/NVIDIA-NeMo/RL"><span style="font-weight: 400;">NVIDIA NeMo RL</span></a><span style="font-weight: 400;"> for accelerating RL training at scale while automating cluster sizing, failure recovery and job execution, so teams can focus on agent behavior and model quality instead of infrastructure management.</span></p>
<p><span style="font-weight: 400;">Cybersecurity leader </span><a target="_blank" href="https://www.googlecloudevents.com/next-vegas/session-library?session_id=4033904&amp;name=accelerate-domain-ai-agents-on-google-cloud-vertex-ai-with-nvidia-nemo-and-nvidia-nemotron"><span style="font-weight: 400;">CrowdStrike</span></a> <span style="font-weight: 400;">uses </span><a target="_blank" href="https://github.com/NVIDIA-NeMo"><span style="font-weight: 400;">NVIDIA NeMo</span></a><span style="font-weight: 400;"> open libraries such as NeMo Data Designer, NeMo Automodel and NeMo Megatron Bridge to generate synthetic data and fine-tune Nemotron and other open large language models for domain-specific cybersecurity. Running on Managed Training Clusters on Gemini Enterprise Agent Platform with NVIDIA Blackwell GPUs, these capabilities accelerate threat detection, investigation and response.</span></p>
<h2><b>Building the Future of Industrial and Physical AI</b></h2>
<p><span style="font-weight: 400;">Building industrial and physical AI at scale demands powerful hardware and a combination of open models, libraries and frameworks to develop these complex end-to-end workflows. </span></p>
<p><span style="font-weight: 400;">NVIDIA AI infrastructure, open models and physical AI libraries available on Google Cloud are mainstreaming </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-and-global-industrial-software-giants-bring-design-engineering-and-manufacturing-into-the-ai-era"><span style="font-weight: 400;">industrial</span></a><span style="font-weight: 400;"> and physical AI applications, enabling customers to simulate, optimize and automate real-world workflows.</span></p>
<p><span style="font-weight: 400;">Solutions from leading industrial software providers, including </span><span style="font-weight: 400;">Cadence</span><span style="font-weight: 400;"> and </span><span style="font-weight: 400;">Siemens Digital Industries Software</span><span style="font-weight: 400;">, are now available on Google Cloud, accelerated on NVIDIA AI infrastructure. These applications are powering the next-generation design, engineering and manufacturing of everything from chips to autonomous vehicles, robotics, aerospace platforms, heavy machinery and large-scale production systems.   </span></p>
<p><span style="font-weight: 400;">With </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse</span></a><span style="font-weight: 400;"> libraries and the open source </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> robotics simulation framework available on </span><a target="_blank" href="https://console.cloud.google.com/marketplace/browse?_gl=1*6u2y51*_up*MQ..*_gs*MQ..&amp;gclid=Cj0KCQjwqPLOBhCiARIsAKRMPZoXw6AmBoY0n3Ogp2aZPgYaBvypBXygQfMyiyPjN_LCtFrOl99PP_QaApBwEALw_wcB&amp;gclsrc=aw.ds&amp;pli=1&amp;rapt=AEjHL4PbiwiHdb9qi9WEmm8smGFoRJsO0_no2rD3cR5YybMNyg8tSG97i5ihTlfBK2_YjiXIV5NFPoRfgpK8Ej-_smBjtUVSYsNqxcxd6YuIzYOhyVdP9bA&amp;q=nvidia%20omniverse"><span style="font-weight: 400;">Google Cloud Marketplace</span></a><span style="font-weight: 400;">, developers can build physically accurate digital twins and develop custom robotics simulation pipelines to train, simulate and validate robots before real-world deployment.</span></p>
<p><span style="font-weight: 400;">NVIDIA NIM microservices for models like </span><a target="_blank" href="https://huggingface.co/blog/nvidia/nvidia-cosmos-reason-2-brings-advanced-reasoning"><span style="font-weight: 400;">NVIDIA Cosmos Reason 2</span></a><span style="font-weight: 400;"> can be deployed to Google Vertex AI and Google Kubernetes Engine. This enables robots and vision AI agents to see, reason and act in the physical world like humans, powering use cases such as automated data curation and annotation, advanced robot planning and reasoning, and intelligent video analytics agents for real-time insights and decision-making.</span></p>
<p><span style="font-weight: 400;">Together, these technologies help developers seamlessly move from computer-aided design to living industrial digital twins and AI‑driven robots, accelerating processes from design sign‑off to factory optimization on the NVIDIA platform running on Google Cloud.</span></p>
<h2><b>Proven Impact: From Startups to Global Enterprises</b></h2>
<p><span style="font-weight: 400;">Global enterprises, AI labs and high‑growth startups are using NVIDIA and Google Cloud’s co-engineered platform to move from prototyping to production faster, including </span><span style="font-weight: 400;">Snap</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Schrödinger</span><span style="font-weight: 400;"> and </span><span style="font-weight: 400;">Salesforce</span><span style="font-weight: 400;">. </span><a href="https://blogs.nvidia.com/blog/snap-accelerated-data-processing/"><span style="font-weight: 400;">Snap</span></a><span style="font-weight: 400;"> is cutting the cost of large‑scale A/B testing by shifting data pipelines to GPU‑accelerated Spark on Google Cloud. </span><a target="_blank" href="https://www.youtube.com/watch?v=607ZZ0Zp5jo"><span style="font-weight: 400;">Schrödinger</span></a><span style="font-weight: 400;"> is shrinking weekslong drug discovery simulations into just hours with NVIDIA accelerated computing on Google Cloud.</span></p>
<p><span style="font-weight: 400;">Startups are orchestrating the next wave of AI innovation — building new agents and AI‑native applications using NVIDIA accelerated computing on Google Cloud. </span></p>
<p><span style="font-weight: 400;">As part of a </span><a target="_blank" href="https://cloud.google.com/blog/topics/startups/startups-are-building-the-agentic-future-with-google-cloud"><span style="font-weight: 400;">broader ecosystem</span></a><span style="font-weight: 400;"> highlighted through </span><a target="_blank" href="https://www.nvidia.com/en-us/startups/?ncid=ref-kc-319196-vt18&amp;sfdcid=Google"><span style="font-weight: 400;">NVIDIA Inception</span></a><span style="font-weight: 400;"> and Google for Startups, </span><a target="_blank" href="https://www.coderabbit.ai/blog/faster-code-reviews-with-nemotron-3-super"><span style="font-weight: 400;">CodeRabbit</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://factory.ai/"><span style="font-weight: 400;">Factory</span></a> <span style="font-weight: 400;">are using NVIDIA Nemotron‑based models on Google Cloud to power code review and autonomous software development agents, while </span><span style="font-weight: 400;">Aible</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Mantis AI</span><span style="font-weight: 400;">,</span> <a target="_blank" href="https://www.photoroom.com/inside-photoroom"><span style="font-weight: 400;">Photoroom</span></a><span style="font-weight: 400;"> and </span><span style="font-weight: 400;">Baseten</span><span style="font-weight: 400;"> are building enterprise data, video intelligence, generative imagery and managed inference solutions on the full‑stack NVIDIA platform on Google Cloud.</span></p>
<p><span style="font-weight: 400;">More than 90,000 developers have become a part of the joint NVIDIA and Google Cloud </span><a target="_blank" href="https://developers.google.com/community/nvidia?utm_source=linkedin&amp;utm_medium=unpaidsoc&amp;utm_campaign=fy25q2-googlecloud-blog-ai-in_feed-no-brand-global&amp;utm_content=-&amp;utm_term=-&amp;linkId=14552521"><span style="font-weight: 400;">developer community</span></a><span style="font-weight: 400;"> in just over a year, tapping this platform to build and scale new AI applications.</span></p>
<p><span style="font-weight: 400;">In addition, NVIDIA has been honored at Next as </span><a target="_blank" href="https://cloud.google.com/blog/topics/partners/2026-partners-of-the-year-winners-next26"><span style="font-weight: 400;">Google Cloud Partner of the Year</span></a><span style="font-weight: 400;"> in two categories — AI Global Technology Partner and Infra Modernization Compute — in recognition of deep technical expertise and go-to-market alignment.</span></p>
<p><span style="font-weight: 400;">Together, NVIDIA and Google Cloud are giving customers a cloud‑scale platform to turn experimental agents and simulations into production systems that review code, secure fleets, enable new AI applications and optimize factories in the real world.</span></p>
<p><i><span style="font-weight: 400;">Learn more about the companies’ collaboration by attending </span></i><a target="_blank" href="https://www.nvidia.com/en-us/events/google-cloud-next/"><i><span style="font-weight: 400;">NVIDIA sessions, demos and workshops</span></i></a><i><span style="font-weight: 400;"> at Google Cloud Next.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/google-cloud-nvidia.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/google-cloud-nvidia-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA and Google Cloud Collaborate to Advance Agentic and Physical AI]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Autonomous AI at Scale: Adobe Agents Unlock Breakthrough Creative Intelligence With NVIDIA and WPP</title>
		<link>https://blogs.nvidia.com/blog/adobe-ai-agents-nvidia-wpp/</link>
		
		<dc:creator><![CDATA[Richard Kerris]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 13:00:50 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Media and Entertainment]]></category>
		<category><![CDATA[Nemotron]]></category>
		<category><![CDATA[Omniverse]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92424</guid>

					<description><![CDATA[AI agents are transforming how work gets done across all industries, accelerating everything from content creation to decision-making. NVIDIA’s expanded strategic collaborations with Adobe and WPP are bringing agentic AI to the center of enterprise marketing operations across creative production and customer experience orchestration.  As demand for personalized customer experiences surges, brands require intelligent systems [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">AI agents are transforming how work gets done across all industries, accelerating everything from content creation to decision-making.</span></p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=zwKl-Hf3xGU"><span style="font-weight: 400;">NVIDIA’s expanded strategic collaborations with Adobe and WPP</span></a><span style="font-weight: 400;"> are bringing agentic AI to the center of enterprise marketing operations across creative production and customer experience orchestration. </span></p>
<p><span style="font-weight: 400;">As demand for personalized customer experiences surges, brands require intelligent systems that can plan, create, produce and activate content continuously — without compromising control, governance or brand integrity.</span></p>
<p><span style="font-weight: 400;">Consider a global retailer delivering the right offer, image, copy and price across millions of product, audience and channel combinations — updated in minutes instead of months. </span></p>
<p><span style="font-weight: 400;">For marketing and creative teams, that means moving from one-size-fits-all campaigns to tailored experiences that are always on, always relevant and on brand.</span></p>
<p><span style="font-weight: 400;">The expanded collaborations bring together three complementary strengths: Adobe’s creative and customer experience platforms and the new Adobe CX Enterprise Coworker, WPP’s global media and marketing expertise, and NVIDIA’s accelerated computing and software stack, including </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400;">NVIDIA Nemotron</span></a><span style="font-weight: 400;"> open models, </span><a target="_blank" href="https://developer.nvidia.com/nemo-agent-toolkit"><span style="font-weight: 400;">NVIDIA Agent Toolkit</span></a><span style="font-weight: 400;"> and the </span><a target="_blank" href="https://build.nvidia.com/openshell"><span style="font-weight: 400;">NVIDIA OpenShell</span></a><span style="font-weight: 400;"> secure runtime for building and running agentic AI systems.</span></p>
<p><span style="font-weight: 400;">As these agents begin orchestrating multistep workflows, tapping sensitive data and triggering actions across marketing stacks, enterprises need a way to enforce clear rules of engagement so every operation remains compliant, on brand and within defined risk boundaries.</span></p>
<p><span style="font-weight: 400;">Powered by the NVIDIA OpenShell runtime, every agent operates within a secure, isolated environment that delivers enterprise-grade control, consistency and auditability across the entire marketing lifecycle. Verifiable policy management answers the question, “What can the agent do?” and not just, “What policy is in place?” </span></p>
<p><span style="font-weight: 400;">In governed environments, enterprises can also keep key workflows and intelligence services inside their trust boundary, including securely invoking Adobe CX Intelligence as part of customer experience agents.</span></p>
<p><span style="font-weight: 400;">A live demo of CX Enterprise Coworker — powered by NVIDIA Agent Toolkit, including the OpenShell runtime and Nemotron models — will be featured during Adobe Summit’s day-two keynote taking place Tuesday, April 21, at 9 a.m. PT</span><span style="font-weight: 400;">. </span></p>
<p><span style="font-weight: 400;">The collaboration enables:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>End-to-end agentic workflows:</b><span style="font-weight: 400;"> Adobe is developing creative and marketing agents that can generate, adapt and version on-brand assets. Adobe’s CX Enterprise Coworker orchestrates downstream customer experience workflows from personalization to activation, closing the loop between content creation and customer engagement.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Controlled execution with NVIDIA OpenShell:</b><span style="font-weight: 400;"> Agents run in a policy-based, containerized sandbox designed to keep execution governed, observable and auditable, helping enterprises safely deploy long-running agentic workflows on premises or in the cloud.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Commercially safe content at scale:</b><span style="font-weight: 400;"> Adobe Firefly Foundry, accelerated by NVIDIA AI infrastructure, can help organizations deeply tune custom models on their proprietary assets, enabling agents to generate commercially safe content at scale and aligned to brand identity.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>A 3D digital twins solution for scalable marketing production:</b><span style="font-weight: 400;"> Adobe’s cloud-native 3D digital twin solution is now generally available, built on </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse</span></a><span style="font-weight: 400;"> libraries and OpenUSD. 3D digital twins serve as persistent product identities that agents use to automate and scale high-fidelity content creation across formats, markets and configurations.</span></li>
</ul>
<h2><b>Creative Intelligence Meets Performance Intelligence With Policy-Governed Agents</b></h2>
<p><span style="font-weight: 400;">Governed environments such as the ones enabled by this collaboration act as a set of “guardrails” that keep AI operations observable and auditable, preventing the system from acting outside of a company’s specific data boundaries or brand rules.</span></p>
<p><span style="font-weight: 400;">By combining Adobe’s creative platforms, WPP’s media and marketing expertise and NVIDIA’s secure infrastructure with CX Enterprise Coworker, brands no longer have to choose between speed and safety. Autonomous agents can now generate, adapt and activate content at scale while operating within governed, policy-driven environments.</span></p>
<p><span style="font-weight: 400;">The result is a new foundation for agentic marketing — where creative intelligence, performance and trust are built in from the start and delivered at global scale.</span></p>
<p><a target="_blank" href="https://business.adobe.com/summit/2026/sessions/day-one-keynote-gs1.html"><i><span style="font-weight: 400;">Watch</span></i></a><i><span style="font-weight: 400;"> NVIDIA founder and CEO Jensen Huang’s Adobe Summit fireside chat with Adobe CEO Shantanu Narayen.</span></i></p>
<p>&nbsp;</p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/adobe-ai-agents-wpp-nvidia.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/adobe-ai-agents-wpp-nvidia-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Autonomous AI at Scale: Adobe Agents Unlock Breakthrough Creative Intelligence With NVIDIA and WPP]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA and Partners Showcase the Future of AI-Driven Manufacturing at Hannover Messe 2026</title>
		<link>https://blogs.nvidia.com/blog/ai-manufacturing-hannover-messe/</link>
		
		<dc:creator><![CDATA[James McKenna]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 08:30:55 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Cosmos]]></category>
		<category><![CDATA[Digital Twins]]></category>
		<category><![CDATA[Industrial and Manufacturing]]></category>
		<category><![CDATA[Metropolis]]></category>
		<category><![CDATA[NVIDIA in Europe]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Sovereign AI]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92414</guid>

					<description><![CDATA[Manufacturing is at an inflection point. Across every major industrial economy, the pressure to do more with less — due to faster design cycles, leaner operations and strain on skilled labor pools — is accelerating the shift to AI-driven production.  The question is no longer whether to adopt AI, but how fast and at what [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400">Manufacturing is at an inflection point. Across every major industrial economy, the pressure to do more with less — due to faster design cycles, leaner operations and strain on skilled labor pools — is accelerating the shift to AI-driven production. </span></p>
<p><span style="font-weight: 400">The question is no longer whether to adopt AI, but how fast and at what scale. </span></p>
<p><span style="font-weight: 400">At </span><a target="_blank" href="https://www.nvidia.com/en-us/events/hannover-messe/"><span style="font-weight: 400">Hannover Messe 2026</span></a><span style="font-weight: 400">, running April 20-24 in Hannover, Germany, NVIDIA and its partners are demonstrating AI-driven manufacturing in action. Attendees will experience how advancements in accelerated computing, AI physics, agents and robotics are powering industrial innovation — from agentic design and engineering to real-time simulation, vision AI agents and humanoid robots operating in factories. </span></p>
<p><span style="font-weight: 400">The factory of the future isn’t just a concept. It’s being built now.</span></p>
<h2><b>AI Infrastructure: Powering Europe’s Next Industrial Era</b></h2>
<p><span style="font-weight: 400">Running AI at scale across the factories and supply chains that manufacturing depends on requires the right underlying infrastructure. As AI becomes foundational to how products, processes and facilities are designed, built and optimized, manufacturers need a unified, sovereign foundation that’s secure, scalable and built for industrial workloads.</span></p>
<p><span style="font-weight: 400">The </span><a href="https://blogs.nvidia.com/blog/germany-industrial-ai-cloud-launch/"><span style="font-weight: 400">Industrial AI Cloud</span></a><span style="font-weight: 400">, one of Europe’s largest AI factories built in Germany by </span><span style="font-weight: 400">Deutsche Telekom</span><span style="font-weight: 400"> on NVIDIA AI infrastructure, is a blueprint for the future. It provides a secure, sovereign foundation for accelerating AI and robotics across Europe’s industries. </span></p>
<p><span style="font-weight: 400">At the show, industry leaders, including </span><span style="font-weight: 400">Agile Robots</span><span style="font-weight: 400">, </span><span style="font-weight: 400">SAP</span><span style="font-weight: 400">, </span><span style="font-weight: 400">Siemens</span><span style="font-weight: 400">, </span><span style="font-weight: 400">PhysicsX</span><span style="font-weight: 400"> and </span><span style="font-weight: 400">Wandelbots</span><span style="font-weight: 400">, will share how they are using this sovereign AI platform to run AI-accelerated workloads ranging from AI physics-driven, real-time simulation to factory-scale digital twins and software-defined robotics. </span><span style="font-weight: 400">EDAG</span><span style="font-weight: 400">, a leading independent engineering service provider, also announced it will be running its industrial metaverse platform, metys, on the Industrial AI Cloud — bringing sovereign AI infrastructure to automotive and industrial engineering at scale.</span></p>
<p><span style="font-weight: 400">To support the increasing demand for AI infrastructure, </span><span style="font-weight: 400">Dell Technologies</span><span style="font-weight: 400">, </span><span style="font-weight: 400">IBM</span><span style="font-weight: 400">, </span><span style="font-weight: 400">Lenovo</span><span style="font-weight: 400"> and </span><span style="font-weight: 400">PNY</span><span style="font-weight: 400"> are also showcasing NVIDIA-accelerated systems, from the edge to data centers, enabling manufacturers to run faster simulations and develop and deploy computer vision, AI agents and robotics in production at scale.</span></p>
<h2><b>AI-Driven Engineering</b></h2>
<p><span style="font-weight: 400">As industrial systems grow more complex, the software that engineers rely on to design, simulate and test them is being transformed with AI physics and agentic AI to keep pace. At Hannover Messe, NVIDIA partners are showcasing how AI-accelerated design and simulation are unlocking new possibilities.</span></p>
<p><a target="_blank" href="https://www.cadence.com/en_US/home/company/newsroom/press-releases/pr/2026/cadence-and-nvidia-expand-partnership-to-reinvent-engineering.html"><span style="font-weight: 400">Cadence</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://blog.3ds.com/topics/company-news/industrial-ai-with-virtual-twins"><span style="font-weight: 400">Dassault Systèmes</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://news.siemens.com/en-us/siemens-fuse-eda-ai-agent/"><span style="font-weight: 400">Siemens</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://news.synopsys.com/2026-03-16-Synopsys-Showcases-NVIDIA-Partnership-Impact-and-Ecosystem-Innovation-at-GTC-2026"><span style="font-weight: 400">Synopsys</span></a><span style="font-weight: 400"> are integrating </span><a target="_blank" href="https://www.nvidia.com/en-us/technologies/cuda-x/"><span style="font-weight: 400">NVIDIA CUDA-X</span></a><span style="font-weight: 400">, AI physics and </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400">NVIDIA Omniverse</span></a><span style="font-weight: 400"> libraries, as well as </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400">NVIDIA Nemotron</span></a><span style="font-weight: 400"> open models, across their software — enabling real-time, physics-grounded simulation, AI-powered design exploration and agentic workflows that empower engineers.</span></p>
<h2><b>Real-Time Factory Simulation</b></h2>
<p><span style="font-weight: 400">Factory-scale digital twins are critical for unlocking process simulation, real-time operations, and the testing and orchestration of robot fleets. At Hannover Messe, partners across manufacturing, energy and automotive are showing how digital twins, built on Omniverse libraries and </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/openusd/"><span style="font-weight: 400">OpenUSD</span></a><span style="font-weight: 400">, enable their customers to design, stress-test and continuously optimize their operations.</span></p>
<p><a target="_blank" href="https://new.abb.com/news/detail/135121/abb-genix-advances-industrial-digital-twins-through-immersive-3d-visualization-with-nvidia-omniverse-and-microsoft-azure"><span style="font-weight: 400">ABB</span></a><span style="font-weight: 400"> will showcase how the integration of NVIDIA Omniverse libraries and </span><span style="font-weight: 400">Microsoft Azure cloud</span><span style="font-weight: 400"> services into its ABB Genix Industrial IoT and AI Suite enables operations teams to understand asset performance in full context and engage AI agents to accelerate root-cause analysis.</span></p>
<p><span style="font-weight: 400">Dassault Systèmes</span> <span style="font-weight: 400">will demonstrate how AI-driven factories of the future are powered by virtual twin experiences. Attendees will see how these virtual twins harness NVIDIA physical AI libraries to enable autonomous, software-defined production and smarter, agile manufacturing systems.</span></p>
<p><a target="_blank" href="https://kongsbergdigital.com/news/kongsberg-digital-advances-with-nvidia"><span style="font-weight: 400">Kongsberg Digital</span></a><span style="font-weight: 400"> will highlight how integrating NVIDIA Omniverse libraries into its Kognitwin platform delivers spatial intelligence across critical energy infrastructure. The combination of digital twin models, live operational data and AI agents enables its customers to analyze complex assets, test scenarios virtually and optimize performance before changes reach the physical world.</span></p>
<p><a target="_blank" href="https://www.microsoft.com/en-us/microsoft-cloud/blog/manufacturing/2026/04/16/industrial-intelligence-unlocked-microsoft-at-hannover-messe-2026/"><span style="font-weight: 400">Microsoft</span></a><span style="font-weight: 400"> is demonstrating how NVIDIA Omniverse libraries integrated with </span><span style="font-weight: 400">Microsoft</span><span style="font-weight: 400"> Fabric Real-Time Intelligence and IQ enable physically accurate, real-time simulations for organizations to design, simulate and optimize physical systems, while the </span><a target="_blank" href="https://github.com/microsoft/physical-ai-toolchain"><span style="font-weight: 400">Azure Physical AI Toolchain</span></a><span style="font-weight: 400"> — built on the </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-open-physical-ai-data-factory-blueprint-to-accelerate-robotics-vision-ai-agents-and-autonomous-vehicle-development"><span style="font-weight: 400">NVIDIA Physical AI Data Factory Blueprint</span></a><span style="font-weight: 400"> — accelerates the deployment of physical AI and autonomous robots into production.</span></p>
<p><a target="_blank" href="https://news.siemens.com/en-us/digital-twin-composer-ces-2026/"><span style="font-weight: 400">Siemens</span></a><span style="font-weight: 400"> will highlight how integrating NVIDIA Omniverse libraries into its Digital Twin Composer solution turns multi-domain engineering and operational data into a comprehensive, simulation-ready digital twin — helping its customers deliver throughput gains and identify production issues before physical changes.</span></p>
<p><span style="font-weight: 400">By combining the Wandelbots NOVA Platform with Omniverse libraries such as </span><a target="_blank" href="https://docs.nvidia.com/nurec/"><span style="font-weight: 400">NVIDIA Omniverse NuRec</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://www.wandelbots.com/news/platform-based-automation-solution-for-industrial-robotics-by-gessmann"><span style="font-weight: 400">Wandelbots</span></a><span style="font-weight: 400"> highlights a powerful pathway to digitalize real-world facilities into physically accurate simulations. For solutions like </span><span style="font-weight: 400">Gessmann’s GESSbot </span><span style="font-weight: 400">robots, this opens up future opportunities to accelerate commissioning and reduce deployment risks across complex industrial sites.</span></p>
<h2><b>Bringing AI Agents to the Factory Floor</b></h2>
<p><span style="font-weight: 400;">Traditional AI solves problems under a rigid set of conditions. AI agents bring a new level of proactive, adaptive intelligence, adding context to what they see and analyzing what’s happening before taking action. </span></p>
<p><span style="font-weight: 400">At the show, attendees will see how vision AI agents built on </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/intelligent-video-analytics-platform/"><span style="font-weight: 400">NVIDIA Metropolis</span></a><span style="font-weight: 400"> libraries along with Nemotron and NVIDIA Cosmos open models are transforming industrial operations, combining multiple data streams with existing camera infrastructure to reach new levels of quality control, operational efficiency and worker safety.</span></p>
<p><a target="_blank" href="https://www.invisible.ai/blog/invisible-ai-launches-vision-execution-system-with-nvidia/"><span style="font-weight: 400;">Invisible AI</span></a><span style="font-weight: 400;"> is launching its Vision Execution System, a vision AI system that uses agents to capture, structure and analyze every production cycle on the factory floor in real time. Built with the</span><a target="_blank" href="https://build.nvidia.com/nvidia/video-search-and-summarization"> <span style="font-weight: 400;">NVIDIA Metropolis VSS Blueprint</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://huggingface.co/nvidia/Cosmos-Reason2-8B"><span style="font-weight: 400;">NVIDIA Cosmos Reason 2</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400;">Nemotron</span></a><span style="font-weight: 400;"> models, these autonomous AI agents surface actionable insights directly to operators before issues compound. This class of production intelligence is already driving measurable gains at some of the world’s largest automotive manufacturers, such as </span><span style="font-weight: 400;">Toyota</span><span style="font-weight: 400;">.</span></p>
<p><iframe loading="lazy" title="How AI is Transforming Manufacturing End-to-End" width="1200" height="675" src="https://www.youtube.com/embed/D8wSXABcW-A?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><a target="_blank" href="https://tulip.co/blog/terex-factory-playback-gtc"><span style="font-weight: 400;">Tulip Interfaces</span></a><span style="font-weight: 400;"> will showcase Factory Playback, which uses the VSS blueprint and Cosmos Reason 2 to synchronize machine telemetry, operator workflows, quality events and video into a searchable, contextualized timeline of operations. </span><span style="font-weight: 400;">Terex</span><span style="font-weight: 400;">, a global industrial equipment manufacturer with over 40 plants, uses the platform to gain valuable insights and is expected to see an estimated 3% increase in yield and a 10% reduction in rework.</span></p>
<p><span style="font-weight: 400;">Fogsphere</span><span style="font-weight: 400;"> extends vision AI into some of the most demanding manufacturing and industrial environments. Its </span><span style="font-weight: 400;">Vision Agent</span><span style="font-weight: 400;"> platform — now supporting ARM-based edge deployment and training workflows built on NVIDIA Cosmos Reason 2 and the Metropolis VSS Blueprint — enables its customers to build and fine-tune visual AI agents. </span><span style="font-weight: 400;">Saipem</span><span style="font-weight: 400;">, an </span><span style="font-weight: 400;">engineering services company in the energy and industrial ecosystem, is using the platform to build agents that can </span><span style="font-weight: 400;">detect and respond in real time to high-risk safety and environmental events. </span></p>
<h2><b>Machines That Can Think</b></h2>
<p><span style="font-weight: 400">AI reasoning is breaking industrial robots free from single-task constraints and time-consuming reprogramming, giving them the ability to navigate unstructured environments, learn new tasks and act autonomously. At Hannover Messe, NVIDIA partners are demonstrating robots completing real production tasks and physical AI frameworks that put autonomous automation within reach of manufacturers of every size.</span></p>
<p><span style="font-weight: 400">At a </span><a target="_blank" href="https://sie.ag/4LyPvV"><span style="font-weight: 400">Siemens</span></a><span style="font-weight: 400"> blueprint autonomous electronics factory in Erlangen, Germany, </span><a target="_blank" href="https://thehumanoid.ai/siemens-and-humanoid-bring-physical-ai-to-the-factory-floor-deploying-humanoids-in-industrial-operations-with-nvidia/"><span style="font-weight: 400">Humanoid’s HMND 01</span></a><span style="font-weight: 400"> wheeled humanoid — running the </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-thor/"><span style="font-weight: 400">NVIDIA Jetson Thor</span></a><span style="font-weight: 400"> edge AI module for on-robot compute and developed using </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400">Isaac Sim</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://developer.nvidia.com/isaac/lab"><span style="font-weight: 400">Isaac Lab</span></a><span style="font-weight: 400"> open frameworks for simulation and reinforcement learning — has completed autonomous logistics operations in a first proof of concept within the production environment. </span><span style="font-weight: 400">Humanoid’s</span><span style="font-weight: 400"> simulation-first development compressed what typically takes up to two years of hardware development down to just seven months.</span></p>
<p><a target="_blank" href="https://schunk.com/de/en/news/physical-ai-von-der-simulation-zum-shopfloor/38528"><span style="font-weight: 400">SCHUNK’s GROW</span></a><span style="font-weight: 400"> automation cell brings physical AI into production in a standardized, deployable form. NVIDIA Omniverse libraries and Isaac simulation frameworks enable robot behavior to be simulated, trained and validated before the cell goes live. </span><span style="font-weight: 400">Wandelbots’ NOVA platform</span><span style="font-weight: 400"><a target="_blank" href="https://www.wandelbots.com/news/wandelbots-schunk-and-ey-join-forces-to-scale-physical-ai"> connects simulation to the shop floor</a> for continuous refinement, while </span><span style="font-weight: 400">EY</span><span style="font-weight: 400"> designs the operating model to scale it across Europe’s small- and medium-sized enterprises.</span></p>
<p><span style="font-weight: 400">Using NVIDIA’s physical AI stack, including the </span><a target="_blank" href="http://nvidianews.nvidia.com/news/nvidia-announces-open-physical-ai-data-factory-blueprint-to-accelerate-robotics-vision-ai-agents-and-autonomous-vehicle-development"><span style="font-weight: 400">Physical AI Data Factory Blueprint</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/edge-computing/products/igx/"><span style="font-weight: 400">NVIDIA IGX Thor</span></a><span style="font-weight: 400"> for industrial-grade edge compute with functional safety, </span><span style="font-weight: 400">Hexagon Robotics</span><span style="font-weight: 400"> is accelerating robot training, validation and deployment. The results are already taking shape, with </span><span style="font-weight: 400">AEON</span><span style="font-weight: 400"> set to perform assembly operations at a </span><span style="font-weight: 400">BMW Plant</span><span style="font-weight: 400"> in Leipzig — marking one of the first humanoid deployments in a German production environment.</span></p>
<p><span style="font-weight: 400">QNX</span><span style="font-weight: 400"> has expanded its collaboration with NVIDIA to power safety‑critical edge AI systems for robotics, medical and industrial applications, with QNX OS for Safety 8.0 now integrated on NVIDIA IGX Thor and the NVIDIA Halos safety stack</span><span style="font-weight: 400">.</span></p>
<p><i><span style="font-weight: 400">Explore </span></i><a target="_blank" href="https://www.nvidia.com/en-us/industries/industrial-sector/"><i><span style="font-weight: 400">NVIDIA AI technologies for industrial and manufacturing</span></i></a><i><span style="font-weight: 400"> by joining </span></i><a target="_blank" href="https://www.nvidia.com/en-us/events/hannover-messe/"><i><span style="font-weight: 400">NVIDIA at Hannover Messe</span></i></a><i><span style="font-weight: 400">.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/hmi-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/hmi-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA and Partners Showcase the Future of AI-Driven Manufacturing at Hannover Messe 2026]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>No Need for Space Gear — Capcom’s ‘PRAGMATA’ Joins GeForce NOW on Launch Day</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-pragmata/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 13:00:11 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92373</guid>

					<description><![CDATA[Head straight for orbit with GeForce NOW — no space helmet required.  PRAGMATA, Capcom’s long-awaited sci-fi action adventure, touches down on GeForce NOW the same day it launches worldwide. The futuristic journey through a cold lunar station in the near future can be streamed instantly from the cloud to almost any device, no console or [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400">Head straight for orbit with </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/"><span style="font-weight: 400">GeForce NOW</span></a><span style="font-weight: 400"> — no space helmet required. </span></p>
<p><i><span style="font-weight: 400">PRAGMATA</span></i><span style="font-weight: 400">, Capcom’s long-awaited sci-fi action adventure, touches down on GeForce NOW the same day it launches worldwide. The futuristic journey through a cold lunar station in the near future can be streamed instantly from the cloud to almost any device, no console or heavy hardware needed.</span></p>
<p><span style="font-weight: 400;">That’s only the beginning. Five new titles join the cloud this week, expanding April’s gaming galaxy with fresh adventures and endless possibilities.</span></p>
<p><span style="font-weight: 400;">Plus, the GeForce NOW Ultimate membership </span><a target="_blank" href="https://x.com/NVIDIAGFN/status/2043894514570379303"><span style="font-weight: 400;">comes to gamers in India</span></a><span style="font-weight: 400;"> for the first time, with the service now available in beta and operated by NVIDIA.</span></p>
<p><span style="font-weight: 400">Time to see what’s landing on GeForce NOW.</span></p>
<h2><b>A Mission Gone Wrong</b></h2>
<p><i><span style="font-weight: 400">PRAGMATA </span></i><span style="font-weight: 400">is Capcom’s newest sci-fi action adventure that blends heart, high-tech and a hauntingly quiet world set in the near future. Step into the boots of Hugh Williams, an investigator navigating a lunar research station gone silent and Diana, a young android. Armed with an arsenal of weapons and the ability to hack, every corridor and console becomes part of a cinematic experience filled with tense exploration and fast-paced action.</span></p>
<p><span style="font-weight: 400">The story unfolds amid the cold vacuum of the moon after a massive quake hits the station researching Lunafilament — a material said to be able to create anything given enough data. Awake, injured and disoriented, Hugh crosses paths with Diana, the mysterious android girl known as a Pragmata. Now, they must work together as they face the rogue station on their way back to Earth.</span></p>
<p><i><span style="font-weight: 400">PRAGMATA</span></i><span style="font-weight: 400"> shines in stunning clarity with ray-traced lighting and </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/dlss/"><span style="font-weight: 400">NVIDIA DLSS 4</span></a><span style="font-weight: 400"> technology boosting frame rates and image quality. Stream it on launch day at full fidelity, even without the latest hardware — no need to wait on a large install or worry about hardware specs. Hugh and Diana’s lunar mystery is ready when the moment strikes.</span></p>
<h2><b>Let’s Play Today</b></h2>
<figure id="attachment_92378" aria-describedby="caption-attachment-92378" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92378" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Fornite_Save_The_World-1680x840.jpg" alt="fortnite save the world on geforce now" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Fornite_Save_The_World-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Fornite_Save_The_World-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Fornite_Save_The_World-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Fornite_Save_The_World-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Fornite_Save_The_World-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Fornite_Save_The_World.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92378" class="wp-caption-text"><em>Heroes in the cloud don’t have to wait for updates.</em></figcaption></figure>
<p><i><span style="font-weight: 400">Fortnite: Save the World</span></i><span style="font-weight: 400"> is now free and ready to stream instantly on GeForce NOW. The storm hits hard and the heroes hit harder — jump into a co-op adventure that mixes base-building, looting and all-out action against waves of Husks. Craft the ultimate fort, set sneaky traps and team up to protect what’s left of the world — no waiting for updates or patches, just pure fight-and-build mayhem. The storm’s closing in, but thanks to the cloud, the party’s jumping right into the action. “Save the World” </span><a target="_blank" href="https://nvidia.custhelp.com/app/answers/detail/a_id/5235/"><span style="font-weight: 400">isn’t available on mobile devices</span></a><span style="font-weight: 400">, including tablets.</span></p>
<p><span style="font-weight: 400">In addition, members can look for the following:</span></p>
<ul>
<li><i><span style="font-weight: 400">REPLACED </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/1663850/REPLACED/"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/en-us/games/store/replaced/9nv3l234vgxd"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass, April 14, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Windrose </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3041230/Windrose/"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 14, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Cthulhu: The Cosmic Abyss </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2760560/Cthulhu_The_Cosmic_Abyss/"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 16, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">PRAGMATA </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3357650?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 16, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">PRAGMATA SKETCHBOOK &#8211; DEMO</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/4003800?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, GeForce RTX 5080-ready)</span></li>
</ul>
<p><span style="font-weight: 400">What are you planning to play this weekend? Let us know on </span><a target="_blank" href="https://www.twitter.com/nvidiagfn"><span style="font-weight: 400">X</span></a><span style="font-weight: 400"> or in the comments below.</span></p>
<blockquote class="twitter-tweet" data-width="550" data-dnt="true">
<p lang="en" dir="ltr">You’re loading into a game and can invite ONE person to your squad…</p>
<p>Who’s your pick?</p>
<p>&mdash; <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f329.png" alt="🌩" class="wp-smiley" style="height: 1em; max-height: 1em;" /> NVIDIA GeForce NOW (@NVIDIAGFN) <a target="_blank" href="https://twitter.com/NVIDIAGFN/status/2044445644815818811?ref_src=twsrc%5Etfw">April 15, 2026</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-PRAGMATA.jpg" type="image/jpeg" width="2048" height="1024">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-PRAGMATA-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[No Need for Space Gear — Capcom’s ‘PRAGMATA’ Joins GeForce NOW on Launch Day]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Rethinking AI TCO: Why Cost per Token Is the Only Metric That Matters</title>
		<link>https://blogs.nvidia.com/blog/lowest-token-cost-ai-factories/</link>
		
		<dc:creator><![CDATA[Shruti Koparkar]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 15:00:26 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Inference]]></category>
		<category><![CDATA[NVIDIA Blackwell]]></category>
		<category><![CDATA[Think SMART]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92229</guid>

					<description><![CDATA[Traditional data centers only stored, retrieved and processed data. In the generative and agentic AI era, these facilities have evolved into AI token factories. With AI inference becoming their primary workload, their primary output is intelligence manufactured in the form of tokens.  This transformation demands a corresponding shift in how the economics of AI infrastructure, [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">Traditional data centers only stored, retrieved and processed data. In the generative and agentic AI era, these facilities have evolved into AI token factories. With AI inference becoming their primary workload, their primary output is intelligence manufactured in the form of tokens. </span></p>
<p><span style="font-weight: 400;">This transformation demands a corresponding shift in how the economics of AI infrastructure, including total cost of ownership (TCO), is assessed. Enterprises evaluating AI infrastructure still too often focus on peak chip specifications, compute cost or floating point operations per second for every dollar spent, aka FLOPS per dollar. </span></p>
<p><span style="font-weight: 400;">The distinction that matters is this:</span></p>
<ul>
<li><b>Compute cost </b><span style="font-weight: 400;">is what enterprises pay for AI infrastructure, whether rented from cloud providers or owned on premises.</span></li>
<li><b>FLOPS per dollar</b><span style="font-weight: 400;"> is how much raw computing power an enterprise gets for every dollar spent, but raw compute and real-world token output are not the same thing. </span></li>
<li><b>Cost per token</b><span style="font-weight: 400;"> is an enterprise’s all-in cost to produce each delivered token, usually represented as cost per million tokens.</span></li>
</ul>
<p><span style="font-weight: 400;">The first two are merely input metrics. Optimizing for inputs while the business runs on output is a fundamental mismatch. </span></p>
<p><span style="font-weight: 400;">Cost per token determines whether enterprises can profitably scale AI. It’s the one TCO metric that directly accounts for hardware performance, software optimization, ecosystem support and real-world utilization — and NVIDIA delivers the lowest cost per token in the industry. </span></p>
<h2><b>What Are the Factors That Lower Token Cost?</b></h2>
<p><span style="font-weight: 400;">Understanding how to optimize token cost requires looking at the equation for calculating cost per million tokens.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-92324 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-scaled.png" alt="An equation describing how to calculate cost per million tokens. Cost per million tokens = [cost per GPU per hour / (tokens per GPU per second x 60 seconds x 60 minutes) ] x 1 million." width="2048" height="1152" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-scaled.png 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-960x540.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-1680x945.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-1280x720.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-1536x864.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-1290x725.png 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-630x354.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-300x169.png 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-equation-token-5115300-400x225.png 400w" sizes="auto, (max-width: 2048px) 100vw, 2048px" /></p>
<p><span style="font-weight: 400;">In this equation, many enterprises evaluating AI infrastructure focus on the numerator: the cost per GPU per hour. For cloud deployments, this is the hourly rate paid to a cloud provider; for on-premises deployments, it’s the effective hourly cost derived from amortizing owned infrastructure. The real key to reducing token cost, however, lies in the denominator: maximizing the delivered token output.</span></p>
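<p><span style="font-weight: 400;">As a minimal sketch, the equation above can be expressed in a few lines of code. The hourly rate and throughput figures used here are illustrative only, not actual pricing or benchmark results.</span></p>

```python
def cost_per_million_tokens(cost_per_gpu_hour: float, tokens_per_gpu_per_second: float) -> float:
    """Cost per 1M tokens: hourly GPU cost divided by tokens delivered per hour, scaled to 1M."""
    tokens_per_hour = tokens_per_gpu_per_second * 60 * 60
    return (cost_per_gpu_hour / tokens_per_hour) * 1_000_000

# Illustrative figures: a GPU rented at $2.50/hour delivering 5,000 tokens/second.
print(round(cost_per_million_tokens(2.50, 5_000), 2))   # ~$0.14 per million tokens

# The denominator dominates: doubling delivered throughput halves cost per token,
# even when the hourly rate in the numerator is unchanged.
print(round(cost_per_million_tokens(2.50, 10_000), 2))  # ~$0.07 per million tokens
```

<p><span style="font-weight: 400;">Note how the same hourly rate yields very different token economics depending on delivered throughput, which is why the denominator, not the sticker price, drives profitability.</span></p>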
<p><span style="font-weight: 400;">That denominator carries two business implications.</span></p>
<ul>
<li><b>Minimize token cost</b><span style="font-weight: 400;">: As token output increases, the cost equation drives down cost per token, which is what grows the profit margin on every interaction served.</span></li>
<li><b>Maximize revenue</b><span style="font-weight: 400;">: More tokens delivered per second also translates to more tokens per megawatt, which means more intelligence to use in AI-powered products and services, generating more revenue from the same infrastructure investment.</span></li>
</ul>
<p><span style="font-weight: 400;">So focusing only on the numerator means missing what drives the denominator. Think of it as an “inference iceberg”: The numerator sits above the surface, visible and easy to compare. The denominator is everything beneath the surface, which represents key factors that determine real-world token output. Accurately evaluating AI infrastructure starts with asking what lies beneath. </span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-92321 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-scaled.jpg" alt="Image describing the &quot;inference iceberg.&quot; The top of the iceberg is characterized by peak chip specifications such as FLOPS and high-bandwidth memory (cost per GPU per hour, FLOPS per dollar). The bottom of the iceberg is characterized by extreme codesign across compute, networking, software, memory, storage, software and ecosystem (cost per token, tokens per watt)." width="2048" height="1152" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Inference-Iceberg-5115325_004-1-400x225.jpg 400w" sizes="auto, (max-width: 2048px) 100vw, 2048px" /></p>
<ul>
<li aria-level="1"><b>Surface-level inquiry:</b>
<ul>
<li><i><span style="font-weight: 400;">What is the cost per GPU hour?</span></i></li>
<li><i><span style="font-weight: 400;">What are the peak petaflops and high-bandwidth memory capacity?</span></i></li>
<li><i><span style="font-weight: 400;">What are the FLOPS per dollar?</span></i></li>
</ul>
</li>
<li aria-level="1"><b>In-depth cost analysis:</b>
<ul>
<li><i><span style="font-weight: 400;">What is the </span></i><a href="https://blogs.nvidia.com/blog/inference-open-source-models-blackwell-reduce-cost-per-token/"><i><span style="font-weight: 400;">cost per million tokens</span></i></a><i><span style="font-weight: 400;">? Specifically, what is the cost per million tokens for large-scale mixture-of-experts (MoE) reasoning models, which represent the most widely deployed type of AI models?</span></i></li>
<li><i><span style="font-weight: 400;">What is the delivered </span></i><a target="_blank" href="https://developer.nvidia.com/blog/scaling-token-factory-revenue-and-ai-efficiency-by-maximizing-performance-per-watt/"><i><span style="font-weight: 400;">token output per megawatt</span></i></a><i><span style="font-weight: 400;">? For on-premises deployments especially, where capital commitment to land, power and infrastructure is substantial, maximizing intelligence produced per megawatt is critical.</span></i></li>
<li><i><span style="font-weight: 400;">Can the </span></i><a href="https://blogs.nvidia.com/blog/mixture-of-experts-frontier-models/"><i><span style="font-weight: 400;">scale-up interconnect handle the “all-to-all” traffic of MoE</span></i></a><i><span style="font-weight: 400;"> models?</span></i></li>
<li><i><span style="font-weight: 400;">Is </span></i><a target="_blank" href="https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/"><i><span style="font-weight: 400;">FP4 precision supported</span></i></a><i><span style="font-weight: 400;">? Can the inference stack make use of FP4 while maintaining high accuracy?</span></i></li>
<li><i><span style="font-weight: 400;">Does the inference runtime support </span></i><a target="_blank" href="https://developer.nvidia.com/blog/an-introduction-to-speculative-decoding-for-reducing-latency-in-ai-inference/"><i><span style="font-weight: 400;">speculative decoding or multi-token prediction</span></i></a><i><span style="font-weight: 400;"> to increase user interactivity?</span></i></li>
<li><i><span style="font-weight: 400;">Does the serving layer support </span></i><a target="_blank" href="https://developer.nvidia.com/blog/nvidia-dynamo-1-production-ready/"><i><span style="font-weight: 400;">disaggregated serving, KV-aware routing, KV-cache offloading</span></i></a><i><span style="font-weight: 400;"> and other optimizations?</span></i></li>
<li><i><span style="font-weight: 400;">Does the platform support the unique workload requirements of agentic AI — including ultralow latency, high throughput and large input sequence lengths?</span></i></li>
<li><i><span style="font-weight: 400;">Does the platform support the full lifecycle, from training and post-training to high-scale inference, across all model architectures, to ensure infrastructure fungibility and high utilization?</span></i></li>
</ul>
</li>
</ul>
<p><span style="font-weight: 400;">Every one of these algorithmic, hardware and software optimizations must be active and integrated, or the denominator collapses. A “cheaper” GPU that delivers significantly fewer tokens per second results in a much higher cost per token. AI infrastructure that gets it right across the full stack ensures that every optimization enhances the others.</span></p>
<h2><b>Why Does Cost per Token Matter Much More Than FLOPS per Dollar?</b></h2>
<p><span style="font-weight: 400;">The following data for the DeepSeek-R1 AI model demonstrates the difference between theoretical and actual business outcomes.</span></p>
<p><span style="font-weight: 400;">Looking at compute cost alone, the NVIDIA Blackwell platform appears to cost roughly 2x more than NVIDIA Hopper — but compute cost says nothing about the output that investment buys. An analysis of mere FLOPS per dollar suggests a 2x NVIDIA Blackwell advantage compared with the NVIDIA Hopper architecture. However, the actual outcome is orders of magnitude different: Blackwell delivers more than 50x greater token output per watt than Hopper, resulting in nearly 35x lower cost per million tokens. </span></p>

<table id="tablepress-36" class="tablepress tablepress-id-36">
<thead>
<tr class="row-1">
	<th class="column-1"><strong>Metric</strong></th><th class="column-2"><strong>NVIDIA Hopper (HGX H200) </strong></th><th class="column-3"><strong>NVIDIA Blackwell (GB300 NVL72) </strong></th><th class="column-4"><strong>NVIDIA Blackwell Relative to Hopper</strong></th>
</tr>
</thead>
<tbody class="row-striping row-hover">
<tr class="row-2">
	<td class="column-1">Cost per GPU per Hour ($)</td><td class="column-2">$1.41 </td><td class="column-3">$2.65 </td><td class="column-4">2x</td>
</tr>
<tr class="row-3">
	<td class="column-1">FLOPS per Dollar (PFLOPS)</td><td class="column-2">2.8</td><td class="column-3">5.6</td><td class="column-4">2x</td>
</tr>
<tr class="row-4">
	<td class="column-1">Tokens per Second per GPU</td><td class="column-2">90</td><td class="column-3">6,000</td><td class="column-4"><strong>65x</strong></td>
</tr>
<tr class="row-5">
	<td class="column-1">Tokens per Second per MW</td><td class="column-2">54K</td><td class="column-3">2.8M</td><td class="column-4"><strong>50x</strong></td>
</tr>
<tr class="row-6">
	<td class="column-1">Cost per Million Tokens ($)</td><td class="column-2">$4.20 </td><td class="column-3">$0.12 </td><td class="column-4"><strong>35x lower</strong></td>
</tr>
</tbody>
</table>
<!-- #tablepress-36 from cache -->
<p><i><span style="font-weight: 400;">Note: Data is sourced from NVIDIA analysis and the </span></i><a target="_blank" href="https://inferencex.semianalysis.com/inference"><i><span style="font-weight: 400;">SemiAnalysis InferenceX v2</span></i></a><i><span style="font-weight: 400;"> benchmark. </span></i></p>
<p><span style="font-weight: 400;">This divergence shows that NVIDIA Blackwell delivers a leap in business value over the earlier Hopper generation that far outpaces any increase in system cost.</span></p>
<h2><b>How to Choose the Right AI Infrastructure</b></h2>
<p><span style="font-weight: 400;">Comparing AI infrastructure based on compute cost or theoretical FLOPS per dollar isn’t just insufficient; it doesn’t provide an accurate representation of inference economics. As the data demonstrates, an accurate evaluation of AI infrastructure’s revenue potential and profitability requires a shift from input metrics to cost per token and delivered token output.</span></p>
<p><span style="font-weight: 400;">NVIDIA delivers the industry’s lowest token cost and highest token throughput through </span><a href="https://blogs.nvidia.com/blog/blackwell-ai-inference/"><span style="font-weight: 400;">extreme codesign</span></a><span style="font-weight: 400;"> across compute, networking, memory, storage, software and partner technologies. Moreover, the constant optimization of open source inference software such as vLLM, SGLang, NVIDIA TensorRT-LLM and NVIDIA Dynamo built on the NVIDIA platform means that on existing NVIDIA infrastructure, token output continues to increase and the cost per token continues to decline long after it’s acquired.</span></p>
<p><span style="font-weight: 400;">Leading cloud providers and NVIDIA cloud partners are already delivering this advantage at scale. Partners such as</span><span style="font-weight: 400;"> <a target="_blank" href="https://x.com/NVIDIADC/status/2044514332508082515?s=20">CoreWeave</a>, <a target="_blank" href="https://x.com/NVIDIADC/status/2044514334437437687?s=20">Nebius</a>, <a target="_blank" href="https://x.com/NVIDIADC/status/2044514336303890477?s=20">Nscale</a> and <a target="_blank" href="https://x.com/NVIDIADC/status/2044514338132709762?s=20">Together AI</a> </span><span style="font-weight: 400;">have </span><a target="_blank" href="https://www.youtube.com/watch?v=jw_o0xr8MWU&amp;t=3982s"><span style="font-weight: 400;">deployed NVIDIA Blackwell infrastructure</span></a><span style="font-weight: 400;"> and optimized their stacks to bring enterprises the lowest token cost available today, with the full benefit of NVIDIA’s hardware, software and ecosystem codesign behind every interaction served.</span></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-blogheader-token-1920x1080-5144600_HEADLINE.jpg.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/inference-blogheader-token-1920x1080-5144600_HEADLINE.jpg-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Rethinking AI TCO: Why Cost per Token Is the Only Metric That Matters]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>New Adobe Premiere Color Grading Mode Accelerated on NVIDIA GPUs</title>
		<link>https://blogs.nvidia.com/blog/rtx-ai-garage-nab-adobe-premiere-color-mode/</link>
		
		<dc:creator><![CDATA[Joel Pennington]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 13:00:38 +0000</pubDate>
				<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[Art]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[NVIDIA RTX]]></category>
		<category><![CDATA[NVIDIA Studio]]></category>
		<category><![CDATA[RTX AI Garage]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92302</guid>

					<description><![CDATA[The NAB Show 2026 trade show, running April 18-22 in Las Vegas, is set to showcase a wave of new features and optimizations for top video editing applications. Bringing together over 60,000 content professionals from across the broadcast and media and entertainment industries, the event highlights how video editors, livestreamers and professional creators are exploring [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">The </span><a target="_blank" href="https://www.nvidia.com/en-us/events/nab/"><span style="font-weight: 400;">NAB Show 2026 trade show</span></a><span style="font-weight: 400;">, running April 18-22 in Las Vegas, is set to showcase a wave of new features and optimizations for top video editing applications. Bringing together over 60,000 content professionals from across the broadcast and media and entertainment industries, the event highlights how video editors, livestreamers and professional creators are exploring new tools, accelerated by </span><a target="_blank" href="https://www.nvidia.com/en-us/technologies/rtx/"><span style="font-weight: 400;">NVIDIA RTX</span></a><span style="font-weight: 400;"> technology, to enhance and streamline their creative workflows.</span></p>
<p><span style="font-weight: 400;">At the show, Adobe is announcing a new Adobe Premiere Color Mode in beta. </span></p>
<p><span style="font-weight: 400;">Designed to function as a dedicated grading environment nested directly within Premiere, it offers a clean, responsive interface that lets editors stay in their creative flow rather than relying on external tools for color correction. Tapping into GPU acceleration on NVIDIA GeForce RTX- and NVIDIA RTX PRO-equipped systems, this streamlined workflow, operating in 32-bit color depth for the first time, delivers significantly faster performance and higher image quality.</span></p>
<p><span style="font-weight: 400;">NVIDIA also launched a new update to </span><a target="_blank" href="https://www.nvidia.com/en-us/software/nvidia-app/g-assist/"><span style="font-weight: 400;">NVIDIA Project G-Assist</span></a><span style="font-weight: 400;"> — an experimental AI assistant that helps tune, control and optimize GeForce RTX systems. </span></p>
<h2><b>Color Meets Compute</b></h2>
<p><span style="font-weight: 400;">Premiere’s Color Mode is a clean, responsive new interface within Adobe Premiere that lets editors color grade native video footage. Every element is designed to guide editors through the grading process without distractions. A large program monitor anchors the experience, providing immediate visual feedback as adjustments are made, enabling faster decision-making and more precise control.</span></p>
<p><span style="font-weight: 400;">A clip grid view allows editors to visualize progression across shots in a sequence. This makes it easier to maintain consistency across scenes and ensure a cohesive look throughout a project. </span></p>
<p><span style="font-weight: 400;">Controls are organized into focused modules, each tailored to a specific aspect of color grading. Multiple modules can be active simultaneously, giving editors flexibility while maintaining clarity. Each control features a unique heads-up display (HUD), providing contextual guidance without cluttering the interface.</span></p>
<p><span style="font-weight: 400;">Color grading is one of the most computationally intensive tasks in post-production. Every adjustment — bidirectional controls, multi-zone tonal shaping and stacked color operations — runs on NVIDIA GPUs, accelerating playback, iteration and visual feedback.</span></p>
<p><span style="font-weight: 400;">Editors can work with up to six luminance adjustment zones, moving beyond traditional highlights, midtones and shadows models. This allows for more nuanced tonal control and finer adjustments across the image. </span></p>
<p><span style="font-weight: 400;">Visual scopes are context-aware, dynamically adapting based on the selected tool. HUD overlays provide visual cues directly within the scopes, helping editors understand how their adjustments affect the image without needing to interpret complex visual scopes and graphs.</span></p>
<p><span style="font-weight: 400;">The entire system now operates in 32-bit precision, delivering maximum color fidelity and preventing unwanted clipping. Editors retain full control, with the ability to clip colors intentionally when needed for creative effect. Color styles can also be applied flexibly, at the sequence, clip, reel or custom group level, making it easier to manage looks across complex projects.</span></p>
<p><span style="font-weight: 400;">Download the </span><a target="_blank" href="https://www.adobe.com/products/premiere/campaign/pricing.html?sdid=SYBNLTYL&amp;mv=search&amp;mv2=paidsearch&amp;ef_id=Cj0KCQjws83OBhD4ARIsACblj19yyhxANqzUgnYW2DOCc0zN_bGPAU8_vEQJXJcdmj6SJJStbezgZEgaAvwQEALw_wcB:G:s&amp;s_kwcid=AL!3085!3!700626653693!e!!g!!adobe%20premiere!1712852043!83993219448&amp;mv=search&amp;gad_source=1&amp;gad_campaignid=1712852043&amp;gbraid=0AAAAAD5r4AzuOqYOtXJ3DYPgL3P-QeEap&amp;gclid=Cj0KCQjws83OBhD4ARIsACblj19yyhxANqzUgnYW2DOCc0zN_bGPAU8_vEQJXJcdmj6SJJStbezgZEgaAvwQEALw_wcB"><span style="font-weight: 400;">Adobe Premiere</span></a><span style="font-weight: 400;"> (beta) to get started with Color Mode. </span></p>
<h2><b>Project G-Assist: Enhanced Recommendations and Controls </b></h2>
<p><span style="font-weight: 400;">The NVIDIA Project G-Assist on-device AI assistant helps users get the most out of their hardware. Today’s update adds an advanced detection system for gaming settings, as well as an enhanced knowledge system, enabling G-Assist to deliver higher accuracy when providing advice or adjusting settings for esports and AAA gaming.</span></p>
<p><span style="font-weight: 400;">The assistant can also now control more settings across systems. It can configure advanced RTX features from the NVIDIA App, including NVIDIA DLSS Overrides, Smooth Motion, RTX HDR, Digital Vibrance and encoder settings.</span></p>
<p><span style="font-weight: 400;">Download Project G-Assist v0.2.1 from the </span><a target="_blank" href="https://www.nvidia.com/en-eu/software/nvidia-app/"><span style="font-weight: 400;">NVIDIA App</span></a><span style="font-weight: 400;">.</span></p>
<h2><b>#ICYMI: The Latest Updates for RTX AI PCs</b></h2>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4f9.png" alt="📹" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Learn how visual effects shop </span><a target="_blank" href="https://www.youtube.com/watch?v=3Ploi723hg4"><span style="font-weight: 400;">Corridor Crew’s</span></a><span style="font-weight: 400;"> Niko Pueringer built his own </span><a target="_blank" href="https://github.com/nikopueringer/CorridorKey"><span style="font-weight: 400;">green screen key tool,</span></a><span style="font-weight: 400;"> powered by NVIDIA RTX GPUs, at NAB. Stop by the Puget Systems booth on Monday, April 20, at 1 p.m. PT for a special presentation, or tune in on </span><a target="_blank" href="https://www.youtube.com/@NVIDIA-Studio"><span style="font-weight: 400;">NVIDIA Studio’s YouTube channel</span></a><span style="font-weight: 400;"> on Tuesday, April 21, at 12 p.m. PT to watch the full session.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f5bc.png" alt="🖼" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Also at NAB, join NVIDIA’s Sabour Amirazodi for a special presentation at the ASUS booth on Tuesday, April 21, at 11 a.m. PT. Amirazodi will showcase how guiding generative AI can produce creative outputs like storyboards or entire movie trailers — based on a single image input. </span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4fd.png" alt="📽" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Check out content creator </span><a target="_blank" href="https://www.youtube.com/@GavinHerman"><span style="font-weight: 400;">Gavin Herman’s</span></a><span style="font-weight: 400;"> Studio Session, “How to Edit Professional Talking Head Videos in DaVinci Resolve,” on the </span><a target="_blank" href="https://youtu.be/-LYs3h2RyeU?si=4vJLDP9ROv8IvDkt"><span style="font-weight: 400;">NVIDIA Studio YouTube channel</span></a><span style="font-weight: 400;">. Generative workflow specialists can watch this </span><a target="_blank" href="https://www.nvidia.com/en-us/on-demand/session/gtc26-dlit81948/"><span style="font-weight: 400;">two-hour, instructor-led workshop</span></a><span style="font-weight: 400;"> on how to use NVIDIA GPU acceleration for ComfyUI.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f99e.png" alt="🦞" class="wp-smiley" style="height: 1em; max-height: 1em;" /> LM Studio is now an official OpenClaw provider. OpenClaw can now run local models through LM Studio on NVIDIA GPUs, unlocking faster on-device performance.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f9a5.png" alt="🦥" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Unsloth and NVIDIA have teamed up to eliminate hidden bottlenecks that slow down fine-tuning on NVIDIA GPUs, improving fine-tuning performance by 15%. </span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2728.png" alt="✨" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Google’s Gemma 4 family of omni-capable models are built for local AI across a wide range of devices. Google and NVIDIA have optimized Gemma 4 for NVIDIA GPUs, enabling efficient performance on NVIDIA RTX-powered PCs and workstations, NVIDIA DGX Spark personal AI supercomputers and NVIDIA Jetson Orin Nano edge AI modules.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4fd.png" alt="📽" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Check out this </span><a target="_blank" href="https://www.youtube.com/watch?v=RPGaakGQ6bg"><span style="font-weight: 400;">NVIDIA GTC session</span></a><span style="font-weight: 400;"> on how developers can build, run and optimize AI agents locally on NVIDIA GPUs, covering everything from quantization to backends like Ollama and applications like OpenClaw and ComfyUI.</span></p>
<p><span style="font-weight: 400;"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f440.png" alt="👀" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Wondershare Filmora has added a new Eye Contact Correction feature based on the NVIDIA Broadcast Eye Contact effect. Running in the cloud on NVIDIA GPUs, it refines the gaze of subjects in post-production for a more natural, confident and camera-ready look, delivering polished, professional videos in seconds.</span></p>
<figure id="attachment_92306" aria-describedby="caption-attachment-92306" style="width: 1200px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora.png"><img loading="lazy" decoding="async" class="size-large wp-image-92306" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-1680x945.png" alt="" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-1680x945.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-960x540.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-1280x720.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-1536x864.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-1290x725.png 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-630x354.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-300x169.png 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora-400x225.png 400w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Filmora.png 1920w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></a><figcaption id="caption-attachment-92306" class="wp-caption-text"><em>Filmora’s AI Eye Contact Correction feature powered in the cloud by NVIDIA GPUs.</em></figcaption></figure>
<p><i><span style="font-weight: 400;">Plug in to NVIDIA AI PC on </span></i><a target="_blank" href="https://www.facebook.com/NVIDIA.AI.PC/"><i><span style="font-weight: 400;">Facebook</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://www.instagram.com/nvidia.ai.pc/"><i><span style="font-weight: 400;">Instagram</span></i></a><i><span style="font-weight: 400;">, </span></i><a target="_blank" href="https://www.tiktok.com/@nvidia_ai_pc"><i><span style="font-weight: 400;">TikTok</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://x.com/NVIDIA_AI_PC"><i><span style="font-weight: 400;">X</span></i></a><i><span style="font-weight: 400;"> — and stay informed by subscribing to the </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-on-rtx/?modal=subscribe-ai"><i><span style="font-weight: 400;">RTX AI PC newsletter</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
<p><i><span style="font-weight: 400;">Follow NVIDIA Workstation on </span></i><a target="_blank" href="https://www.linkedin.com/showcase/3761136/"><i><span style="font-weight: 400;">LinkedIn</span></i></a><i><span style="font-weight: 400;"> and </span></i><a target="_blank" href="https://x.com/NVIDIAworkstatn"><i><span style="font-weight: 400;">X</span></i></a><i><span style="font-weight: 400;">. </span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nab-adobe-color-mode-nv-blog-1280x680-1.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nab-adobe-color-mode-nv-blog-1280x680-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[New Adobe Premiere Color Grading Mode Accelerated on NVIDIA GPUs]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>National Robotics Week — Latest Physical AI Research, Breakthroughs and Resources</title>
		<link>https://blogs.nvidia.com/blog/national-robotics-week-2026/</link>
		
		<dc:creator><![CDATA[NVIDIA Writers]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 19:40:59 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Computer Vision]]></category>
		<category><![CDATA[Inception]]></category>
		<category><![CDATA[Physical AI]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<category><![CDATA[Synthetic Data Generation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92122</guid>

					<description><![CDATA[This National Robotics Week, NVIDIA is highlighting the breakthroughs that are bringing AI into the physical world — as well as the growing wave of robots transforming industries, from agriculture and manufacturing to energy and beyond. Advancements in robot learning, simulation and foundation models are accelerating development, enabling robots to move from training in virtual [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">This </span><a target="_blank" href="https://www.nationalroboticsweek.org/"><span style="font-weight: 400;">National Robotics Week</span></a><span style="font-weight: 400;">, NVIDIA is highlighting the breakthroughs that are bringing AI into the physical world — as well as the growing wave of robots transforming industries, from agriculture and manufacturing to energy and beyond.</span></p>
<p><span style="font-weight: 400;">Advancements in robot learning, simulation and foundation models are accelerating development, enabling robots to move from training in virtual environments to real-world deployment faster than ever.</span></p>
<p><span style="font-weight: 400;">With NVIDIA platforms for </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/robotics-simulation/"><span style="font-weight: 400;">simulation</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/synthetic-data-physical-ai/"><span style="font-weight: 400;">synthetic data</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/robot-learning/"><span style="font-weight: 400;">AI-powered robot learning</span></a><span style="font-weight: 400;">, developers now have the tools to build machines that can perceive, reason and act in complex environments.</span></p>
<h2 id="gtc" class="wp-block-heading" style="font-size: 24px;"><b>Building the Next Generation of AI Robots <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#gtc"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">At </span><a target="_blank" href="https://www.nvidia.com/gtc/"><span style="font-weight: 400;">NVIDIA GTC</span></a><span style="font-weight: 400;"> last month, </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-and-global-robotics-leaders-take-physical-ai-to-the-real-world"><span style="font-weight: 400;">a new wave of technologies was introduced</span></a><span style="font-weight: 400;"> to accelerate the development of AI-powered robots.</span></p>
<p><span style="font-weight: 400;">At the core is a full-stack, cloud-to-robot workflow that connects simulation, robot learning and edge computing — making it faster to build, train and deploy intelligent machines.</span></p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-1" width="1200" height="675" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4?_=1" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
Key announcements include:</span></p>
<ul>
<li><span style="font-weight: 400;">New </span><a target="_blank" href="https://developer.nvidia.com/isaac/gr00t"><span style="font-weight: 400;">NVIDIA Isaac GR00T open models</span></a><span style="font-weight: 400;"> enable robots to understand natural language instructions and perform complex, multistep tasks using vision-language-action (VLA) reasoning.</span></li>
<li><span style="font-weight: 400;">New </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400;">NVIDIA Cosmos world models</span></a><span style="font-weight: 400;"> for generating synthetic data and training robots at scale help systems learn more efficiently and generalize across environments.</span></li>
<li><span style="font-weight: 400;">The </span><a target="_blank" href="https://developer.nvidia.com/blog/newton-adds-contact-rich-manipulation-and-locomotion-capabilities-for-industrial-robotics/"><span style="font-weight: 400;">general availability</span></a><span style="font-weight: 400;"> of open source physics engine </span><a target="_blank" href="https://developer.nvidia.com/newton-physics"><span style="font-weight: 400;">Newton 1.0</span></a><span style="font-weight: 400;"> provides a fast and reliable foundation for dexterous robot manipulation with accurate collision detection, realistic object contact and stable simulation of complex systems with both rigid and flexible parts.</span></li>
<li><span style="font-weight: 400;">The general availability of </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim 6.0</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://developer.nvidia.com/isaac/lab"><span style="font-weight: 400;">Isaac Lab 3.0</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://docs.nvidia.com/nurec/"><span style="font-weight: 400;">Omniverse NuRec technologies</span></a><span style="font-weight: 400;"> expands simulation capabilities, allowing developers to model real-world scenarios and validate robotic systems before deployment.</span></li>
</ul>
<p><span style="font-weight: 400;">Watch </span><a target="_blank" href="https://www.nvidia.com/en-us/on-demand/search/?facet.event_name[]=GTC%20San%20Jose&amp;facet.event_year[]=2026&amp;facet.mimetype[]=event%20session&amp;headerText=All%20Sessions&amp;layout=list&amp;ncid=no-ncid&amp;page=1&amp;q=-&amp;regcode=no-ncid&amp;sort=relevance&amp;sortDir=desc"><span style="font-weight: 400;">on-demand sessions</span></a><span style="font-weight: 400;"> from the </span><a target="_blank" href="https://www.nvidia.com/gtc/"><span style="font-weight: 400;">NVIDIA GTC</span></a><span style="font-weight: 400;"> global AI conference to catch up on recent breakthroughs in robotics, showcased by leading experts in the field.</span></p>
<h2 id="peritas" class="wp-block-heading" style="font-size: 24px;"><b>Driving Breakthroughs in Surgical Precision <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#peritas"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">PeritasAI is advancing a new generation of surgical robotics by integrating <a target="_blank" href="https://www.nvidia.com/en-us/glossary/generative-physical-ai/">physical AI</a> into real-world operating environments. Using </span><a target="_blank" href="https://isaac-for-healthcare.github.io/"><span style="font-weight: 400;">NVIDIA Isaac for Healthcare</span></a><span style="font-weight: 400;"> and the </span><a target="_blank" href="https://isaac-for-healthcare.github.io/workflows/rheo/"><span style="font-weight: 400;">Rheo</span></a><span style="font-weight: 400;"> blueprint for hospital automation, the company is developing multi-agent intelligence that can sense, coordinate and act in real time.</span></p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1.png"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-92286" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1.png" alt="" width="1024" height="576" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1.png 1024w, https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1-960x540.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1-630x354.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1-300x169.png 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1-400x225.png 400w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p><span style="font-weight: 400;">In collaboration with Lightwheel and Advent Health Hospitals, this work brings embodied intelligence into the operating room — supporting surgical teams with situational awareness, sterile coordination and intelligent management of instruments, implants and workflows.</span></p>
<h2 id="isaac-sim" class="wp-block-heading" style="font-size: 24px;"><b>From Words to Motion: NVIDIA NemoClaw Brings Natural Language Commands to Isaac Sim <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#isaac-sim"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">NVIDIA Omniverse developer Umang Chudasama has </span><a target="_blank" href="https://www.linkedin.com/posts/umang-chudasama_robotics-nvidia-isaacsim-ugcPost-7446513116416487424-TqQq?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAAAn6OHUBER_-OWSbHjZyVn985_NH2TCiwtI"><span style="font-weight: 400;">integrated</span></a><span style="font-weight: 400;"> NVIDIA NemoClaw with </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> to navigate a Nova Carter autonomous robot using plain natural language commands — no manual coding required. NemoClaw translates text instructions (like “move two meters forward”) into executable Python scripts, which are then sent to Isaac Sim via a custom REST application programming interface in real time.</span></p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-2" width="1200" height="675" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4?_=2" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
The entire system runs within Isaac Sim, giving the robot a realistic, physics-accurate warehouse environment to operate in before ever touching the real world. Pairing Isaac Sim with NemoClaw means faster development, safer testing and a smarter path to deployment. Rather than programming robots line by line, developers can now simply talk to them, marking a meaningful shift toward truly collaborative, language-driven robotics.</span></p>
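<p><span style="font-weight: 400;">The pipeline above can be sketched in miniature. The following is an illustrative Python sketch only: the command grammar, the generated script, and the REST endpoint URL are all assumptions for the example, not the actual NemoClaw or Isaac Sim interfaces (for simplicity, distances are written as numerals, e.g. “move 2 meters forward”).</span></p>

```python
import json
import re
import urllib.request


def command_to_script(command: str) -> str:
    """Translate a simple natural-language move command into a Python snippet.

    Only "move <number> meter(s) forward/backward" is handled in this sketch.
    """
    m = re.search(r"move\s+(\d+(?:\.\d+)?)\s+meters?\s+(forward|backward)",
                  command.lower())
    if not m:
        raise ValueError(f"unsupported command: {command!r}")
    distance = float(m.group(1))
    if m.group(2) == "backward":
        distance = -distance
    # Hypothetical robot API call; a real integration would emit whatever
    # script the simulator-side executor expects.
    return f"robot.move_along_heading(distance={distance})"


def send_to_sim(script: str, url: str = "http://localhost:8000/execute") -> None:
    """POST the generated script to a (hypothetical) Isaac Sim REST endpoint."""
    payload = json.dumps({"script": script}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


print(command_to_script("move 2 meters forward"))
```

<p><span style="font-weight: 400;">The key design point is the separation of concerns: the language model only has to emit a short script, while the simulator-side endpoint handles execution, which keeps the robot control loop deterministic and auditable.</span></p>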
<h2 id="oceansim" class="wp-block-heading" style="font-size: 24px;"><b>OceanSim: A GPU-Accelerated Underwater Robot Perception Simulation Framework <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#oceansim"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">Underwater simulators are crucial for developing reliable perception systems, but they still struggle with accurate physics‑based sensor modeling and fast rendering. </span></p>
<p><span style="font-weight: 400;">Helping close this gap is </span><a target="_blank" href="https://umfieldrobotics.github.io/OceanSim/"><span style="font-weight: 400;">OceanSim</span></a><span style="font-weight: 400;">, a GPU‑accelerated, high‑fidelity simulator developed by researchers at the University of Michigan. It uses advanced physics‑based rendering techniques to make synthetic underwater images look more realistic. Using GPUs, the simulator can render imaging sonar in real time and generate synthetic data quickly. </span></p>
<p><iframe loading="lazy" title="OceanSim: A GPU-Accelerated Underwater Robot Perception Simulation Framework" width="1200" height="675" src="https://www.youtube.com/embed/2_1MYjeZ9lY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400;">OceanSim uses </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> and plugs into </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse</span></a><span style="font-weight: 400;"> libraries, creating a seamless link between robot‑learning research and underwater robotics. This integration lets developers easily build and deploy embodied AI techniques for underwater applications.</span></p>
<h2 id="robolab" class="wp-block-heading" style="font-size: 24px;"><b>RoboLab: Benchmarking the Next Generation of Generalist Robots <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#robolab"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><a target="_blank" href="https://github.com/NVLabs/RoboLab"><span style="font-weight: 400;">RoboLab</span></a><span style="font-weight: 400;"> is a high-fidelity simulation benchmark for developing and evaluating generalist robot policies — powering systems designed to perform diverse tasks across environments.</span></p>
<div style="width: 864px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-3" width="864" height="480" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4?_=3" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
Built on </span><a target="_blank" href="https://developer.nvidia.com/isaac"><span style="font-weight: 400;">NVIDIA Isaac</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse</span></a><span style="font-weight: 400;"> simulation technologies, RoboLab taps into photorealistic environments and physics-based modeling to train and test robotic policies at scale. This enables researchers to measure how well behaviors learned in simulation transfer to the real world as tasks grow in complexity. </span></p>
<p><span style="font-weight: 400;">By combining advanced simulation with structured evaluation, RoboLab accelerates the path from virtual training to real-world deployment.</span></p>
<p><span style="font-weight: 400;">RoboLab</span><span style="font-weight: 400;"> features will be incorporated into the roadmap of </span><a target="_blank" href="https://developer.nvidia.com/isaac/lab-arena"><span style="font-weight: 400;">NVIDIA Isaac Lab-Arena</span></a><span style="font-weight: 400;">, an open source framework for large-scale policy setup and evaluation.</span></p>
<h2 id="cosmos" class="wp-block-heading" style="font-size: 24px;"><b>Smarter Palletizing With AI-Driven Reasoning <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#cosmos"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">In warehouse environments, palletizing robots typically follow fixed rules — handling boxes the same way regardless of contents, condition or fragility. A project developed by Doosan Robotics introduces a more adaptive approach using </span><a target="_blank" href="https://docs.nvidia.com/cosmos/latest/reason2/index.html"><span style="font-weight: 400;">NVIDIA Cosmos Reason</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">By analyzing a single camera image, the system can infer box contents, detect damage and adjust how each item is handled — such as placement, speed and grip — based on estimated weight and fragility. This reduces common issues like incorrectly stacking damaged or fragile goods.</span></p>
<p><iframe loading="lazy" title="Nvidia Cosmos Cookoff - See How It Thinks: Mixed Palletizing with Explainable Visual Reasoning" width="1200" height="675" src="https://www.youtube.com/embed/4Yq0ESmKPPw?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
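<p><span style="font-weight: 400;">The kind of adaptive handling described above can be sketched as a simple mapping from per-box estimates to handling parameters. This is an illustrative sketch only: the field names, thresholds and parameter values are assumptions for the example, not Doosan’s or Cosmos Reason’s actual interface.</span></p>

```python
from dataclasses import dataclass


@dataclass
class BoxEstimate:
    """Attributes a vision model might infer from a single camera image."""
    weight_kg: float
    fragile: bool
    damaged: bool


def handling_plan(box: BoxEstimate) -> dict:
    """Pick placement, speed and grip force from the inferred box attributes."""
    if box.damaged:
        # Damaged goods are diverted instead of being stacked.
        return {"placement": "reject_bin", "speed": "slow", "grip_force": "light"}
    return {
        # Fragile or light boxes go on top of the stack; heavy ones form the base.
        "placement": "top_layer" if box.fragile or box.weight_kg < 2 else "base_layer",
        "speed": "slow" if box.fragile else "normal",
        "grip_force": "light" if box.fragile else "firm",
    }
```

<p><span style="font-weight: 400;">The contrast with rule-based palletizing is that these inputs come from visual reasoning over each individual box rather than from a fixed recipe applied to every item.</span></p>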
<p><span style="font-weight: 400;">To build robots that understand the physical world before they ever deploy in it, robotics researchers and developers are building policy models powered by </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400;">NVIDIA Cosmos world foundation models</span></a><span style="font-weight: 400;"> (WFMs). </span><a target="_blank" href="https://www.linkedin.com/posts/toyota-research-institute_researchers-from-tris-robotics-lfv-learning-activity-7439452924168073216-05qL/"><span style="font-weight: 400;">Toyota Research Institute</span></a><span style="font-weight: 400;"> customizes Cosmos WFMs for its own world model to achieve state-of-the-art results across dynamic view synthesis, teleoperation data augmentation and navigation world models.</span></p>
<div style="width: 960px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-4" width="960" height="640" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4?_=4" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4</a></video></div>
<p><a target="_blank" href="https://mimic-video.github.io/"><span style="font-weight: 400;">Mimic robotics</span></a><span style="font-weight: 400;"> takes a different angle with mimic-video, a video-action model that pairs a pretrained internet-scale video model with a flow-matching action decoder, replacing the static image-language backbones of traditional VLAs with video-learned physical dynamics — achieving 10x better sample efficiency and 2x faster convergence on real-world manipulation tasks. </span></p>
<p><span style="font-weight: 400;">Together, both teams demonstrate a fundamental shift: robots trained on world models that capture physics and causality need dramatically less real-world data to perform reliably in conditions they&#8217;ve never seen.</span></p>
<h2 id="openclaw" class="wp-block-heading" style="font-size: 24px;"><b>Open, Intelligent Robotics on NVIDIA Jetson: Community Innovations Powering the Next Wave of Physical AI <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#openclaw"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">This National Robotics Week, OpenClaw running on the </span><a target="_blank" href="http://nvidia.com/en-us/autonomous-machines/embedded-systems/"><span style="font-weight: 400;">NVIDIA Jetson</span></a><span style="font-weight: 400;"> platform showcases how quickly open source innovation is evolving into real-world, intelligent robotics. </span></p>
<p><span style="font-weight: 400;">From practical applications to innovative projects, the robotics community is building what’s next — and fast. </span></p>
<p><span style="font-weight: 400;">Developers are pushing the boundaries of autonomy — including </span><a target="_blank" href="https://www.linkedin.com/posts/marco-pastorio_isaacsim-isaacros-nvfp4-activity-7435793888109527040-ffw5?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACIoNTMBsMKQgXfIdyJvm7NsaP70ieqO9Tc"><span style="font-weight: 400;">hardware-in-the-loop testing</span></a><span style="font-weight: 400;"> powered by </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-thor/"><span style="font-weight: 400;">Jetson Thor</span></a><span style="font-weight: 400;">, evaluating camera streams from </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> and even building systems that can generate their own code to complete tasks.</span></p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-5" width="1200" height="675" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4?_=5" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
In addition, OpenClaw </span><a target="_blank" href="https://www.jetson-ai-lab.com/tutorials/openclaw/"><span style="font-weight: 400;">now running entirely locally on NVIDIA Jetson Thor</span></a><span style="font-weight: 400;"> — powered by optimized </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400;">NVIDIA Nemotron</span></a><span style="font-weight: 400;"> open models and the vLLM open inference library — marks a major leap toward private, low-latency edge AI for robotics. And innovations like the </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/nemoclaw/"><span style="font-weight: 400;">NVIDIA NemoClaw</span></a><span style="font-weight: 400;"> stack on Jetson are expanding what’s possible at the intersection of open source and high-performance robotics platforms. </span></p>
<p><iframe loading="lazy" title="First Look: NemoClaw on Jetson with a Local LLM" width="1200" height="675" src="https://www.youtube.com/embed/7HQSFgP6vOE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2 id="Skyentific" class="wp-block-heading" style="font-size: 24px;"><b>Training and Refining Movement in Simulation <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#Skyentific"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">Gennady Plyushchev, a robotics creator known as Skyentific, is documenting the process of building a walking bipedal robot, from simulation and design to real-world deployment — showcasing a simulation-first approach to robot development.</span></p>
<p><iframe loading="lazy" title="From Simulation to Reality: Swiss Bipedal Robot (+ NVIDIA Jetson raffle)" width="1200" height="675" src="https://www.youtube.com/embed/67S9-MrqWJg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400;">By using </span><a target="_blank" href="https://developer.nvidia.com/isaac"><span style="font-weight: 400;">NVIDIA Isaac</span></a><span style="font-weight: 400;">-based simulation workflows alongside </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/"><span style="font-weight: 400;">NVIDIA Jetson</span></a><span style="font-weight: 400;"> for on-device AI and control, the project demonstrates how developers can rapidly iterate in virtual environments before deploying to physical systems. </span></p>
<p><span style="font-weight: 400;">The result highlights a broader shift in robotics: using AI, simulation and </span><a target="_blank" href="https://www.nvidia.com/en-us/edge-computing/"><span style="font-weight: 400;">edge computing</span></a><span style="font-weight: 400;"> to accelerate development and bring increasingly capable </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/humanoid-robots/"><span style="font-weight: 400;">humanoid robots</span></a><span style="font-weight: 400;"> to life.</span></p>
<h2 id="university-of-maryland" class="wp-block-heading" style="font-size: 24px;"><b>University of Maryland Researchers Develop Robots for Complex Household Tasks <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#university-of-maryland"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">To bring robots into everyday life, researchers at the </span><a target="_blank" href="https://www.umiacs.umd.edu/news-events/news/umd-researchers-advance-robotics-perform-complex-household-tasks"><span style="font-weight: 400;">University of Maryland</span></a><span style="font-weight: 400;">, recipients of a grant from the <a target="_blank" href="https://www.umiacs.umd.edu/news-events/news/umd-researchers-advance-robotics-perform-complex-household-tasks">NVIDIA Academic Grant Program</a>, are developing AI-powered humanoid systems capable of performing complex household tasks with greater autonomy.</span></p>
<p><span style="font-weight: 400;">The project centers on building robot foundation models that unify perception, planning and control. Using the </span><a target="_blank" href="https://developer.nvidia.com/isaac?size=n_6_n&amp;sort-field=featured&amp;sort-direction=desc"><span style="font-weight: 400;">NVIDIA Isaac</span></a><span style="font-weight: 400;"> open robotics development platform, researchers can create photorealistic, high-fidelity virtual home environments populated with diverse objects and layouts, allowing robots to practice millions of task variations and safely test rare or complex scenarios.</span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/rtx-pro-6000-family/"><span style="font-weight: 400;">NVIDIA RTX PRO 6000 Blackwell GPUs</span></a><span style="font-weight: 400;"> for training large models and </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-thor/"><span style="font-weight: 400;">NVIDIA Jetson AGX Thor</span></a><span style="font-weight: 400;"> developer kits for efficient deployment on physical robots help bridge the gap between research and real-world applications.</span></p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-scaled.jpg"><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-92225" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-1680x1122.jpg" alt="" width="1200" height="801" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-1680x1122.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-960x641.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-1280x855.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-1536x1026.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-630x421.jpg 630w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></a></p>
<p><span style="font-weight: 400;">By combining advancements in generative AI, sequential decision-making and scalable computing, the work represents a key step toward general-purpose robots that can support people in homes, healthcare settings and beyond.</span></p>
<h2 id="fellowship" class="wp-block-heading" style="font-size: 24px;"><b>Announcing the MassRobotics Fellowship <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#fellowship"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">The </span><a target="_blank" href="https://www.massrobotics.org/physical-ai-fellowship-cohort-2/"><span style="font-weight: 400;">second cohort</span></a><span style="font-weight: 400;"> of the Amazon Web Services (AWS) MassRobotics fellowship comprises startups recognized for compelling industrial use cases harnessing </span><a target="_blank" href="https://www.nvidia.com/en-us/industries/robotics/"><span style="font-weight: 400;">robotics</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/intelligent-video-analytics-platform/"><span style="font-weight: 400;">computer vision</span></a><span style="font-weight: 400;">. They will receive access to technical resources and AWS cloud credits.</span></p>
<p><span style="font-weight: 400;">The cohort includes </span><a target="_blank" href="https://www.nvidia.com/en-us/startups/"><span style="font-weight: 400;">NVIDIA Inception members</span></a><span style="font-weight: 400;"> Burro, Config Intelligence, Deltia, Haply Robotics, Luminous Robotics, Roboto AI, Telexistence, Terra Robotics and WiRobotics, each developing technologies spanning humanoid robotics, industrial automation, haptics and agricultural systems.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-92213 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Burro-Collaborative-Robots.jpg" alt="" width="1000" height="667" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Burro-Collaborative-Robots.jpg 1000w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Burro-Collaborative-Robots-960x640.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Burro-Collaborative-Robots-630x420.jpg 630w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></p>
<p><b>Burro</b><span style="font-weight: 400;"> creates autonomous agricultural robots for tasks like grape harvesting and crop scouting.</span></p>
<p><b>Config Intelligence</b><span style="font-weight: 400;"> builds data infrastructure for general-purpose bimanual robotics to enable reliable two-handed tasks in real-world settings.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-92204 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Deltia.jpeg" alt="" width="1280" height="854" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Deltia.jpeg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Deltia-960x641.jpeg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Deltia-630x420.jpeg 630w" sizes="auto, (max-width: 1280px) 100vw, 1280px" /></p>
<p><b>Deltia</b><span style="font-weight: 400;"> provides AI-driven manufacturing intelligence that optimizes assembly lines using computer vision and analytics.</span></p>
<p><b>Haply Robotics</b><span style="font-weight: 400;"> designs haptic control devices that serve as “steering wheels” for physical AI systems across industries.</span></p>
<p><b>Luminous Robotics</b><span style="font-weight: 400;"> deploys AI-powered robotic systems for fast, low-cost solar-panel installation and maintenance.</span></p>
<p><b>Roboto AI</b><span style="font-weight: 400;"> offers a data-analytics platform that accelerates robot development by managing and analyzing robotics data.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-92201 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Telexistence-2.jpg" alt="" width="960" height="510" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Telexistence-2.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Telexistence-2-630x335.jpg 630w" sizes="auto, (max-width: 960px) 100vw, 960px" /></p>
<p><b>Telexistence</b><span style="font-weight: 400;"> develops AI-powered humanoid robots and remote-controlled systems for retail and logistics. </span></p>
<p><b>Terra Robotics</b><span style="font-weight: 400;"> develops laser-weeding agricultural robots to automate sustainable farming.</span></p>
<p><b>WiRobotics</b><span style="font-weight: 400;"> creates wearable walking-assist and humanoid robots to enhance mobility and physical interaction, training its humanoids on data collected from its assistive devices.</span></p>
<h2 id="maximo" class="wp-block-heading" style="font-size: 24px;"><b>Accelerating How Utility-Scale Solar Projects Are Built in the Field <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#maximo"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><a target="_blank" href="https://maxrobotics.ai/"><span style="font-weight: 400;">Maximo</span></a><span style="font-weight: 400;">, a solar robotics business incubated within The AES Corporation, recently completed a 100-megawatt solar installation using its robot fleet. Developed with NVIDIA accelerated computing, </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse libraries</span></a><span style="font-weight: 400;"> and the </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim framework</span></a><span style="font-weight: 400;">, Maximo demonstrated that autonomous installations can operate reliably for utility-scale projects.</span></p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-6" width="1200" height="675" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4?_=6" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
The solution improves installation speed, safety and consistency, helping close the gap between the rising demand for faster time to power and available construction capacity.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-92169" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-960x540.jpg" alt="" width="960" height="540" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-400x225.jpg 400w" sizes="auto, (max-width: 960px) 100vw, 960px" /></p>
<p><span style="font-weight: 400;">As solar expansion faces ongoing labor constraints and rising demand, AI-driven field robotics systems like Maximo are helping accelerate infrastructure buildout, reduce costs and redefine how energy projects are delivered.</span></p>
<h2 id="aigen" class="wp-block-heading" style="font-size: 24px;"><b>Aigen Advances Sustainable Farming With Agricultural Robotics <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#aigen"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">To help regenerate the Earth, </span><a target="_blank" href="https://www.linkedin.com/posts/nvidia-for-startups_aigen-regenerating-earth-with-robotics-and-activity-7444752936771170304-UMpl/"><span style="font-weight: 400;">Aigen’s solar-powered autonomous robots</span></a><span style="font-weight: 400;"> are breaking farmers’ dependency on chemicals through precision weed control powered by vision AI.</span></p>
<p><span style="font-weight: 400;">The <a target="_blank" href="https://www.nvidia.com/en-us/startups/">NVIDIA Inception</a> startup is building a new kind of farming system that’s powered by clean energy and continuously enriched by data. Aigen’s fleet of solar-driven rovers uses advanced computer vision to identify and remove weeds, dramatically reducing the need for herbicides.</span></p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Copy-of-Synthetic_Data.gif"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-92139" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Copy-of-Synthetic_Data.gif" alt="" width="1280" height="720" /></a></p>
<p><span style="font-weight: 400;">Farming has no standard environment. Every field is different — different crops, different soil, different equipment, weeds, growth stages and geographies. That fragmentation makes real-world data collection slow, expensive and inconsistent. By post-training </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400;">NVIDIA Cosmos open world foundation models</span></a><span style="font-weight: 400;"> on its specialized data and harnessing </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> pipelines, Aigen is building a system that generalizes across millions of agricultural scenarios.</span></p>
<p><span style="font-weight: 400;">On the ground, each rover runs inference using an </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/"><span style="font-weight: 400;">NVIDIA Jetson Orin</span></a><span style="font-weight: 400;"> edge AI module to distinguish crops from weeds in real time.</span></p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-7" width="1200" height="673" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4?_=7" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
Using these rovers, farmers can grow crops more sustainably and profitably, using regenerative practices that heal the land and foster ecological balance.</span></p>
]]></content:encoded>
					
		
		<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4" length="4713492" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4" length="18590393" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4" length="12405774" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4" length="2338942" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4" length="824207" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4" length="13912313" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4" length="17798316" type="video/mp4" />

				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/robotics-tech-blog-nrw-rolling-blog-1280x680-1.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/robotics-tech-blog-nrw-rolling-blog-1280x680-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[National Robotics Week — Latest Physical AI Research, Breakthroughs and Resources]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Strength and Destiny Collide: ‘Samson: A Tyndalston Story’ Arrives in the Cloud</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-samson-a-tyndalston-story/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 13:00:57 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92248</guid>

					<description><![CDATA[A timeless story of grit, faith and rebellion takes center stage as Samson: A Tyndalston Story joins the GeForce NOW library today.  The highly anticipated release from Liquid Swords can now be streamed on nearly any device with GeForce NOW bringing cinematic intensity and mythic storytelling to the cloud. Catch it as part of four [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400">A timeless story of grit, faith and rebellion takes center stage as </span><i><span style="font-weight: 400">Samson: A Tyndalston Story</span></i><span style="font-weight: 400"> joins the </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/"><span style="font-weight: 400">GeForce NOW</span></a><span style="font-weight: 400"> library today. </span></p>
<p><span style="font-weight: 400">The highly anticipated release from Liquid Swords can now be streamed on nearly any device with GeForce NOW, bringing cinematic intensity and mythic storytelling to the cloud.</span></p>
<p><span style="font-weight: 400">Catch it as part of </span><span style="font-weight: 400">four </span><span style="font-weight: 400">new games in the cloud this week.</span></p>
<h2><b>Stream the Power</b></h2>
<figure id="attachment_92256" aria-describedby="caption-attachment-92256" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92256" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1680x945.jpg" alt="Samson on GeForce NOW" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92256" class="wp-caption-text"><em>A new legend rises.</em></figcaption></figure>
<p><span style="font-weight: 400">Tyndalston is a city built on debt, muscle and memory. </span><i><span style="font-weight: 400">Samson: A Tyndalston Story</span></i><span style="font-weight: 400"> from Liquid Swords follows Samson, a former enforcer pulled back to the streets that made him. Violence is currency as every fight is personal, every hit carries history and every escape feels earned in a city that never forgives.</span></p>
<p><span style="font-weight: 400">Gameplay blends cinematic melee action with choice-driven narrative progression. Every confrontation — from shadowed alley brawls to large-scale set pieces — feels purposeful, reflecting Samson’s internal struggle between vengeance and redemption. Brawls hit fast and close. Cars aren’t set pieces — they’re weapons. Momentum and terrain decide if the player walks away or falls harder. Every job, debt and decision cuts toward freedom or collapse.</span></p>
<p><span style="font-weight: 400">The game takes full advantage of ray-traced global illumination, reflections and shadows, creating a city that feels cinematic and alive. </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/dlss/"><span style="font-weight: 400">NVIDIA DLSS</span></a><span style="font-weight: 400"> 3.5 boosts performance, while </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/reflex/"><span style="font-weight: 400">NVIDIA Reflex</span></a><span style="font-weight: 400"> technology cuts down latency to keep controls razor-sharp during split-second fights. With GeForce NOW, the experience streams instantly at maximum fidelity, even without the latest hardware. No waiting around for downloads or worrying about system specs; just dive straight into the grit and glow of Tyndalston.</span></p>
<h2><b>Celebrate New Games</b></h2>
<figure id="attachment_92253" aria-describedby="caption-attachment-92253" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-92253 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-1680x840.jpg" alt="No arms, no problem." width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92253" class="wp-caption-text"><em>No arms, no problem.</em></figcaption></figure>
<p><span style="font-weight: 400">Celebrate three decades of Rayman with the definitive edition of the platforming classic in </span><i><span style="font-weight: 400">Rayman 30th Anniversary Edition</span></i><span style="font-weight: 400">, featuring five versions from iconic consoles, over 120 additional levels and an exclusive documentary that explores the creation of the limbless hero. Stream it on GeForce NOW without having to wait around for downloads or updates. </span></p>
<p><span style="font-weight: 400">In addition, members can look for the following:</span><i></i></p>
<ul>
<li><i><span style="font-weight: 400">Samson </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3634520?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 8, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Morbid Metal</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/1866130?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 8, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">DayZ</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://www.xbox.com/games/store/dayz/bsr9nlhvf1kl?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass, April 9) </span></li>
<li><i><span style="font-weight: 400">Rayman 30th Anniversary Edition</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/4094670?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://store.ubi.com/69683af797044c480eb79e03.html?ucid=AFL-ID_152062&amp;maltcode=geforcenow_convst_AFL_geforcenow_vg__STORE____&amp;addinfo="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
</ul>
<p><span style="font-weight: 400">One more GeForce RTX 5080-ready game is available this week, in addition to </span><i><span style="font-weight: 400">Samson </span></i><span style="font-weight: 400">and </span><i><span style="font-weight: 400">Morbid Metal</span></i><span style="font-weight: 400">:</span></p>
<ul>
<li><i><span style="font-weight: 400">Starfield</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/1716740?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/starfield-standard-edition/9NTFM8RXLJF9?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass)</span></li>
</ul>
<p><span style="font-weight: 400">What are you planning to play this weekend? Let us know on </span><a target="_blank" href="https://www.twitter.com/nvidiagfn"><span style="font-weight: 400">X</span></a><span style="font-weight: 400"> or in the comments below.</span></p>
<blockquote class="twitter-tweet" data-width="550" data-dnt="true">
<p lang="en" dir="ltr">Who&#39;s the most iconic animal from a video game? Drop a pic or gif! <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f63b.png" alt="😻" class="wp-smiley" style="height: 1em; max-height: 1em;" /> (this post isn&#39;t just to farm cute animal pics, no idea what you&#39;re talking about).</p>
<p>&mdash; <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f329.png" alt="🌩" class="wp-smiley" style="height: 1em; max-height: 1em;" /> NVIDIA GeForce NOW (@NVIDIAGFN) <a target="_blank" href="https://twitter.com/NVIDIAGFN/status/2041546542545322416?ref_src=twsrc%5Etfw">April 7, 2026</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-9-nv-blog-1280x680-logo.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-9-nv-blog-1280x680-logo-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Strength and Destiny Collide: ‘Samson: A Tyndalston Story’ Arrives in the Cloud]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>From RTX to Spark: NVIDIA Accelerates Gemma 4 for Local Agentic AI</title>
		<link>https://blogs.nvidia.com/blog/rtx-ai-garage-open-models-google-gemma-4/</link>
		
		<dc:creator><![CDATA[Michael Fukuyama]]></dc:creator>
		<pubDate>Thu, 02 Apr 2026 16:15:58 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[NVIDIA RTX]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[RTX AI Garage]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92019</guid>

					<description><![CDATA[Open models are driving a new wave of on-device AI, extending innovation beyond the cloud to everyday devices. As these models advance, their value increasingly depends on access to local, real-time context that can turn meaningful insights into action.  Designed for this shift, Google’s latest additions to the Gemma 4 family introduce a class of small, fast and omni-capable models built for efficient local execution across a wide range [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span data-contrast="none">Open models are driving a new wave of on-device AI, extending innovation beyond the cloud to everyday devices. As these models advance, their value increasingly depends on access to local, real-time context that can turn meaningful insights into action.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">Designed for this shift, Google’s latest additions to the </span>Gemma 4 family <span data-contrast="none">introduce a class of small, fast and omni-capable models built for efficient local execution across a wide range of devices. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p>Google <span data-contrast="none">and NVIDIA have collaborated to optimize Gemma 4</span><span data-contrast="none"> for NVIDIA GPUs, enabling efficient performance across a range of systems — from data center deployments to NVIDIA RTX-powered PCs and workstations, the <a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/dgx-spark/">NVIDIA DGX Spark</a> personal AI supercomputer and <a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/product-development/">NVIDIA Jetson Orin Nano</a> edge AI modules.</span></p>
<h2><b><span data-contrast="none">Gemma 4: Compact Models Optimized for NVIDIA GPUs</span></b><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></h2>
<p><span data-contrast="none">The latest additions to the Gemma 4 family of open models — spanning E2B, E4B, 26B and 31B variants — are designed for efficient deployment from edge devices to high-performance GPUs.</span></p>
<figure id="attachment_92036" aria-describedby="caption-attachment-92036" style="width: 1149px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1.png"><img loading="lazy" decoding="async" class="size-full wp-image-92036" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1.png" alt="" width="1149" height="489" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1.png 1149w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1-960x409.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1-630x268.png 630w" sizes="auto, (max-width: 1149px) 100vw, 1149px" /></a><figcaption id="caption-attachment-92036" class="wp-caption-text">All configurations measured using Q4_K_M quantizations BS = 1, ISL = 4096 and OSL = 128 on NVIDIA GeForce RTX 5090 and Mac M3 Ultra desktops. Token generation throughput measured on llama.cpp b7789, using the llama-bench tool.</figcaption></figure>
<p><span data-contrast="none">This new generation of compact models supports a range of tasks, including:</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<ul>
<li><b><span data-contrast="none">Reasoning: </span></b><span data-contrast="none">Strong performance on complex problem-solving tasks. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></li>
<li><b><span data-contrast="auto">Coding: </span></b><span data-contrast="auto">Code generation and debugging for developer workflows.  </span><span data-ccp-props="{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}"> </span></li>
<li><b><span data-contrast="auto">Agents: </span></b><span data-contrast="auto">Native support for structured tool use (function calling). </span><span data-ccp-props="{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}"> </span></li>
<li><b><span data-contrast="auto">Vision, Video and Audio Capabilities: </span></b><span data-contrast="auto">Enables rich multimodal interactions for object recognition, automated speech recognition and document or video intelligence.</span><span data-ccp-props="{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}"> </span></li>
<li><b><span data-contrast="auto">Interleaved Multimodal Input: </span></b><span data-contrast="auto">M</span><span data-contrast="auto">ix text and images in any order within a single prompt. </span><span data-ccp-props="{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}"> </span></li>
<li><b><span data-contrast="auto">Multilingual: </span></b><span data-contrast="auto">Out-of-the-box support for 35+ languages, pretrained on 140+ languages.</span><span data-ccp-props="{}"> </span></li>
</ul>
<p><span data-contrast="none">The </span>E2B and E4B models<span data-contrast="none"> are built for ultraefficient, low-latency inference at the edge, running completely offline across many devices, including NVIDIA Jetson Orin Nano modules. </span></p>
<p><span data-contrast="none">The </span>26B and 31B models<span data-contrast="none"> are designed for high-performance reasoning and developer-centric workflows, making them well suited for agentic AI. Optimized to deliver state-of-the-art, accessible reasoning, these models run efficiently on NVIDIA RTX GPUs and DGX Spark — powering development environments, coding assistants and agent-driven workflows. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">As local agentic AI continues to gain momentum, applications like </span>OpenClaw<span data-contrast="none"> are enabling always-on AI assistants on RTX PCs, workstations and DGX Spark. The latest Gemma 4 models are compatible with OpenClaw, allowing users to build capable local agents that draw context from personal files, applications and workflows to automate tasks. Learn how to run </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/news/open-claw-rtx-gpu-dgx-spark-guide/"><span data-contrast="none">OpenClaw for free on RTX GPUs and DGX Spark</span></a><span data-contrast="none"> or using the </span><a target="_blank" href="https://build.nvidia.com/spark/openclaw"><span data-contrast="none">DGX Spark OpenClaw playbook</span></a><span data-contrast="auto">.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">Check out the <a target="_blank" href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/">Google DeepMind announcement blog</a> to learn more about the latest additions to the Gemma 4 family.</span></p>
<h2><b><span data-contrast="none">Getting Started: Gemma 4 on RTX GPUs and DGX Spark</span></b><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></h2>
<p><span data-contrast="none">NVIDIA has collaborated with Ollama and llama.cpp to provide the best local deployment experience for each of the Gemma 4 models.   </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">To use Gemma 4 locally, users can </span><span data-contrast="none">download Ollama</span><span data-contrast="none"> to run Gemma 4 models </span><span data-contrast="none">or</span><span data-contrast="none"> install </span><span data-contrast="none">llama.cpp</span><span data-contrast="none"> and pair it with the Gemma 4 GGUF Hugging Face checkpoint. </span><span data-contrast="auto">Additionally, </span><span data-contrast="none">Unsloth provides day-one support with optimized and quantized models for efficient local fine-tuning and deployment via Unsloth Studio. Start </span><span data-contrast="auto">running and </span><span data-contrast="none">fine-tuning</span><span data-contrast="auto"> Gemma 4 in Unsloth Studio today.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">Open models like the Gemma 4 family achieve optimal performance on NVIDIA GPUs because NVIDIA Tensor Cores accelerate AI inference workloads, delivering higher throughput and lower latency for local execution. Plus, the CUDA software stack ensures broad compatibility across leading frameworks and tools, enabling new models to run efficiently from day one. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">This combination allows open models like Gemma 4 to scale across a wide range of systems — from Jetson Orin Nano at the edge to RTX PCs, workstations and DGX Spark — without requiring extensive optimization.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">Check out </span><span data-contrast="none">the </span><a target="_blank" href="https://developer.nvidia.com/blog/bringing-ai-closer-to-the-edge-and-on-device-with-gemma-4/"><span data-contrast="none">NVIDIA technical blog</span></a><span data-contrast="none"> </span><span data-contrast="none">for more details on how to get started with Gemma 4 on NVIDIA GPUs and learn more about</span><span data-contrast="none"> NVIDIA’s work on </span><a href="https://blogs.nvidia.com/blog/ai-future-open-and-proprietary/"><span data-contrast="none">open models</span></a><span data-contrast="none">.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<h2><b><span data-contrast="none">#ICYMI: The Latest Updates for RTX AI PCs</span></b><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></h2>
<p><span data-contrast="none"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2728.png" alt="✨" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Catch up on </span><a href="https://blogs.nvidia.com/blog/rtx-ai-garage-gtc-2026-nemoclaw"><span data-contrast="none">RTX AI Garage</span></a><span data-contrast="none"> blogs for a host of agentic AI announcements from NVIDIA GTC, such as new open models for local agents. These models include NVIDIA Nemotron 3 Nano 4B and Nemotron 3 Super 120B, and optimizations for Qwen 3.5 and Mistral Small 4.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">NVIDIA recently introduced </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-nemoclaw"><span data-contrast="none">NVIDIA NemoClaw</span></a><span data-contrast="none">, an open source stack that optimizes OpenClaw experiences on NVIDIA devices by increasing security and supporting local models. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><b><span data-contrast="none"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f680.png" alt="🚀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></span></b><b><span data-contrast="none"> </span></b><a target="_blank" href="https://accomplish.ai/"><span data-contrast="none">Accomplish.ai</span></a><span data-contrast="none"> announced Accomplish FREE, a no-cost version of its open source desktop AI agent with built-in models. It harnesses NVIDIA GPUs to run open weight models locally, while a hybrid router dynamically balances workloads between local RTX hardware and the cloud — enabling fast, private, zero-configuration execution without requiring an application programming interface key.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><i><span data-contrast="none">Plug in to NVIDIA AI PC on </span></i><a target="_blank" href="https://www.facebook.com/NVIDIA.AI.PC/"><i><span data-contrast="none">Facebook</span></i></a><i><span data-contrast="none">, </span></i><a target="_blank" href="https://www.instagram.com/nvidia.ai.pc/"><i><span data-contrast="none">Instagram</span></i></a><i><span data-contrast="none">, </span></i><a target="_blank" href="https://www.tiktok.com/@nvidia_ai_pc"><i><span data-contrast="none">TikTok</span></i></a><i><span data-contrast="none"> and </span></i><a target="_blank" href="https://x.com/NVIDIA_AI_PC"><i><span data-contrast="none">X</span></i></a><i><span data-contrast="none"> — and stay informed by subscribing to the </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-on-rtx/?modal=subscribe-ai"><i><span data-contrast="none">RTX AI PC newsletter</span></i></a><i><span data-contrast="none">.</span></i><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><i><span data-contrast="none">Follow NVIDIA Workstation on </span></i><a target="_blank" href="https://www.linkedin.com/showcase/3761136/"><i><span data-contrast="none">LinkedIn</span></i></a><i><span data-contrast="none"> and </span></i><a target="_blank" href="https://x.com/NVIDIAworkstatn"><i><span data-contrast="none">X</span></i></a><i><span data-contrast="none">. </span></i><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/eevee-nv-blog-1280x680-1.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/eevee-nv-blog-1280x680-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[From RTX to Spark: NVIDIA Accelerates Gemma 4 for Local Agentic AI]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Press Start on April: GeForce NOW Brings 10 Games to the Cloud</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-april-2026-games-list/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 02 Apr 2026 13:00:09 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92005</guid>

					<description><![CDATA[No joke — GFN Thursday is skipping the tricks and heading straight into the games. April kicks off with ten new titles, bringing fresh adventures to GeForce NOW, including the launch of Capcom’s highly anticipated PRAGMATA. A dozen new games are available to stream this week, including Arknights: Endfield, which expands the acclaimed series into a full [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400">No joke — GFN Thursday is skipping the tricks and heading straight into the games. April kicks off with </span><span style="font-weight: 400">ten</span><span style="font-weight: 400"> new titles, bringing fresh adventures to </span><a target="_blank" href="https://geforcenow.com"><span style="font-weight: 400">GeForce NOW</span></a><span style="font-weight: 400">, including the launch of Capcom’s highly anticipated </span><i><span style="font-weight: 400">PRAGMATA.</span></i></p>
<p><span style="font-weight: 400">A dozen </span><span style="font-weight: 400">new games are available to stream this week, including </span><i><span style="font-weight: 400">Arknights: Endfield</span></i><span style="font-weight: 400">, which expands the acclaimed series into a full 3D real‑time strategy adventure. On GeForce NOW, every battle flows with precision and every mission looks sharper than ever.</span></p>
<p><span style="font-weight: 400">So gear up, grab a controller or gaming device of choice, and get ready to stream — another month of great gaming is now underway.</span></p>
<h2><b>Command the Frontier</b></h2>
<figure id="attachment_92010" aria-describedby="caption-attachment-92010" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92010" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1680x945.jpg" alt="Arknights Endfield on GeForce NOW" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92010" class="wp-caption-text"><em>Reclaim the frontier using cloud technology.</em></figcaption></figure>
<p><i><span style="font-weight: 400">Arknights: Endfield</span></i><span style="font-weight: 400"> from Hypergryph expands the acclaimed </span><i><span style="font-weight: 400">Arknights </span></i><span style="font-weight: 400">universe into a full 3D real‑time strategy role-playing game. Blending tactical planning with sleek sci‑fi aesthetics, the title invites players into a world featuring terraformed settlements, advanced technology and looming threats beneath the planet’s surface.</span></p>
<p><span style="font-weight: 400">Set on the perilous planet Talos‑II, </span><i><span style="font-weight: 400">Endfield </span></i><span style="font-weight: 400">follows a group of pioneers uncovering lost secrets and battling hostile factions. The game seamlessly merges base‑building, exploration and combat — with squads of operators coordinating in real time to overcome environmental hazards and powerful enemies. Every decision impacts survival, progress and the unfolding mystery of the world.</span></p>
<p><span style="font-weight: 400">On GeForce NOW, </span><i><span style="font-weight: 400">Arknights: Endfield</span></i><span style="font-weight: 400"> can be played at the highest settings from virtually any device, enabling crisp visuals and high performance without compromise. GeForce RTX rendering brings the game’s metallic skylines and glowing wastelands to life, while ultralow-latency streaming ensures every tactical command lands with precision. </span></p>
<h2><b>Spring Into April</b></h2>
<figure id="attachment_92013" aria-describedby="caption-attachment-92013" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92013" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-1680x840.jpg" alt="MegaMan Star Force Legacy Collection" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92013" class="wp-caption-text"><em>He’s back.</em></figcaption></figure>
<p><span style="font-weight: 400">Capcom’s </span><i><span style="font-weight: 400">Mega Man Star Force Legacy Collection</span></i><span style="font-weight: 400"> includes seven games and additional features, such as a gallery of illustrations and music. Eleven‑year‑old Geo Stelar is a grieving boy who isolates himself after the mysterious disappearance of his astronaut father. His life changes when he encounters an extraterrestrial being named Omega‑Xis, granting him the power to become Mega Man. The collection streams instantly with GeForce NOW, turning any device into a </span><i><span style="font-weight: 400">Star Force</span></i><span style="font-weight: 400"> terminal ready to save the world once more.</span></p>
<p><span style="font-weight: 400">Check out what else is available this week:</span></p>
<ul>
<li><i><span style="font-weight: 400">Hozy </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3326230?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 30)</span></li>
<li><i><span style="font-weight: 400">Cooking Simulator 2: Better Together </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2455360?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 31)</span></li>
<li><i><span style="font-weight: 400">Legacy of Kain: Ascendance </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/4233530?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 31)</span></li>
<li><i><span style="font-weight: 400">Subliminal </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2300840?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 31)</span></li>
<li><i><span style="font-weight: 400">Super Meat Boy 3D </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3288210?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 31)</span></li>
<li><i><span style="font-weight: 400">I Am Jesus Christ </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/1198970?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 2)</span></li>
<li><i><span style="font-weight: 400">ALL WILL FALL </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2706020?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 3, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Arknights: Endfield </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://endfield.gryphline.com/?utm_source=nvidia&amp;utm_medium=referral&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Official Site</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Mega Man Star Force Legacy Collection</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/3500390?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Nova Roma </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2426530?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/nova-roma-game-preview/9nbnfbq546dt?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass)</span></li>
<li><i><span style="font-weight: 400">RuneScape: Dragonwilds </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/1374490?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Way of the Hunter 2</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/2543830?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, GeForce RTX 5080-ready)</span></li>
</ul>
<p><span style="font-weight: 400">And look forward to the games coming throughout the month:</span></p>
<ul>
<li><i><span style="font-weight: 400">Vampire Crawlers: The Turbo Wildcard from Vampire Survivors </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3265700?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 21)</span></li>
<li><i><span style="font-weight: 400">Samson </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3634520?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 8)</span></li>
<li><i><span style="font-weight: 400">Replaced</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/1663850?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and Xbox, available on Game Pass, April 14)</span></li>
<li><i><span style="font-weight: 400">Cthulhu: The Cosmic Abyss </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2760560?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 16)</span></li>
<li><i><span style="font-weight: 400">PRAGMATA </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3357650?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 17)</span></li>
<li><i><span style="font-weight: 400">Outbound </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2681030?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 23)</span></li>
<li><i><span style="font-weight: 400">Heroes of Might and Magic: Olden Era</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/3105440?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 30)</span></li>
<li><i><span style="font-weight: 400">Bus Bound </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2095420?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 30)</span></li>
</ul>
<h2><b>More of March</b></h2>
<p><span style="font-weight: 400">In addition to the 15 games announced last month, </span><span style="font-weight: 400">a dozen </span><span style="font-weight: 400">more joined the </span><a target="_blank" href="https://play.geforcenow.com"><span style="font-weight: 400">GeForce NOW library</span></a><span style="font-weight: 400">:</span></p>
<ul>
<li><i><span style="font-weight: 400">1348 Ex Voto </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/1895900?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">BATTLETECH </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://www.xbox.com/games/store/battletech/9NQVDQS2BC10?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass)</span></li>
<li><i><span style="font-weight: 400">Cooking Simulator 2: Better Together </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2455360?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Despot’s Game </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://www.xbox.com/games/store/despots-game/9P5ZDVMCJMFD?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on the Microsoft Store)</span></li>
<li><i><span style="font-weight: 400">Diablo II: Resurrected</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/2536520?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Hozy </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/3326230?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">King’s Quest</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Monster Hunter Stories 3: Twisted Reflection </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2852190?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Super Meat Boy 3D </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/3288210?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Warcraft I: Remastered </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Warcraft II: Remastered </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Way of the Hunter 2</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/2543830?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
</ul>
<p><span style="font-weight: 400">What are you planning to play this weekend? Check out </span><i><span style="font-weight: 400">Crimson Desert</span></i><span style="font-weight: 400"> on GeForce NOW in Anytime Anywhere Gaming’s </span><a target="_blank" href="https://www.youtube.com/watch?v=cq0ZdXjx_2k"><span style="font-weight: 400">YouTube review</span></a><span style="font-weight: 400">.</span></p>
<p><iframe loading="lazy" title="Can GeForce Now Handle Crimson Desert At 5K MAX Settings?" width="1200" height="675" src="https://www.youtube.com/embed/cq0ZdXjx_2k?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-2-nv-blog-1280x680-logo-1.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-2-nv-blog-1280x680-logo-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Press Start on April: GeForce NOW Brings 10 Games to the Cloud]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
	</channel>
</rss>
