<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/">

<channel>
	<title>NVIDIA Blog</title>
	<atom:link href="https://blogs.nvidia.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://blogs.nvidia.com/</link>
	<description></description>
	<lastBuildDate>Mon, 13 Apr 2026 17:56:51 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>National Robotics Week — Latest Physical AI Research, Breakthroughs and Resources</title>
		<link>https://blogs.nvidia.com/blog/national-robotics-week-2026/</link>
		
		<dc:creator><![CDATA[NVIDIA Writers]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 19:40:59 +0000</pubDate>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Computer Vision]]></category>
		<category><![CDATA[Inception]]></category>
		<category><![CDATA[Physical AI]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<category><![CDATA[Synthetic Data Generation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92122</guid>

					<description><![CDATA[This National Robotics Week, NVIDIA is highlighting the breakthroughs that are bringing AI into the physical world — as well as the growing wave of robots transforming industries, from agriculture and manufacturing to energy and beyond. Advancements in robot learning, simulation and foundation models are accelerating development, enabling robots to move from training in virtual [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">This </span><a target="_blank" href="https://www.nationalroboticsweek.org/"><span style="font-weight: 400;">National Robotics Week</span></a><span style="font-weight: 400;">, NVIDIA is highlighting the breakthroughs that are bringing AI into the physical world — as well as the growing wave of robots transforming industries, from agriculture and manufacturing to energy and beyond.</span></p>
<p><span style="font-weight: 400;">Advancements in robot learning, simulation and foundation models are accelerating development, enabling robots to move from training in virtual environments to real-world deployment faster than ever.</span></p>
<p><span style="font-weight: 400;">With NVIDIA platforms for </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/robotics-simulation/"><span style="font-weight: 400;">simulation</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/synthetic-data-physical-ai/"><span style="font-weight: 400;">synthetic data</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/robot-learning/"><span style="font-weight: 400;">AI-powered robot learning</span></a><span style="font-weight: 400;">, developers now have the tools to build machines that can perceive, reason and act in complex environments.</span></p>
<h2 id="gtc" class="wp-block-heading" style="font-size: 24px;"><b>Building the Next Generation of AI Robots <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#gtc"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">At </span><a target="_blank" href="https://www.nvidia.com/gtc/"><span style="font-weight: 400;">NVIDIA GTC</span></a><span style="font-weight: 400;"> last month, </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-and-global-robotics-leaders-take-physical-ai-to-the-real-world"><span style="font-weight: 400;">a new wave of technologies was introduced</span></a><span style="font-weight: 400;"> to accelerate the development of AI-powered robots.</span></p>
<p><span style="font-weight: 400;">At the core is a full-stack, cloud-to-robot workflow that connects simulation, robot learning and edge computing — making it faster to build, train and deploy intelligent machines.</span></p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-1" width="1200" height="675" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4?_=1" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
Key announcements include:</span></p>
<ul>
<li><span style="font-weight: 400;">New </span><a target="_blank" href="https://developer.nvidia.com/isaac/gr00t"><span style="font-weight: 400;">NVIDIA Isaac GR00T open models</span></a><span style="font-weight: 400;"> enable robots to understand natural language instructions and perform complex, multistep tasks using vision-language-action reasoning.</span></li>
<li><span style="font-weight: 400;">New </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400;">NVIDIA Cosmos world models</span></a><span style="font-weight: 400;"> for generating synthetic data and training robots at scale help systems learn more efficiently and generalize across environments.</span></li>
<li><span style="font-weight: 400;">The </span><a target="_blank" href="https://developer.nvidia.com/blog/newton-adds-contact-rich-manipulation-and-locomotion-capabilities-for-industrial-robotics/"><span style="font-weight: 400;">general availability</span></a><span style="font-weight: 400;"> of open source physics engine </span><a target="_blank" href="https://developer.nvidia.com/newton-physics"><span style="font-weight: 400;">Newton 1.0</span></a><span style="font-weight: 400;"> provides a fast and reliable foundation for dexterous robot manipulation with accurate collision detection, realistic object contact and stable simulation of complex systems with both rigid and flexible parts.</span></li>
<li><span style="font-weight: 400;">The general availability of </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim 6.0</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://developer.nvidia.com/isaac/lab"><span style="font-weight: 400;">Isaac Lab 3.0</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://docs.nvidia.com/nurec/"><span style="font-weight: 400;">Omniverse NuRec technologies</span></a><span style="font-weight: 400;"> expands simulation capabilities, allowing developers to model real-world scenarios and validate robotic systems before deployment.</span></li>
</ul>
<p><span style="font-weight: 400;">Watch </span><a target="_blank" href="https://www.nvidia.com/en-us/on-demand/search/?facet.event_name[]=GTC%20San%20Jose&amp;facet.event_year[]=2026&amp;facet.mimetype[]=event%20session&amp;headerText=All%20Sessions&amp;layout=list&amp;ncid=no-ncid&amp;page=1&amp;q=-&amp;regcode=no-ncid&amp;sort=relevance&amp;sortDir=desc"><span style="font-weight: 400;">on-demand sessions</span></a><span style="font-weight: 400;"> from the </span><a target="_blank" href="https://www.nvidia.com/gtc/"><span style="font-weight: 400;">NVIDIA GTC</span></a><span style="font-weight: 400;"> global AI conference to catch up on recent breakthroughs in robotics, showcased by leading experts in the field.</span></p>
<h2 id="peritas" class="wp-block-heading" style="font-size: 24px;"><b>Driving Breakthroughs in Surgical Precision <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#peritas"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">PeritasAI is advancing a new generation of surgical robotics by integrating <a target="_blank" href="https://www.nvidia.com/en-us/glossary/generative-physical-ai/">physical AI</a> into real-world operating environments. Using </span><a target="_blank" href="https://isaac-for-healthcare.github.io/"><span style="font-weight: 400;">NVIDIA Isaac for Healthcare</span></a><span style="font-weight: 400;"> and the </span><a target="_blank" href="https://isaac-for-healthcare.github.io/workflows/rheo/"><span style="font-weight: 400;">Rheo</span></a><span style="font-weight: 400;"> blueprint for hospital automation, the company is developing multi-agent intelligence that can sense, coordinate and act in real time.</span></p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1.png"><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-92286" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1.png" alt="" width="1024" height="576" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1.png 1024w, https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1-960x540.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1-630x354.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1-300x169.png 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/hospital-automation-1024x576-1-400x225.png 400w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></p>
<p><span style="font-weight: 400;">In collaboration with Lightwheel and Advent Health Hospitals, this work brings embodied intelligence into the operating room — supporting surgical teams with situational awareness, sterile coordination and intelligent management of instruments, implants and workflows.</span></p>
<h2 id="isaac-sim" class="wp-block-heading" style="font-size: 24px;"><b>From Words to Motion: NVIDIA NemoClaw Brings Natural Language Commands to Isaac Sim <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#isaac-sim"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">NVIDIA Omniverse developer Umang Chudasama has </span><a target="_blank" href="https://www.linkedin.com/posts/umang-chudasama_robotics-nvidia-isaacsim-ugcPost-7446513116416487424-TqQq?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAAAn6OHUBER_-OWSbHjZyVn985_NH2TCiwtI"><span style="font-weight: 400;">integrated</span></a><span style="font-weight: 400;"> NVIDIA NemoClaw with </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> to navigate a Nova Carter autonomous robot using plain natural language commands — no manual coding required. NemoClaw translates text instructions (like “move two meters forward”) into executable Python scripts, which are then sent to Isaac Sim via a custom REST application programming interface in real time.</span></p>
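<p>The translate-then-execute loop described above can be sketched roughly as follows. This is an illustrative assumption, not the actual NemoClaw or Isaac Sim API: the command grammar, the <code>command_to_script</code> helper and the REST payload shape are all hypothetical.</p>

```python
# Illustrative sketch of the text-to-script loop described above.
# The command grammar, helper names and payload shape are hypothetical;
# they are not the actual NemoClaw or Isaac Sim REST API.
import re

_NUMBER_WORDS = {"one": 1.0, "two": 2.0, "three": 3.0, "four": 4.0, "five": 5.0}

def command_to_script(command: str) -> str:
    """Translate 'move <n> meters forward/backward' into a Python snippet."""
    m = re.match(r"move\s+(\S+)\s+meters?\s+(forward|backward)",
                 command.strip().lower())
    if not m:
        raise ValueError(f"unsupported command: {command!r}")
    token, direction = m.groups()
    distance = _NUMBER_WORDS.get(token)
    if distance is None:
        distance = float(token)  # fall back to numeric tokens like "1.5"
    sign = 1.0 if direction == "forward" else -1.0
    return f"robot.drive(distance={sign * distance})"

def build_request(command: str) -> dict:
    # Payload that would be POSTed to the simulator's REST endpoint,
    # e.g. http://localhost:8000/execute (an illustrative URL).
    return {"script": command_to_script(command)}
```

<p>In the real integration, the generated script drives a Nova Carter robot inside Isaac Sim; the sketch only shows the shape of the exchange, with a language model standing in for the regex-based translator.</p>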
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-2" width="1200" height="675" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4?_=2" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
The entire system runs within Isaac Sim, giving the robot a realistic, physics-accurate warehouse environment to operate in before ever touching the real world. Pairing Isaac Sim with NemoClaw means faster development, safer testing and a smarter path to deployment. Rather than programming robots line by line, developers can now simply talk to them, marking a meaningful shift toward truly collaborative, language-driven robotics.</span></p>
<h2 id="oceansim" class="wp-block-heading" style="font-size: 24px;"><b>OceanSim: A GPU-Accelerated Underwater Robot Perception Simulation Framework <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#oceansim"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">Underwater simulators are crucial for developing reliable perception systems, but they still struggle with accurate physics‑based sensor modeling and fast rendering. </span></p>
<p><span style="font-weight: 400;">Helping close this gap is </span><a target="_blank" href="https://umfieldrobotics.github.io/OceanSim/"><span style="font-weight: 400;">OceanSim</span></a><span style="font-weight: 400;">, a GPU‑accelerated, high‑fidelity simulator developed by researchers at the University of Michigan. It uses advanced physics‑based rendering techniques to make synthetic underwater images look more realistic. Using GPUs, the simulator can render imaging sonar in real time and generate synthetic data quickly. </span></p>
<p><iframe title="OceanSim: A GPU-Accelerated Underwater Robot Perception Simulation Framework" width="1200" height="675" src="https://www.youtube.com/embed/2_1MYjeZ9lY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400;">OceanSim uses </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> and plugs into </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse</span></a><span style="font-weight: 400;"> libraries, creating a seamless link between robot‑learning research and underwater robotics. This integration lets developers easily build and deploy embodied AI techniques for underwater applications.</span></p>
<h2 id="robolab" class="wp-block-heading" style="font-size: 24px;"><b>RoboLab: Benchmarking the Next Generation of Generalist Robots <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#robolab"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><a target="_blank" href="https://github.com/NVLabs/RoboLab"><span style="font-weight: 400;">RoboLab</span></a><span style="font-weight: 400;"> is a high-fidelity simulation benchmark for developing and evaluating generalist robot policies — powering systems designed to perform diverse tasks across environments.</span></p>
<div style="width: 864px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-3" width="864" height="480" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4?_=3" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
Built on </span><a target="_blank" href="https://developer.nvidia.com/isaac"><span style="font-weight: 400;">NVIDIA Isaac</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse</span></a><span style="font-weight: 400;"> simulation technologies, RoboLab taps into photorealistic environments and physics-based modeling to train and test robotic policies at scale. This enables researchers to measure how well behaviors learned in simulation transfer to the real world as tasks grow in complexity. </span></p>
<p><span style="font-weight: 400;">By combining advanced simulation with structured evaluation, RoboLab accelerates the path from virtual training to real-world deployment.</span></p>
<p><span style="font-weight: 400;">RoboLab</span><span style="font-weight: 400;"> features will be incorporated into the roadmap of </span><a target="_blank" href="https://developer.nvidia.com/isaac/lab-arena"><span style="font-weight: 400;">NVIDIA Isaac Lab-Arena</span></a><span style="font-weight: 400;">, an open source framework for large-scale policy setup and evaluation.</span></p>
<h2 id="cosmos" class="wp-block-heading" style="font-size: 24px;"><b>Smarter Palletizing With AI-Driven Reasoning <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#cosmos"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">In warehouse environments, palletizing robots typically follow fixed rules — handling boxes the same way regardless of contents, condition or fragility. A project developed by Doosan Robotics introduces a more adaptive approach using </span><a target="_blank" href="https://docs.nvidia.com/cosmos/latest/reason2/index.html"><span style="font-weight: 400;">NVIDIA Cosmos Reason</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">By analyzing a single camera image, the system can infer box contents, detect damage and adjust how each item is handled — such as placement, speed and grip — based on estimated weight and fragility. This reduces common issues like incorrectly stacking damaged or fragile goods.</span></p>
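<p>A minimal sketch of that adaptive handling logic might look like the following. The attribute names, thresholds and handling parameters are assumptions for illustration, not the actual output schema of Cosmos Reason or Doosan's system.</p>

```python
# Illustrative-only sketch of per-box adaptive handling: attributes
# inferred from a camera image drive placement, speed and grip.
# All names and thresholds here are assumptions, not a real schema.
from dataclasses import dataclass

@dataclass
class BoxAssessment:
    est_weight_kg: float
    fragile: bool
    damaged: bool

def handling_plan(box: BoxAssessment) -> dict:
    """Derive simple handling parameters from an inferred box assessment."""
    # Slow down for anything fragile or damaged (normalized speed).
    speed = 0.3 if (box.fragile or box.damaged) else 1.0
    # Scale grip force with estimated weight, capped at the maximum.
    grip = min(1.0, 0.2 + 0.05 * box.est_weight_kg)
    # Fragile goods go on top; damaged goods are pulled aside entirely.
    placement = "top_layer" if box.fragile else "bottom_layer"
    if box.damaged:
        placement = "reject_station"
    return {"speed": speed, "grip": round(grip, 2), "placement": placement}
```

<p>The point of the structure, as in the Doosan demo, is that each decision is traceable back to an inferred attribute, which is what makes the reasoning explainable.</p>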
<p><iframe title="Nvidia Cosmos Cookoff - See How It Thinks: Mixed Palletizing with Explainable Visual Reasoning" width="1200" height="675" src="https://www.youtube.com/embed/4Yq0ESmKPPw?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400;">To build robots that understand the physical world before they ever deploy in it, robotics researchers and developers are building policy models powered by </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400;">NVIDIA Cosmos world foundation models</span></a><span style="font-weight: 400;"> (WFMs). </span><a target="_blank" href="https://www.linkedin.com/posts/toyota-research-institute_researchers-from-tris-robotics-lfv-learning-activity-7439452924168073216-05qL/"><span style="font-weight: 400;">Toyota Research Institute</span></a><span style="font-weight: 400;"> customizes Cosmos WFMs for its own world model to achieve state-of-the-art results across dynamic view synthesis, teleoperation data augmentation and navigation world models.</span></p>
<div style="width: 960px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-4" width="960" height="640" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4?_=4" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4</a></video></div>
<p><a target="_blank" href="https://mimic-video.github.io/"><span style="font-weight: 400;">Mimic robotics</span></a><span style="font-weight: 400;"> takes a different angle with mimic-video, a video-action model that pairs a pretrained internet-scale video model with a flow-matching action decoder, replacing the static image-language backbones of traditional vision-language-action models (VLAs) with video-learned physical dynamics — achieving 10x better sample efficiency and 2x faster convergence on real-world manipulation tasks. </span></p>
<p><span style="font-weight: 400;">Together, both teams demonstrate a fundamental shift: robots trained on world models that capture physics and causality need dramatically less real-world data to perform reliably in conditions they&#8217;ve never seen.</span></p>
<h2 id="openclaw" class="wp-block-heading" style="font-size: 24px;"><b>Open, Intelligent Robotics on NVIDIA Jetson: Community Innovations Powering the Next Wave of Physical AI <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#openclaw"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">This National Robotics Week, OpenClaw running on the </span><a target="_blank" href="http://nvidia.com/en-us/autonomous-machines/embedded-systems/"><span style="font-weight: 400;">NVIDIA Jetson</span></a><span style="font-weight: 400;"> platform showcases how quickly open source innovation is evolving into real-world, intelligent robotics. </span></p>
<p><span style="font-weight: 400;">From practical applications to innovative projects, the robotics community is building what’s next — and fast. </span></p>
<p><span style="font-weight: 400;">Developers are pushing the boundaries of autonomy — including </span><a target="_blank" href="https://www.linkedin.com/posts/marco-pastorio_isaacsim-isaacros-nvfp4-activity-7435793888109527040-ffw5?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACIoNTMBsMKQgXfIdyJvm7NsaP70ieqO9Tc"><span style="font-weight: 400;">hardware-in-the-loop testing</span></a><span style="font-weight: 400;"> powered by </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-thor/"><span style="font-weight: 400;">Jetson Thor</span></a><span style="font-weight: 400;">, evaluating camera streams from </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> and even building systems that can generate their own code to complete tasks.</span></p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-5" width="1200" height="675" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4?_=5" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4</a></video></div>
<p><span style="font-weight: 400;"><br />
In addition, OpenClaw, </span><a target="_blank" href="https://www.jetson-ai-lab.com/tutorials/openclaw/"><span style="font-weight: 400;">now running entirely locally on NVIDIA Jetson Thor</span></a><span style="font-weight: 400;"> — powered by optimized </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400;">NVIDIA Nemotron</span></a><span style="font-weight: 400;"> open models and the vLLM open inference library — marks a major leap toward private, low-latency edge AI for robotics. And innovations like the </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/nemoclaw/"><span style="font-weight: 400;">NVIDIA NemoClaw</span></a><span style="font-weight: 400;"> stack on Jetson are expanding what’s possible at the intersection of open source and high-performance robotics platforms. </span>
<p><iframe loading="lazy" title="First Look: NemoClaw on Jetson with a Local LLM" width="1200" height="675" src="https://www.youtube.com/embed/7HQSFgP6vOE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2 id="Skyentific" class="wp-block-heading" style="font-size: 24px;"><b>Training and Refining Movement in Simulation <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#Skyentific"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">Gennady Plyushchev, a robotics creator known as Skyentific, is documenting the process of building a walking bipedal robot, from simulation and design to real-world deployment — showcasing a simulation-first approach to robot development.</span></p>
<p><iframe loading="lazy" title="From Simulation to Reality: Swiss Bipedal Robot (+ NVIDIA Jetson raffle)" width="1200" height="675" src="https://www.youtube.com/embed/67S9-MrqWJg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400;">By using </span><a target="_blank" href="https://developer.nvidia.com/isaac"><span style="font-weight: 400;">NVIDIA Isaac</span></a><span style="font-weight: 400;">-based simulation workflows alongside </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/"><span style="font-weight: 400;">NVIDIA Jetson</span></a><span style="font-weight: 400;"> for on-device AI and control, the project demonstrates how developers can rapidly iterate in virtual environments before deploying to physical systems. </span></p>
<p><span style="font-weight: 400;">The result highlights a broader shift in robotics: using AI, simulation and </span><a target="_blank" href="https://www.nvidia.com/en-us/edge-computing/"><span style="font-weight: 400;">edge computing</span></a><span style="font-weight: 400;"> to accelerate development and bring increasingly capable </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/humanoid-robots/"><span style="font-weight: 400;">humanoid robots</span></a><span style="font-weight: 400;"> to life.</span></p>
<h2 id="university-of-maryland" class="wp-block-heading" style="font-size: 24px;"><b>University of Maryland Researchers Develop Robots for Complex Household Tasks <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#university-of-maryland"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">To bring robots into everyday life, researchers at the </span><a target="_blank" href="https://www.umiacs.umd.edu/news-events/news/umd-researchers-advance-robotics-perform-complex-household-tasks"><span style="font-weight: 400;">University of Maryland</span></a><span style="font-weight: 400;">, recipients of a grant from the <a target="_blank" href="https://www.umiacs.umd.edu/news-events/news/umd-researchers-advance-robotics-perform-complex-household-tasks">NVIDIA Academic Grant Program</a>, are developing AI-powered humanoid systems capable of performing complex household tasks with greater autonomy.</span></p>
<p><span style="font-weight: 400;">The project centers on building robot foundation models that unify perception, planning and control. Using the </span><a target="_blank" href="https://developer.nvidia.com/isaac?size=n_6_n&amp;sort-field=featured&amp;sort-direction=desc"><span style="font-weight: 400;">NVIDIA Isaac</span></a><span style="font-weight: 400;"> open robotics development platform, researchers can create photorealistic, high-fidelity virtual home environments populated with diverse objects and layouts, allowing robots to practice millions of task variations and safely test rare or complex scenarios.</span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/rtx-pro-6000-family/"><span style="font-weight: 400;">NVIDIA RTX PRO 6000 Blackwell GPUs</span></a><span style="font-weight: 400;"> for training large models and </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-thor/"><span style="font-weight: 400;">NVIDIA Jetson AGX Thor</span></a><span style="font-weight: 400;"> developer kits for efficient deployment on physical robots help bridge the gap between research and real-world applications.</span></p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-scaled.jpg"><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-92225" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-1680x1122.jpg" alt="" width="1200" height="801" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-1680x1122.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-960x641.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-1280x855.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-1536x1026.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Seungjae-Lee-UMIACS-630x421.jpg 630w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></a></p>
<p><span style="font-weight: 400;">By combining advancements in generative AI, sequential decision-making and scalable computing, the work represents a key step toward general-purpose robots that can support people in homes, healthcare settings and beyond.</span></p>
<h2 id="fellowship" class="wp-block-heading" style="font-size: 24px;"><b>Announcing the MassRobotics Fellowship <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#fellowship"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">The </span><a target="_blank" href="https://www.massrobotics.org/physical-ai-fellowship-cohort-2/"><span style="font-weight: 400;">second cohort</span></a><span style="font-weight: 400;"> of the Amazon Web Services (AWS) MassRobotics fellowship comprises startups recognized for compelling industrial use cases harnessing </span><a target="_blank" href="https://www.nvidia.com/en-us/industries/robotics/"><span style="font-weight: 400;">robotics</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/intelligent-video-analytics-platform/"><span style="font-weight: 400;">computer vision</span></a><span style="font-weight: 400;">. They will receive access to technical resources and AWS cloud credits.</span></p>
<p><span style="font-weight: 400;">The cohort includes </span><a target="_blank" href="https://www.nvidia.com/en-us/startups/"><span style="font-weight: 400;">NVIDIA Inception members</span></a><span style="font-weight: 400;"> Burro, Config Intelligence, Deltia, Haply Robotics, Luminous Robotics, Roboto AI, Telexistence, Terra Robotics and WiRobotics, each developing technologies spanning humanoid robotics, industrial automation, haptics and agricultural systems.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-92213 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Burro-Collaborative-Robots.jpg" alt="" width="1000" height="667" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Burro-Collaborative-Robots.jpg 1000w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Burro-Collaborative-Robots-960x640.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Burro-Collaborative-Robots-630x420.jpg 630w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></p>
<p><b>Burro</b><span style="font-weight: 400;"> creates autonomous agricultural robots for tasks like grape harvesting and crop scouting.</span></p>
<p><b>Config Intelligence</b><span style="font-weight: 400;"> builds data infrastructure for general-purpose bimanual robotics to enable reliable two-handed tasks in real-world settings.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-92204 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Deltia.jpeg" alt="" width="1280" height="854" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Deltia.jpeg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Deltia-960x641.jpeg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Deltia-630x420.jpeg 630w" sizes="auto, (max-width: 1280px) 100vw, 1280px" /></p>
<p><b>Deltia</b><span style="font-weight: 400;"> provides AI-driven manufacturing intelligence that optimizes assembly lines using computer vision and analytics.</span></p>
<p><b>Haply Robotics</b><span style="font-weight: 400;"> designs haptic control devices that serve as “steering wheels” for physical AI systems across industries.</span></p>
<p><b>Luminous Robotics</b><span style="font-weight: 400;"> deploys AI-powered robotic systems for fast, low-cost solar-panel installation and maintenance.</span></p>
<p><b>Roboto AI</b><span style="font-weight: 400;"> offers a data-analytics platform that accelerates robot development by managing and analyzing robotics data.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-92201 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Telexistence-2.jpg" alt="" width="960" height="510" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Telexistence-2.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Telexistence-2-630x335.jpg 630w" sizes="auto, (max-width: 960px) 100vw, 960px" /></p>
<p><b>Telexistence</b><span style="font-weight: 400;"> develops AI-powered humanoid robots and remote-controlled systems for retail and logistics. </span></p>
<p><b>Terra Robotics</b><span style="font-weight: 400;"> develops laser-weeding agricultural robots to automate sustainable farming.</span></p>
<p><b>WiRobotics</b><span style="font-weight: 400;"> creates wearable walking-assist and humanoid robots to enhance mobility and physical interaction, using data collected from its walking-assist products to train its humanoids.</span></p>
<h2 id="maximo" class="wp-block-heading" style="font-size: 24px;"><b>Accelerating How Utility-Scale Solar Projects Are Built in the Field <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#maximo"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><a target="_blank" href="https://maxrobotics.ai/"><span style="font-weight: 400;">Maximo</span></a><span style="font-weight: 400;">, a solar robotics business incubated within The AES Corporation, recently completed a 100-megawatt solar installation using its robot fleet. Developed with NVIDIA accelerated computing, </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400;">NVIDIA Omniverse libraries</span></a><span style="font-weight: 400;"> and the </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim framework</span></a><span style="font-weight: 400;">, Maximo demonstrated that autonomous installations can operate reliably for utility-scale projects.</span></p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-6" width="1200" height="675" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4?_=6" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4</a></video></div>
<p><span style="font-weight: 400;">The solution improves installation speed, safety and consistency, helping close the gap between rising demand for faster time to power and construction capacity.</span></p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-92169" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-960x540.jpg" alt="" width="960" height="540" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/Sunset_4Robots-400x225.jpg 400w" sizes="auto, (max-width: 960px) 100vw, 960px" /></p>
<p><span style="font-weight: 400;">As solar expansion faces ongoing labor constraints and rising demand, AI-driven field robotics systems like Maximo are helping accelerate infrastructure buildout, reduce costs and redefine how energy projects are delivered.</span></p>
<h2 id="aigen" class="wp-block-heading" style="font-size: 24px;"><b>Aigen Advances Sustainable Farming With Agricultural Robotics <a href="https://blogs.nvidia.com/blog/national-robotics-week-2026/#aigen"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f517.png" alt="🔗" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a></b></h2>
<p><span style="font-weight: 400;">To help regenerate the Earth, </span><a target="_blank" href="https://www.linkedin.com/posts/nvidia-for-startups_aigen-regenerating-earth-with-robotics-and-activity-7444752936771170304-UMpl/"><span style="font-weight: 400;">Aigen’s solar-powered autonomous robots</span></a><span style="font-weight: 400;"> are breaking farmers’ dependency on chemicals through precision weed control powered by vision AI.</span></p>
<p><span style="font-weight: 400;">The <a target="_blank" href="https://www.nvidia.com/en-us/startups/">NVIDIA Inception</a> startup is building a new kind of farming system that’s powered by clean energy and continuously enriched by data. Aigen’s fleet of solar-driven rovers uses advanced computer vision to identify and remove weeds, dramatically reducing the need for herbicides.</span></p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Copy-of-Synthetic_Data.gif"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-92139" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Copy-of-Synthetic_Data.gif" alt="" width="1280" height="720" /></a></p>
<p><span style="font-weight: 400;">Farming has no standard environment. Every field is different — different crops, soil, equipment, weeds, growth stages and geographies. That fragmentation makes real-world data collection slow, expensive and inconsistent. By post-training </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400;">NVIDIA Cosmos open world foundation models</span></a><span style="font-weight: 400;"> on its specialized data and harnessing </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400;">NVIDIA Isaac Sim</span></a><span style="font-weight: 400;"> pipelines, Aigen is building a system that generalizes across millions of agricultural scenarios.</span></p>
<p><span style="font-weight: 400;">On the ground, each rover runs inference using an </span><a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/"><span style="font-weight: 400;">NVIDIA Jetson Orin</span></a><span style="font-weight: 400;"> edge AI module to distinguish crops from weeds in real time.</span></p>
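<p><span style="font-weight: 400;">As a rough illustration of the kind of per-pixel crop-versus-soil decision such a vision pipeline makes, the classical excess-green index below separates green vegetation from background. This is only a toy baseline; Aigen's rovers run trained neural networks on the Jetson module rather than this heuristic.</span></p>

```python
import numpy as np

def excess_green_mask(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Toy per-pixel vegetation segmentation via the excess-green index.

    rgb: H x W x 3 float array with values in [0, 1].
    Returns a boolean H x W mask, True where vegetation is likely.
    A production system would run a trained model on the Jetson module;
    this index is only a classical baseline for illustration.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b  # excess-green index: high for green pixels
    return exg > threshold

# Two synthetic pixels: one green (plant-like), one brown (soil-like).
img = np.array([[[0.2, 0.8, 0.2], [0.5, 0.4, 0.3]]])
mask = excess_green_mask(img)
print(mask)  # [[ True False]]
```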
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-92122-7" width="1200" height="673" preload="metadata" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4?_=7" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4</a></video></div>
<p><span style="font-weight: 400;">With these rovers, farmers can grow crops more sustainably and profitably, using regenerative practices that heal the land and foster ecological balance.</span></p>
]]></content:encoded>
					
		
		<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/Aigen_Element_Weeding_1.mp4" length="4713492" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/maximo-nrw.mp4" length="18590393" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/oss-on-jetson-models-nrw.mp4" length="12405774" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nvidia_cosmos_accelerates_ai_training_for_robotics.mp4" length="2338942" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/Put_the_onion_in_the_wood_bowl_0_viewport_3X.mp4" length="824207" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/nemoclaw-isaacsim.mp4" length="13912313" type="video/mp4" />
<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/04/GTC26-Robots_16x9_v3-2-1.mp4" length="17798316" type="video/mp4" />

				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/robotics-tech-blog-nrw-rolling-blog-1280x680-1.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/robotics-tech-blog-nrw-rolling-blog-1280x680-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[National Robotics Week — Latest Physical AI Research, Breakthroughs and Resources]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Strength and Destiny Collide: ‘Samson: A Tyndalston Story’ Arrives in the Cloud</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-samson-a-tyndalston-story/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 13:00:57 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92248</guid>

					<description><![CDATA[A timeless story of grit, faith and rebellion takes center stage as Samson: A Tyndalston Story joins the GeForce NOW library today.  The highly anticipated release from Liquid Swords can now be streamed on nearly any device with GeForce NOW bringing cinematic intensity and mythic storytelling to the cloud. Catch it as part of four [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400">A timeless story of grit, faith and rebellion takes center stage as </span><i><span style="font-weight: 400">Samson: A Tyndalston Story</span></i><span style="font-weight: 400"> joins the </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/"><span style="font-weight: 400">GeForce NOW</span></a><span style="font-weight: 400"> library today. </span></p>
<p><span style="font-weight: 400">The highly anticipated release from Liquid Swords can now be streamed on nearly any device with GeForce NOW, bringing cinematic intensity and mythic storytelling to the cloud.</span></p>
<p><span style="font-weight: 400">Catch it as one of four new games in the cloud this week.</span></p>
<h2><b>Stream the Power</b></h2>
<figure id="attachment_92256" aria-describedby="caption-attachment-92256" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92256" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1680x945.jpg" alt="Samson on GeForce NOW" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_sdfsdfsf-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92256" class="wp-caption-text"><em>A new legend rises.</em></figcaption></figure>
<p><span style="font-weight: 400">Tyndalston is a city built on debt, muscle and memory. </span><i><span style="font-weight: 400">Samson: A Tyndalston Story</span></i><span style="font-weight: 400"> from Liquid Swords follows Samson, a former enforcer pulled back to the streets that made him. Violence is currency as every fight is personal, every hit carries history and every escape feels earned in a city that never forgives.</span></p>
<p><span style="font-weight: 400">Gameplay blends cinematic melee action with choice-driven narrative progression. Every confrontation — from shadowed alley brawls to large-scale set pieces — feels purposeful, reflecting Samson’s internal struggle between vengeance and redemption. Brawls hit fast and close. Cars aren’t set pieces — they’re weapons. Momentum and terrain decide if the player walks away or falls harder. Every job, debt and decision cuts toward freedom or collapse.</span></p>
<p><span style="font-weight: 400">The game takes full advantage of ray-traced global illumination, reflections and shadows, creating a city that feels cinematic and alive. </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/dlss/"><span style="font-weight: 400">NVIDIA DLSS</span></a><span style="font-weight: 400"> 3.5 boosts performance, while </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/reflex/"><span style="font-weight: 400">NVIDIA Reflex</span></a><span style="font-weight: 400"> technology cuts down latency to keep controls razor-sharp during split-second fights. With GeForce NOW, the experience streams instantly at maximum fidelity, even without the latest hardware. No waiting around for downloads or worrying about system specs; just dive straight into the grit and glow of Tyndalston.</span></p>
<h2><b>Celebrate New Games</b></h2>
<figure id="attachment_92253" aria-describedby="caption-attachment-92253" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-92253 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-1680x840.jpg" alt="No arms, no problem." width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/iceman-tw-li-2048x1024-3.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92253" class="wp-caption-text"><em>No arms, no problem.</em></figcaption></figure>
<p><span style="font-weight: 400">Celebrate three decades of Rayman with the definitive edition of the platforming classic in </span><i><span style="font-weight: 400">Rayman 30th Anniversary Edition</span></i><span style="font-weight: 400">, featuring five versions from iconic consoles, over 120 additional levels and an exclusive documentary that explores the creation of the limbless hero. Stream it on GeForce NOW without having to wait around for downloads or updates. </span></p>
<p><span style="font-weight: 400">In addition, members can look for the following:</span></p>
<ul>
<li><i><span style="font-weight: 400">Samson </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3634520?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 8, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Morbid Metal</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/1866130?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 8, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">DayZ</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://www.xbox.com/games/store/dayz/bsr9nlhvf1kl?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass, April 9) </span></li>
<li><i><span style="font-weight: 400">Rayman: 30th Anniversary Edition</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/4094670?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://store.ubi.com/69683af797044c480eb79e03.html?ucid=AFL-ID_152062&amp;maltcode=geforcenow_convst_AFL_geforcenow_vg__STORE____&amp;addinfo="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
</ul>
<p><span style="font-weight: 400">Also GeForce RTX 5080-ready this week, in addition to </span><i><span style="font-weight: 400">Samson</span></i><span style="font-weight: 400"> and </span><i><span style="font-weight: 400">Morbid Metal</span></i><span style="font-weight: 400">:</span></p>
<ul>
<li><i><span style="font-weight: 400">Starfield</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/1716740?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/starfield-standard-edition/9NTFM8RXLJF9?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass)</span></li>
</ul>
<p><span style="font-weight: 400">What are you planning to play this weekend? Let us know on </span><a target="_blank" href="https://www.twitter.com/nvidiagfn"><span style="font-weight: 400">X</span></a><span style="font-weight: 400"> or in the comments below.</span></p>
<blockquote class="twitter-tweet" data-width="550" data-dnt="true">
<p lang="en" dir="ltr">Who&#39;s the most iconic animal from a video game? Drop a pic or gif! <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f63b.png" alt="😻" class="wp-smiley" style="height: 1em; max-height: 1em;" /> (this post isn&#39;t just to farm cute animal pics, no idea what you&#39;re talking about).</p>
<p>&mdash; <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f329.png" alt="🌩" class="wp-smiley" style="height: 1em; max-height: 1em;" /> NVIDIA GeForce NOW (@NVIDIAGFN) <a target="_blank" href="https://twitter.com/NVIDIAGFN/status/2041546542545322416?ref_src=twsrc%5Etfw">April 7, 2026</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-9-nv-blog-1280x680-logo.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-9-nv-blog-1280x680-logo-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Strength and Destiny Collide: ‘Samson: A Tyndalston Story’ Arrives in the Cloud]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>From RTX to Spark: NVIDIA Accelerates Gemma 4 for Local Agentic AI</title>
		<link>https://blogs.nvidia.com/blog/rtx-ai-garage-open-models-google-gemma-4/</link>
		
		<dc:creator><![CDATA[Michael Fukuyama]]></dc:creator>
		<pubDate>Thu, 02 Apr 2026 16:15:58 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[NVIDIA RTX]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[RTX AI Garage]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92019</guid>

					<description><![CDATA[Open models are driving a new wave of on-device AI, extending innovation beyond the cloud to everyday devices. As these models advance, their value increasingly depends on access to local, real-time context that can turn meaningful insights into action.  Designed for this shift, Google’s latest additions to the Gemma 4 family introduce a class of small, fast and omni-capable models built for efficient local execution across a wide range [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span data-contrast="none">Open models are driving a new wave of on-device AI, extending innovation beyond the cloud to everyday devices. As these models advance, their value increasingly depends on access to local, real-time context that can turn meaningful insights into action.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">Designed for this shift, Google’s latest additions to the </span>Gemma 4 family <span data-contrast="none">introduce a class of small, fast and omni-capable models built for efficient local execution across a wide range of devices. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p>Google <span data-contrast="none">and NVIDIA have collaborated to optimize Gemma 4</span><b><span data-contrast="none"> </span></b><span data-contrast="none">for NVIDIA GPUs, enabling efficient performance across a range of systems — from data center deployments to NVIDIA RTX-powered PCs and workstations, the <a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/dgx-spark/">NVIDIA DGX Spark</a> personal AI supercomputer and <a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/product-development/">NVIDIA Jetson Orin Nano</a> edge AI modules.</span></p>
<h2><b><span data-contrast="none">Gemma 4: Compact Models Optimized for NVIDIA GPUs</span></b><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></h2>
<p><span data-contrast="none">The latest additions to the </span>Gemma 4 family of open models<span data-contrast="none"> — spanning E2B, E4B, 26B and 31B variants — are designed for efficient deployment from edge devices to high-performance GPUs. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<figure id="attachment_92036" aria-describedby="caption-attachment-92036" style="width: 1149px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1.png"><img loading="lazy" decoding="async" class="size-full wp-image-92036" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1.png" alt="" width="1149" height="489" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1.png 1149w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1-960x409.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/gemma-4-perf-chart-desktop-light-1-630x268.png 630w" sizes="auto, (max-width: 1149px) 100vw, 1149px" /></a><figcaption id="caption-attachment-92036" class="wp-caption-text">All configurations measured using Q4_K_M quantizations BS = 1, ISL = 4096 and OSL = 128 on NVIDIA GeForce RTX 5090 and Mac M3 Ultra desktops. Token generation throughput measured on llama.cpp b7789, using the llama-bench tool.</figcaption></figure>
<p><span data-contrast="none">This new generation of compact models supports a range of tasks, including:</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<ul>
<li><b><span data-contrast="none">Reasoning: </span></b><span data-contrast="none">Strong performance on complex problem-solving tasks. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></li>
<li><b><span data-contrast="auto">Coding: </span></b><span data-contrast="auto">Code generation and debugging for developer workflows.  </span><span data-ccp-props="{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}"> </span></li>
<li><b><span data-contrast="auto">Agents: </span></b><span data-contrast="auto">Native support for structured tool use (function calling). </span><span data-ccp-props="{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}"> </span></li>
<li><b><span data-contrast="auto">Vision, Video and Audio Capabilities: </span></b><span data-contrast="auto">Enables rich multimodal interactions for object recognition, automated speech recognition and document or video intelligence.</span><span data-ccp-props="{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}"> </span></li>
<li><b><span data-contrast="auto">Interleaved Multimodal Input: </span></b><span data-contrast="auto">Mix text and images in any order within a single prompt. </span><span data-ccp-props="{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}"> </span></li>
<li><b><span data-contrast="auto">Multilingual: </span></b><span data-contrast="auto">Out-of-the-box support for 35+ languages, pretrained on 140+ languages.</span><span data-ccp-props="{}"> </span></li>
</ul>
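<p><span data-contrast="none">To make the agentic "structured tool use" bullet concrete, the sketch below builds a chat request in the common OpenAI-style tools format that local runtimes such as Ollama accept. The <code>get_weather</code> tool, its parameters and the <code>gemma-4</code> model tag are illustrative placeholders, not part of Gemma 4's own specification.</span></p>

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat request advertising one callable tool.

    The model then responds either with text or with a structured
    tool call naming `get_weather` and its arguments. All names here
    are hypothetical examples.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

req = build_chat_request("gemma-4", "What's the weather in Santa Clara?")
print(json.dumps(req, indent=2))
```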
<p><span data-contrast="none">The </span>E2B and E4B models<span data-contrast="none"> are built for ultraefficient, low-latency inference at the edge, running completely offline across many devices, including Jetson Orin Nano modules. </span></p>
<p><span data-contrast="none">The </span>26B and 31B models<span data-contrast="none"> are designed for high-performance reasoning and developer-centric workflows, making them well suited for agentic AI. Optimized to deliver state-of-the-art, accessible reasoning, these models run efficiently on NVIDIA RTX GPUs and DGX Spark — powering development environments, coding assistants and agent-driven workflows. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">As local agentic AI continues to gain momentum, applications like </span>OpenClaw<span data-contrast="none"> are enabling always-on AI assistants on RTX PCs, workstations and DGX Spark. The latest Gemma 4 models are compatible with OpenClaw, allowing users to build capable local agents that draw context from personal files, applications and workflows to automate tasks. Learn how to run </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/news/open-claw-rtx-gpu-dgx-spark-guide/"><span data-contrast="none">OpenClaw for free on RTX GPUs and DGX Spark</span></a><span data-contrast="none"> or using the </span><a target="_blank" href="https://build.nvidia.com/spark/openclaw"><span data-contrast="none">DGX Spark OpenClaw playbook</span></a><span data-contrast="auto">.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">Check out the <a target="_blank" href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/">Google DeepMind announcement blog</a> to learn more about the latest additions to the Gemma 4 family.</span></p>
<h2><b><span data-contrast="none">Getting Started: Gemma 4 on RTX GPUs and DGX Spark</span></b><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></h2>
<p><span data-contrast="none">NVIDIA has collaborated with Ollama and llama.cpp to provide the best local deployment experience for each of the Gemma 4 models.   </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">To use Gemma 4 locally, users can </span><span data-contrast="none">download Ollama</span><span data-contrast="none"> to run Gemma 4 models </span><span data-contrast="none">or</span><span data-contrast="none"> install </span><span data-contrast="none">llama.cpp</span><span data-contrast="none"> and pair it with the Gemma 4 GGUF Hugging Face checkpoint. </span><span data-contrast="auto">Additionally, </span><span data-contrast="none">Unsloth provides day-one support with optimized and quantized models for efficient local fine-tuning and deployment via Unsloth Studio. Start </span><span data-contrast="auto">running and </span><span data-contrast="none">fine-tuning</span><span data-contrast="auto"> Gemma 4 in Unsloth Studio today.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
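<p><span data-contrast="none">Once a model is pulled, Ollama serves it through a local REST API (by default at <code>http://localhost:11434/api/generate</code>), which any script can call. A minimal sketch follows; the <code>gemma-4</code> model tag is a placeholder, so substitute whatever tag <code>ollama list</code> shows on your machine.</span></p>

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "gemma-4") -> dict:
    # Request body for Ollama's /api/generate endpoint; stream=False
    # asks for a single JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "gemma-4",
             url: str = "http://localhost:11434/api/generate") -> str:
    # POST the prompt to the locally running Ollama server and return
    # the generated text. Requires `ollama serve` with the model pulled.
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Explain on-device inference in one sentence."))
```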
<p><span data-contrast="none">Running open models like the Gemma 4 family on NVIDIA GPUs achieves optimal performance because NVIDIA Tensor Cores accelerate AI inference workloads to deliver higher throughput and lower latency for local execution. Plus, the CUDA software stack ensures broad compatibility across leading frameworks and tools, enabling new models to run efficiently from day one. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">This combination allows open models like Gemma 4 to scale across a wide range of systems — from Jetson Orin Nano at the edge to RTX PCs, workstations and DGX Spark — without requiring extensive optimization.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none">Check out </span><span data-contrast="none">the </span><a target="_blank" href="https://developer.nvidia.com/blog/bringing-ai-closer-to-the-edge-and-on-device-with-gemma-4/"><span data-contrast="none">NVIDIA technical blog</span></a><span data-contrast="none"> </span><span data-contrast="none">for more details on how to get started with Gemma 4 on NVIDIA GPUs and learn more about</span><span data-contrast="none"> NVIDIA’s work on </span><a href="https://blogs.nvidia.com/blog/ai-future-open-and-proprietary/"><span data-contrast="none">open models</span></a><span data-contrast="none">.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<h2><b><span data-contrast="none">#ICYMI: The Latest Updates for RTX AI PCs</span></b><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></h2>
<p><span data-contrast="none"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2728.png" alt="✨" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Catch up on </span><a href="https://blogs.nvidia.com/blog/rtx-ai-garage-gtc-2026-nemoclaw"><span data-contrast="none">RTX AI Garage</span></a><span data-contrast="none"> blogs for a host of agentic AI announcements from NVIDIA GTC, including new open models for local agents — NVIDIA Nemotron 3 Nano 4B and Nemotron 3 Super 120B — as well as optimizations for Qwen 3.5 and Mistral Small 4.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><span data-contrast="none"> NVIDIA recently introduced </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-nemoclaw"><span data-contrast="none">NVIDIA NemoClaw,</span></a><span data-contrast="none"> an open source stack that optimizes OpenClaw experiences on NVIDIA devices by increasing security and supporting local models. </span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><b><span data-contrast="none"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f680.png" alt="🚀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></span></b><b><span data-contrast="none"> </span></b><a target="_blank" href="https://accomplish.ai/"><span data-contrast="none">Accomplish.ai</span></a><span data-contrast="none"> announced Accomplish FREE, a no-cost version of its open source desktop AI agent with built-in models. It harnesses NVIDIA GPUs to run open weight models locally, while a hybrid router dynamically balances workloads between local RTX hardware and the cloud — enabling fast, private, zero-configuration execution without requiring an API key.</span><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><i><span data-contrast="none">Plug in to NVIDIA AI PC on </span></i><a target="_blank" href="https://www.facebook.com/NVIDIA.AI.PC/"><i><span data-contrast="none">Facebook</span></i></a><i><span data-contrast="none">, </span></i><a target="_blank" href="https://www.instagram.com/nvidia.ai.pc/"><i><span data-contrast="none">Instagram</span></i></a><i><span data-contrast="none">, </span></i><a target="_blank" href="https://www.tiktok.com/@nvidia_ai_pc"><i><span data-contrast="none">TikTok</span></i></a><i><span data-contrast="none"> and </span></i><a target="_blank" href="https://x.com/NVIDIA_AI_PC"><i><span data-contrast="none">X</span></i></a><i><span data-contrast="none"> — and stay informed by subscribing to the </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-on-rtx/?modal=subscribe-ai"><i><span data-contrast="none">RTX AI PC newsletter</span></i></a><i><span data-contrast="none">.</span></i><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
<p><i><span data-contrast="none">Follow NVIDIA Workstation on </span></i><a target="_blank" href="https://www.linkedin.com/showcase/3761136/"><i><span data-contrast="none">LinkedIn</span></i></a><i><span data-contrast="none"> and </span></i><a target="_blank" href="https://x.com/NVIDIAworkstatn"><i><span data-contrast="none">X</span></i></a><i><span data-contrast="none">. </span></i><span data-ccp-props="{&quot;335559739&quot;:0}"> </span></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/eevee-nv-blog-1280x680-1.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/eevee-nv-blog-1280x680-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[From RTX to Spark: NVIDIA Accelerates Gemma 4 for Local Agentic AI]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Press Start on April: GeForce NOW Brings 10 Games to the Cloud</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-april-2026-games-list/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 02 Apr 2026 13:00:09 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=92005</guid>

					<description><![CDATA[No joke — GFN Thursday is skipping the tricks and heading straight into the games. April kicks off with ten new titles, bringing fresh adventures to GeForce NOW, including the launch of Capcom’s highly anticipated PRAGMATA. A dozen new games are available to stream this week, including Arknights: Endfield, which expands the acclaimed series into a full [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400">No joke — GFN Thursday is skipping the tricks and heading straight into the games. April kicks off with </span><span style="font-weight: 400">ten</span><span style="font-weight: 400"> new titles, bringing fresh adventures to </span><a target="_blank" href="https://geforcenow.com"><span style="font-weight: 400">GeForce NOW</span></a><span style="font-weight: 400">, including the launch of Capcom’s highly anticipated </span><i><span style="font-weight: 400">PRAGMATA.</span></i></p>
<p><span style="font-weight: 400">A dozen </span><span style="font-weight: 400">new games are available to stream this week, including </span><i><span style="font-weight: 400">Arknights: Endfield</span></i><span style="font-weight: 400">, which expands the acclaimed series into a full 3D real‑time strategy adventure. On GeForce NOW, every battle flows with precision and every mission looks sharper than ever.</span></p>
<p><span style="font-weight: 400">So gear up, grab a controller or gaming device of choice, and get ready to stream — another month of great gaming is now underway.</span></p>
<h2><b>Command the Frontier</b></h2>
<figure id="attachment_92010" aria-describedby="caption-attachment-92010" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92010" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1680x945.jpg" alt="Arknights Endfield on GeForce NOW" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Arknights_Endfield-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92010" class="wp-caption-text"><em>Reclaim the frontier using cloud technology.</em></figcaption></figure>
<p><i><span style="font-weight: 400">Arknights: Endfield</span></i><span style="font-weight: 400"> from Hypergryph expands the acclaimed </span><i><span style="font-weight: 400">Arknights </span></i><span style="font-weight: 400">universe into a full 3D real‑time strategy role-playing game. Blending tactical planning with sleek sci‑fi aesthetics, the title invites players into a world featuring terraformed settlements, advanced technology and looming threats beneath the planet’s surface.</span></p>
<p><span style="font-weight: 400">Set on the perilous planet Talos‑II, </span><i><span style="font-weight: 400">Endfield </span></i><span style="font-weight: 400">follows a group of pioneers uncovering lost secrets and battling hostile factions. The game seamlessly merges base‑building, exploration and combat — with squads of operators coordinating in real time to overcome environmental hazards and powerful enemies. Every decision impacts survival, progress and the unfolding mystery of the world.</span></p>
<p><span style="font-weight: 400">On GeForce NOW, </span><i><span style="font-weight: 400">Arknights: Endfield</span></i><span style="font-weight: 400"> can be played at the highest settings from virtually any device, enabling crisp visuals and high performance without compromise. GeForce RTX rendering brings the game’s metallic skylines and glowing wastelands to life, while ultralow-latency streaming ensures every tactical command lands with precision. </span></p>
<h2><b>Spring Into April</b></h2>
<figure id="attachment_92013" aria-describedby="caption-attachment-92013" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-92013" src="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-1680x840.jpg" alt="MegaMan Star Force Legacy Collection" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/04/GFN_Thursday-Mega_Man_Star_Force_Legacy_Collection.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-92013" class="wp-caption-text"><em>He’s back.</em></figcaption></figure>
<p><span style="font-weight: 400">Capcom’s </span><i><span style="font-weight: 400">Mega Man Star Force Legacy Collection</span></i><span style="font-weight: 400"> includes seven games, along with additional features such as a gallery of illustrations and music. Eleven‑year‑old Geo Stelar is a grieving boy who isolates himself after the mysterious disappearance of his astronaut father. His life changes when he encounters an extraterrestrial being named Omega‑Xis, who grants him the power to become Mega Man. The collection streams instantly with GeForce NOW, turning any device into a </span><i><span style="font-weight: 400">Star Force</span></i><span style="font-weight: 400"> terminal ready to save the world once more.</span></p>
<p><span style="font-weight: 400">Check out what else is available this week:</span><i></i></p>
<ul>
<li><i><span style="font-weight: 400">Hozy </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3326230?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 30)</span></li>
<li><i><span style="font-weight: 400">Cooking Simulator 2: Better Together </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2455360?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 31)</span></li>
<li><i><span style="font-weight: 400">Legacy of Kain: Ascendance </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/4233530?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 31)</span></li>
<li><i><span style="font-weight: 400">Subliminal </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2300840?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 31)</span></li>
<li><i><span style="font-weight: 400">Super Meat Boy 3D </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3288210?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 31)</span></li>
<li><i><span style="font-weight: 400">I Am Jesus Christ </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/1198970?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 2)</span></li>
<li><i><span style="font-weight: 400">ALL WILL FALL </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2706020?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 3, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Arknights: Endfield </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://endfield.gryphline.com/?utm_source=nvidia&amp;utm_medium=referral&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Official Site</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Mega Man Star Force Legacy Collection</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/3500390?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Nova Roma </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2426530?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.xbox.com/games/store/nova-roma-game-preview/9nbnfbq546dt?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass)</span></li>
<li><i><span style="font-weight: 400">RuneScape: Dragonwilds </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/1374490?utm_source=nvidia&amp;utm_campaign=geforce_now'"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Way of the Hunter 2</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/2543830?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, GeForce RTX 5080-ready)</span></li>
</ul>
<p><span style="font-weight: 400">And look forward to the games coming throughout the month:</span><i></i></p>
<ul>
<li><i><span style="font-weight: 400">Vampire Crawlers: The Turbo Wildcard from Vampire Survivors </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3265700?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 21)</span></li>
<li><i><span style="font-weight: 400">Samson </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3634520?utm_source=nvidia&amp;utm_campaign=geforce_now'"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 8)</span></li>
<li><i><span style="font-weight: 400">Replaced</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/1663850?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400"> and Xbox, available on Game Pass, April 14)</span></li>
<li><i><span style="font-weight: 400">Cthulhu: The Cosmic Abyss </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2760560?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 16)</span></li>
<li><i><span style="font-weight: 400">PRAGMATA </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3357650?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 17)</span></li>
<li><i><span style="font-weight: 400">Outbound </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2681030?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 23)</span></li>
<li><i><span style="font-weight: 400">Heroes of Might and Magic: Olden Era</span></i><span style="font-weight: 400"> (New release on </span><a target="_blank" href="https://store.steampowered.com/app/3105440?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 30)</span></li>
<li><i><span style="font-weight: 400">Bus Bound </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2095420?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, April 30)</span></li>
</ul>
<h2><b>More of March</b></h2>
<p><span style="font-weight: 400">In addition to the 15 games announced last month, </span><span style="font-weight: 400">a dozen </span><span style="font-weight: 400">more joined the </span><a target="_blank" href="https://play.geforcenow.com"><span style="font-weight: 400">GeForce NOW library</span></a><span style="font-weight: 400">:</span></p>
<ul>
<li><i><span style="font-weight: 400">1348 Ex Voto </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/1895900?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">BATTLETECH </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://www.xbox.com/games/store/battletech/9NQVDQS2BC10?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on Game Pass)</span></li>
<li><i><span style="font-weight: 400">Cooking Simulator 2: Better Together </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2455360?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Despot’s Game </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://www.xbox.com/games/store/despots-game/9P5ZDVMCJMFD?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Xbox</span></a><span style="font-weight: 400">, available on the Microsoft Store)</span></li>
<li><i><span style="font-weight: 400">Diablo II: Resurrected</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/2536520?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Hozy </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/3326230?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">King’s Quest</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Monster Hunter Stories 3: Twisted Reflection </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/2852190?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Super Meat Boy 3D </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/3288210?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Warcraft I: Remastered </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Warcraft II: Remastered </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.ubisoft.com/ubisoftplus?ucid=AFL-272089&amp;addinfo=&amp;bi="><span style="font-weight: 400">Ubisoft</span></a><span style="font-weight: 400">)</span></li>
<li><i><span style="font-weight: 400">Way of the Hunter 2</span></i><span style="font-weight: 400"> (</span><a target="_blank" href="https://store.steampowered.com/app/2543830?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
</ul>
<p><span style="font-weight: 400">What are you planning to play this weekend? Check out </span><i><span style="font-weight: 400">Crimson Desert</span></i><span style="font-weight: 400"> on GeForce NOW in Anytime Anywhere Gaming’s </span><a target="_blank" href="https://www.youtube.com/watch?v=cq0ZdXjx_2k"><span style="font-weight: 400">YouTube review</span></a><span style="font-weight: 400">.</span></p>
<p>&nbsp;</p>
<p><iframe loading="lazy" title="Can GeForce Now Handle Crimson Desert At 5K MAX Settings?" width="1200" height="675" src="https://www.youtube.com/embed/cq0ZdXjx_2k?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-2-nv-blog-1280x680-logo-1.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/04/gfn-thursday-4-2-nv-blog-1280x680-logo-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Press Start on April: GeForce NOW Brings 10 Games to the Cloud]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Efficiency at Scale: NVIDIA, Energy Leaders Accelerating Power‑Flexible AI Factories to Fortify the Grid</title>
		<link>https://blogs.nvidia.com/blog/energy-efficiency-ai-factories-grid/</link>
		
		<dc:creator><![CDATA[Vladimir Troy]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 15:00:51 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Corporate]]></category>
		<category><![CDATA[AI for Good]]></category>
		<category><![CDATA[Omniverse Enterprise]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91796</guid>

					<description><![CDATA[CERAWeek — dubbed the Davos of energy — is where policymakers, producers, technologists and financiers gather to discuss how the world powers itself next.  NVIDIA and Emerald AI unveiled at the conference last week a new way forward — treating AI factories not as static power loads but as flexible, intelligent grid assets. This collaboration [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">CERAWeek — dubbed the Davos of energy — is where policymakers, producers, technologists and financiers gather to discuss how the world powers itself next. </span></p>
<p><span style="font-weight: 400;">At the conference last week, NVIDIA and Emerald AI </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-and-emerald-ai-join-leading-energy-companies-to-pioneer-flexible-ai-factories-as-grid-assets"><span style="font-weight: 400;">unveiled</span></a><span style="font-weight: 400;"> a new way forward — treating AI factories not as static power loads but as flexible, intelligent grid assets. This collaboration unifies accelerated computing, AI factory reference architectures and real‑time energy orchestration, helping large AI deployments connect to the grid faster, operate more efficiently and fortify system reliability.</span></p>
<p><span style="font-weight: 400;">Built on the </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-releases-vera-rubin-dsx-ai-factory-reference-design-and-omniverse-dsx-digital-twin-blueprint-with-broad-industry-support"><span style="font-weight: 400;">NVIDIA Vera Rubin DSX AI Factory reference design</span></a><span style="font-weight: 400;"> and Emerald AI’s Conductor platform, the approach brings together compute, power networking and control into a single architecture. The result is an AI factory that can generate high‑value AI tokens while dynamically responding to grid conditions — flexing when needed, supporting reliability and reducing the need to overbuild infrastructure for peak demand.</span></p>
<p><span style="font-weight: 400;">AES, Constellation, Invenergy, NextEra Energy, Nscale Energy &amp; Power and Vistra are working to build the energy generation capacity needed to meet rapidly growing power demand. The companies plan to collaborate on optimized generation strategies to support AI factories built on the NVIDIA and Emerald AI architecture, including hybrid projects that use co‑located power to accelerate time to power while delivering value to the broader grid. By pairing large AI loads with flexible operations, new generation resources and intelligent controls, this approach strengthens grid reliability. </span></p>
<p><span style="font-weight: 400;">It’s an important milestone in grid resilience, supported by an ecosystem for advanced AI factories. This new computing infrastructure paradigm — described by NVIDIA founder and CEO Jensen Huang as a </span><a href="https://blogs.nvidia.com/blog/ai-5-layer-cake/"><span style="font-weight: 400;">five-layer AI cake</span></a><span style="font-weight: 400;"> — has energy as its foundational layer. </span></p>
<h2><b>Driving Improvements in Tokens Per Second Per Watt</b></h2>
<p><span style="font-weight: 400;">Power constraints are reshaping AI data centers, with energy efficiency, or </span><a target="_blank" href="https://developer.nvidia.com/blog/scaling-token-factory-revenue-and-ai-efficiency-by-maximizing-performance-per-watt/"><span style="font-weight: 400;">performance per watt</span></a><span style="font-weight: 400;"> (specifically, tokens per second per watt), becoming the defining metric of modern computing infrastructure. By prioritizing computational efficiency, organizations can lower operating costs, maximize revenue and create a resilient digital infrastructure for businesses and consumers across America and worldwide. </span></p>
<p><span style="font-weight: 400;">“Power is a concern, but it’s not the only concern,” Huang </span><a target="_blank" href="https://www.youtube.com/watch?v=vif8NQcjVf0"><span style="font-weight: 400;">said</span></a><span style="font-weight: 400;"> on a recent Lex Fridman podcast. “That’s the reason why we’re pushing so hard on extreme codesign, so that we can improve the tokens per second per watt orders of magnitude every single year.” </span></p>
<p><span style="font-weight: 400;">NVIDIA has a long history of driving performance and energy efficiency. From the NVIDIA Kepler GPU in 2012 to the NVIDIA Vera Rubin platform this year, the number of tokens generated within the same power budget has increased by more than 1 million times. </span></p>
<p><span style="font-weight: 400;">It takes industry collaboration across the five-layer AI cake — from energy to chips, infrastructure, models and applications — to make this happen.</span></p>
<h2><b>Robotics, Digital Twins and AI Upskilling Drive Energy Advances</b></h2>
<p><span style="font-weight: 400;">NVIDIA ecosystem partners showcased at the event how AI, simulation and workforce innovation are accelerating the energy infrastructure needed to support the intelligence era. Announcements from Maximo, TerraPower and Adaptive Construction Solutions exemplify how AI is compressing timelines across construction, power generation and talent development.</span></p>
<p><b>Maximo</b><span style="font-weight: 400;">, a solar robotics company incubated at AES, announced the completion of a 100‑megawatt robotic solar installation at AES’ Bellefield site. Using AI‑driven robotics developed with NVIDIA accelerated computing, NVIDIA Omniverse libraries and the NVIDIA Isaac Sim framework, Maximo demonstrated that autonomous installations can now operate reliably at utility scale. The approach improves installation speed, safety and consistency, helping close the gap between rising electricity demand and construction capacity.</span></p>
<p><b>TerraPower</b><span style="font-weight: 400;">, working with SoftServe, previewed an NVIDIA Omniverse‑powered digital twin platform designed to dramatically shorten advanced nuclear plant siting and design timelines. By applying AI and simulation to early‑stage engineering, the platform reduces design cycles from years to months, accelerating deployment of TerraPower’s Natrium energy plants while improving design and grid integration.</span></p>
<p><b>Adaptive Construction Solutions</b><span style="font-weight: 400;"> announced a national registered apprenticeship initiative, in collaboration with NVIDIA, to help build the skilled workforce required for AI factories and energy infrastructure. The program aims to scale training for critical trades, expanding access to high‑demand careers while supporting the rapid buildout of AI‑driven power systems.</span></p>
<p><span style="font-weight: 400;">These efforts demonstrated how AI, digital twins and workforce innovation are converging to deliver faster, more resilient energy infrastructure.</span></p>
<h2><b>Coming Together on Scaling AI Factories for Grid Reliability </b></h2>
<p><span style="font-weight: 400;">GE Vernova, Schneider Electric and Vertiv highlighted how digital twins, validated reference designs and converged infrastructure are becoming essential to scaling AI factories as reliable grid participants. The announcements address the “power‑to‑rack” challenge — designing AI infrastructure as an integrated energy and compute system from day one.</span></p>
<p><b>GE Vernova</b><span style="font-weight: 400;"> outlined how high‑fidelity digital twins aligned with the </span><a target="_blank" href="https://build.nvidia.com/nvidia/omniverse-dsx-blueprint-for-ai-factories"><span style="font-weight: 400;">NVIDIA Omniverse DSX Blueprint</span></a><span style="font-weight: 400;"> enable utilities and developers to simulate grid behavior, substations and AI factory loads together before deployment. Such system‑level modeling helps validate interconnection strategies, reduce risk and accelerate time to power in constrained grid environments.</span></p>
<p><b>Schneider Electric</b><span style="font-weight: 400;"> announced new validated NVIDIA Vera Rubin reference designs and lifecycle digital twin architectures developed with AVEVA. By simulating power, cooling and controls in Omniverse, Schneider enables operators to optimize performance per watt, validate designs before buildout and operate AI factories more efficiently and predictably at scale.</span></p>
<p><iframe loading="lazy" title="NVIDIA GTC Studio with Insights from Schneider Electric" width="1200" height="675" src="https://www.youtube.com/embed/lBNHFR41GgY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><b>Vertiv </b><span style="font-weight: 400;">outlined converged, simulation‑ready physical infrastructure built on repeatable power and cooling building blocks. Integrated with the Vera Rubin DSX reference design, Vertiv’s approach reduces design and deployment complexity while supporting faster, more confident scaling of AI factories.</span></p>
<p><iframe loading="lazy" title="NVIDIA GTC Studio with Insights from Vertiv" width="1200" height="675" src="https://www.youtube.com/embed/MyNEr7tf-WQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400;">Together, these industry efforts provide a digital path forward, including the validated architectures and physical infrastructure needed to turn AI factories into flexible, grid‑aware assets for efficiently powering the world.</span></p>
<p><i><span style="font-weight: 400;">Learn more about how NVIDIA and its partners are advancing </span></i><a target="_blank" href="https://www.nvidia.com/en-us/industries/energy/"><i><span style="font-weight: 400;">energy solutions with AI and high-performance computing</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/energy-promo-pack-rolling-corp-blog-1920x1080-1-e1774982193753.jpg" type="image/jpeg" width="1882" height="1008">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/energy-promo-pack-rolling-corp-blog-1920x1080-1-e1774982193753-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Efficiency at Scale: NVIDIA, Energy Leaders Accelerating Power‑Flexible AI Factories to Fortify the Grid]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Into the Omniverse: NVIDIA GTC Showcases Virtual Worlds Powering the Physical AI Era</title>
		<link>https://blogs.nvidia.com/blog/gtc-2026-virtual-worlds-physical-ai/</link>
		
		<dc:creator><![CDATA[Heather McDiarmid]]></dc:creator>
		<pubDate>Thu, 26 Mar 2026 15:00:56 +0000</pubDate>
				<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Cosmos]]></category>
		<category><![CDATA[Digital Twin]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<category><![CDATA[Industrial and Manufacturing]]></category>
		<category><![CDATA[Into the Omniverse]]></category>
		<category><![CDATA[Isaac]]></category>
		<category><![CDATA[Jetson]]></category>
		<category><![CDATA[NVIDIA Blueprints]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Physical AI]]></category>
		<category><![CDATA[Universal Scene Description]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91763</guid>

					<description><![CDATA[Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners, and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse. NVIDIA GTC last week showcased a turning point in physical AI: Robots, vehicles and factories are scaling from single use cases and [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><i><span style="font-weight: 400">Editor’s note: This post is part of </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/news/"><i><span style="font-weight: 400">Into the Omniverse</span></i></a><i><span style="font-weight: 400">, a series focused on how developers, 3D practitioners, and enterprises can transform their workflows using the latest advances in </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/"><i><span style="font-weight: 400">OpenUSD</span></i></a><i><span style="font-weight: 400"> and </span></i><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/"><i><span style="font-weight: 400">NVIDIA Omniverse</span></i></a><i><span style="font-weight: 400">.</span></i></p>
<p><span style="font-weight: 400">NVIDIA GTC last week showcased a turning point in </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/generative-physical-ai/"><span style="font-weight: 400">physical AI</span></a><span style="font-weight: 400">: Robots, vehicles and factories are scaling from single use cases and isolated deployments to sophisticated enterprise workloads across industries. </span></p>
<p><span style="font-weight: 400">At the center of this shift are new frontier models for physical AI, including NVIDIA Cosmos 3, NVIDIA Isaac GR00T N1.7 and NVIDIA Alpamayo 1.5. </span></p>
<p><span style="font-weight: 400">NVIDIA also released the </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-open-physical-ai-data-factory-blueprint-to-accelerate-robotics-vision-ai-agents-and-autonomous-vehicle-development"><span style="font-weight: 400">NVIDIA Physical AI Data Factory Blueprint</span></a><span style="font-weight: 400">, designed to push the state of the art in world modeling, humanoid skills and autonomous driving, as well as the </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-releases-vera-rubin-dsx-ai-factory-reference-design-and-omniverse-dsx-digital-twin-blueprint-with-broad-industry-support"><span style="font-weight: 400">NVIDIA Omniverse DSX Blueprint</span></a><span style="font-weight: 400"> for AI factory digital twin simulation.</span></p>
<p><span style="font-weight: 400">Open source agentic frameworks such as OpenClaw extend the AI stack all the way to operations — enabling long‑running “claws” that use tools, memory and messaging interfaces to orchestrate workflows, manage data pipelines and execute tasks autonomously on dedicated machines. </span></p>
<p><span style="font-weight: 400">“With NVIDIA and the broader ecosystem, we’re building the claws and guardrails that let anyone create powerful, secure AI assistants,” said Peter Steinberger, creator of OpenClaw, in an </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-nemoclaw"><span style="font-weight: 400">NVIDIA press release</span></a><span style="font-weight: 400"> from GTC. </span></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/"><span style="font-weight: 400">OpenUSD</span></a><span style="font-weight: 400"> is a driving force behind the scalability of physical AI — providing a common, scene‑description language that lets teams bring computer-aided design (CAD) data, simulation assets and real‑world telemetry into a shared, physically accurate view of the world. </span></p>
<h2><b>Simulating the AI Factory Before It’s Built</b></h2>
<p><span style="font-weight: 400">Modern AI factories are complex — spanning thermals, power grids, network load and mechanical systems. Building them on time and on budget becomes much easier when using simulation technology. </span></p>
<p><span style="font-weight: 400">To tackle this, NVIDIA introduced the </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-releases-vera-rubin-dsx-ai-factory-reference-design-and-omniverse-dsx-digital-twin-blueprint-with-broad-industry-support"><span style="font-weight: 400">Omniverse DSX Blueprint</span></a><span style="font-weight: 400"> at GTC, a reference architecture that unifies simulation across every layer of an AI factory through a single digital twin. This enables operators to optimize performance and efficiency before a rack is installed in the real world.</span></p>
<h2><b>Compute Is Data: Real-World Data Is No Longer the Moat</b></h2>
<p><span style="font-weight: 400">Real-world data used to function as a moat for physical AI — but it doesn’t scale. The real world is messy, unpredictable and full of edge cases, and the pipelines to process, simulate and evaluate data are fragmented. The bottleneck isn’t just data — it’s the entire data factory.</span></p>
<p><span style="font-weight: 400">To help address this, NVIDIA introduced at GTC its </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-open-physical-ai-data-factory-blueprint-to-accelerate-robotics-vision-ai-agents-and-autonomous-vehicle-development"><span style="font-weight: 400">Physical AI Data Factory Blueprint</span></a><span style="font-weight: 400">, an open reference architecture that transforms compute into large-scale, high-quality training data. Built on </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400">NVIDIA Cosmos open world foundation models</span></a><span style="font-weight: 400"> and the </span><a target="_blank" href="https://developer.nvidia.com/osmo"><span style="font-weight: 400">NVIDIA OSMO</span></a><span style="font-weight: 400"> operator, it unifies data curation, augmentation and evaluation into a single pipeline, enabling developers to generate diverse, long-tail datasets from limited real-world inputs.</span></p>
<p><span style="font-weight: 400">Leading physical AI developers including </span><a target="_blank" href="https://www.fieldai.com/news/fieldai-accelerates-industrial-customers-adoption-of-ai-in-collaboration-with-nvidia"><span style="font-weight: 400">FieldAI</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://robotics.hexagon.com/industrial-autonomy-hexagon-robotics-nvidia-physical-ai/"><span style="font-weight: 400">Hexagon Robotics</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://www.prnewswire.com/news-releases/linker-vision-highlights-video-reasoning-ai-at-nvidia-gtc-2026-302714380.html"><span style="font-weight: 400">Linker Vision</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://www.milestonesys.com/company/news/press-releases/ai-as-a-service-at-nvidia-gtc/"><span style="font-weight: 400">Milestone Systems</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://www.skild.ai/blogs/reindustrial-revolution"><span style="font-weight: 400">Skild AI</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.universal-robots.com/news-and-media/news-center/universal-robots-scale-ai-launch-imitation-learning-system-accelerate-ai-training-lab-to-factory/"><span style="font-weight: 400">Teradyne Robotics</span></a><span style="font-weight: 400"> are already tapping the blueprint to speed up robotics projects, vision AI agents and autonomous vehicle programs.</span></p>
<p><a target="_blank" href="https://azure.microsoft.com/en-us"><span style="font-weight: 400;">Microsoft Azure</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://nebius.com/"><span style="font-weight: 400;">Nebius</span></a><span style="font-weight: 400;"> are the first cloud platforms to offer the blueprint, turning world-scale compute into turnkey data production engines.</span></p>
<p><span style="font-weight: 400">“Together with cloud leaders, we’re providing a new kind of agentic engine that transforms compute into the high-quality data required to bring the next generation of autonomous systems and robots to life,” said Rev Lebaredian, vice president of Omniverse and simulation technologies at NVIDIA, in </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-open-physical-ai-data-factory-blueprint-to-accelerate-robotics-vision-ai-agents-and-autonomous-vehicle-development"><span style="font-weight: 400">this press release</span></a><span style="font-weight: 400">. “In this new era, compute is data.”</span></p>
<h2><b>From OpenUSD to Reality: Seamless Design to Deployment</b></h2>
<p><span style="font-weight: 400">Converting CAD files to </span><a target="_blank" href="https://docs.nvidia.com/learn-openusd/latest/glossary.html"><span style="font-weight: 400">OpenUSD</span></a><span style="font-weight: 400"> is a critical step in the physical AI pipeline — transforming engineering data into simulation-ready assets that developers can use to build, test and validate robots in physically accurate virtual environments. </span></p>
<p><span style="font-weight: 400">Using tools like the </span><a target="_blank" href="https://docs.omniverse.nvidia.com/kit/docs/kit-app-template/latest/docs/kit_sdk_overview.html"><span style="font-weight: 400">NVIDIA Omniverse Kit</span></a><span style="font-weight: 400"> software development kit and </span><a target="_blank" href="https://developer.nvidia.com/isaac/sim"><span style="font-weight: 400">NVIDIA Isaac Sim</span></a><span style="font-weight: 400">, teams can optimize and enrich 3D data for real-time rendering, simulation and collaborative workflows.  </span></p>
<p><span style="font-weight: 400">Companies including </span><a target="_blank" href="https://www.fanucamerica.com/news-resources/fanuc-america-press-releases/2026/03/16/fanuc-accelerates-physical-ai-in-industrial-robotics-leveraging-nvidia-technologies"><span style="font-weight: 400">FANUC</span></a><span style="font-weight: 400"> and Fauna Robotics are using this seamless CAD-to-OpenUSD workflow to speed up robotic system design and validation.</span></p>
<h2><b>Transforming Manufacturing and Logistics Through Industrial Digital Twins</b></h2>
<p><span style="font-weight: 400">“Factories themselves are now robotic systems,” Lebaredian said during his special address on digital twins and simulation at GTC. </span></p>
<p><span style="font-weight: 400">All factories are born in simulation. The </span><a target="_blank" href="https://www.nvidia.com/en-us/industries/manufacturing/mega-blueprint/"><span style="font-weight: 400">NVIDIA Mega Omniverse Blueprint</span></a><span style="font-weight: 400"> provides enterprises with a reference architecture to design, test and optimize robot fleets and AI agents in a physically accurate facility digital twin before a single robot is deployed on the floor. </span></p>
<p><a target="_blank" href="https://www.kiongroup.com/en/News-Stories/Press-Releases/Press-Releases-Detail.html?id=1099696911&amp;type=corporate&amp;title=KION%20brings%20physical%20AI%20into%20live%20warehouse%20operations%20at%20GTC%202026%20in%20San%20Jos%C3%A9,%20California"><span style="font-weight: 400">KION</span></a><span style="font-weight: 400">, working with Accenture and Siemens, is using this blueprint to build large-scale warehouse digital twins that train and test fleets of NVIDIA Jetson-based autonomous forklifts for GXO, the world’s largest pure-play contract logistics provider. </span></p>
<h2><b>Physical AI Steps From Simulation to the Real World</b></h2>
<p><iframe loading="lazy" title="Official Keynote Closing Video | GTC 2026" width="1200" height="675" src="https://www.youtube.com/embed/aDT9bBt9HxM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400">NVIDIA is partnering with the global robotics ecosystem — including leading robot brain developers, industrial robot giants and humanoid pioneers — to enhance production-level physical AI. </span></p>
<p><a target="_blank" href="https://new.abb.com/news/detail/134030/prsrl-abb-robotics-partners-with-nvidia-to-deliver-industrial-grade-physical-ai-at-scale"><span style="font-weight: 400">ABB Robotics</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://www.fanucamerica.com/news-resources/fanuc-america-press-releases/2026/03/16/fanuc-accelerates-physical-ai-in-industrial-robotics-leveraging-nvidia-technologies"><span style="font-weight: 400">FANUC</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://www.kuka.com/en-de/company/press/news/2026/03/kuka-amp-nvidia-gtc"><span style="font-weight: 400">KUKA</span></a><span style="font-weight: 400"> and Yaskawa, which have a combined global install base of over 2 million robots, are using </span><a target="_blank" href="https://www.nvidia.com/en-us/omniverse/"><span style="font-weight: 400">NVIDIA Omniverse</span></a><span style="font-weight: 400"> libraries and </span><a target="_blank" href="https://developer.nvidia.com/isaac"><span style="font-weight: 400">NVIDIA Isaac</span></a><span style="font-weight: 400"> simulation frameworks to validate complex robot applications and production lines through physically accurate </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/digital-twin/"><span style="font-weight: 400">digital twins</span></a><span style="font-weight: 400">. These companies have also integrated </span><a target="_blank" href="https://developer.nvidia.com/embedded/jetson-modules"><span style="font-weight: 400">NVIDIA Jetson modules</span></a><span style="font-weight: 400"> into their controllers to enable real-time AI inference. </span></p>
<p><span style="font-weight: 400">Robot development starts with the robot brains, which is why leading developers including FieldAI and </span><a target="_blank" href="https://www.nvidia.com/en-us/case-studies/skild-ai/"><span style="font-weight: 400">Skild AI</span></a><span style="font-weight: 400"> are building theirs using </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/cosmos/"><span style="font-weight: 400">NVIDIA Cosmos</span></a><span style="font-weight: 400"> world models for </span><a target="_blank" href="https://www.nvidia.com/en-us/use-cases/synthetic-data/"><span style="font-weight: 400">data generation</span></a><span style="font-weight: 400"> and Isaac simulation frameworks to validate policies in simulation. </span></p>
<p><span style="font-weight: 400">Meanwhile, Generalist AI is using NVIDIA Cosmos to explore generating synthetic data. This combination allows robots to become proficient in any task — from supply chain monitoring to food delivery — at an exceptional pace. </span></p>
<p><i><span style="font-weight: 400">Read all of NVIDIA’s announcements from GTC on this </span></i><a target="_blank" href="https://nvidianews.nvidia.com/online-press-kit/gtc-2026-news"><i><span style="font-weight: 400">online press kit</span></i></a><i><span style="font-weight: 400"> and </span></i><a target="_blank" href="https://www.youtube.com/watch?v=jw_o0xr8MWU&amp;t=2s"><i><span style="font-weight: 400">watch the keynote replay</span></i></a><i><span style="font-weight: 400">. Catch up on all <a target="_blank" href="https://www.youtube.com/playlist?list=PL3jK4xNnlCVclphegeS4R9JYbhWprKJe_">Physical AI Days</a> sessions from GTC and watch the <a target="_blank" href="https://www.youtube.com/live/MplaRtIZerU?si=Vn-S0BNz4xC-fVjL&amp;t=11130">developer livestream</a> replay. </span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/nv-ov-ito-mar-social-1920x1080_credit.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/nv-ov-ito-mar-social-1920x1080_credit-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Into the Omniverse: NVIDIA GTC Showcases Virtual Worlds Powering the Physical AI Era]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Game On: Five New Titles Now Streaming on GeForce NOW</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-screamer/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 26 Mar 2026 13:00:47 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91730</guid>

					<description><![CDATA[That gaming backlog won’t clear itself — GeForce NOW is here to help. Stream the latest titles straight from the cloud across a variety of devices. This week, five new titles are ready to play instantly in the cloud gaming platform’s library. Screamer drifts onto the scene with retro‑racing attitude and pixel‑perfect speed. Plus, Honkai: Star Rail Version [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">That gaming backlog won’t clear itself — </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/"><span style="font-weight: 400;">GeForce NOW</span></a><span style="font-weight: 400;"> is here to help. Stream the latest titles straight from the cloud across a variety of devices.</span></p>
<p><span style="font-weight: 400;">This week, </span><span style="font-weight: 400;">five</span><span style="font-weight: 400;"> new titles are ready to play instantly in the cloud gaming platform’s library. </span><i><span style="font-weight: 400;">Screamer </span></i><span style="font-weight: 400;">drifts onto the scene with retro‑racing attitude and pixel‑perfect speed. Plus, </span><i><span style="font-weight: 400;">Honkai: Star Rail </span></i><span style="font-weight: 400;">Version 4.1, “Unraveled for Daybreak,” touches down.</span></p>
<h2><b>Hit the Gas</b></h2>
<figure id="attachment_91739" aria-describedby="caption-attachment-91739" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-91739" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-1680x945.jpg" alt="Screamer on GeForce NOW" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday_Screamer-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-91739" class="wp-caption-text"><em>The ‘90s called — it wants the cloud.</em></figcaption></figure>
<p><i><span style="font-weight: 400;">Screamer</span></i><span style="font-weight: 400;"> from Milestone roars back onto the track as a blistering arcade racer that thrives on speed, precision and pure retro attitude.</span></p>
<p><span style="font-weight: 400;">Tight corners and neon‑soaked straights define a style built for thrill seekers who crave the rush of classic ‘90s racing action. The mix of sharp visuals, snappy handling and roaring engines creates an experience that’s equal parts vintage energy and modern muscle.</span></p>
<p><span style="font-weight: 400;">Running on GeForce NOW, </span><i><span style="font-weight: 400;">Screamer</span></i><span style="font-weight: 400;"> puts pedal to the metal with ultralow latency and buttery‑smooth streaming. In the cloud, every race launches instantly, every drift hits with full force and every victory feels just a little louder.</span></p>
<h2><b>Let’s Play Today</b></h2>
<figure id="attachment_91742" aria-describedby="caption-attachment-91742" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-91742 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Honkai_Star_Rail_V4_1-1680x840.jpg" alt="Honkai Star Rail 4.1 on GeForce NOW" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Honkai_Star_Rail_V4_1-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Honkai_Star_Rail_V4_1-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Honkai_Star_Rail_V4_1-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Honkai_Star_Rail_V4_1-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Honkai_Star_Rail_V4_1-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Honkai_Star_Rail_V4_1.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-91742" class="wp-caption-text"><em>All aboard the Astral Express.</em></figcaption></figure>
<p><i><span style="font-weight: 400;">Honkai: Star Rail </span></i><span style="font-weight: 400;">Version 4.1, “Unraveled for Daybreak,” is available now, bringing new adventures aboard the Astral Express. The crew touches down at Star Rail FEST, a grand interstellar celebration packed with new zones, characters and challenges. Detective Ashveil joins as a new five-star Lightning hunter, chasing conspiracies hidden behind the glitz of the Phantasmoon Games. Dive into the new Wispae War Saga, enjoy free Warps and explore fresh Divergent Universe content filled with rewards and events. Play the latest </span><i><span style="font-weight: 400;">Honkai: Star Rail</span></i><span style="font-weight: 400;"> update instantly on GeForce NOW — no installs, just starlight and action.</span></p>
<p><span style="font-weight: 400;">Members can also look for the following:</span><i></i></p>
<ul>
<li><i><span style="font-weight: 400;">Screamer </span></i><span style="font-weight: 400;">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2814990?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400;">Steam</span></a><span style="font-weight: 400;">, March 26, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400;">King’s Quest</span></i><span style="font-weight: 400;"> (New release on Ubisoft, March 25)</span></li>
<li><i><span style="font-weight: 400;">BATTLETECH </span></i><span style="font-weight: 400;">(</span><a target="_blank" href="https://www.xbox.com/games/store/battletech/9NQVDQS2BC10?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400;">Xbox</span></a><span style="font-weight: 400;">, available on Game Pass)</span></li>
<li><i><span style="font-weight: 400;">Despot’s Game </span></i><span style="font-weight: 400;">(</span><a target="_blank" href="https://www.xbox.com/games/store/despots-game/9P5ZDVMCJMFD?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400;">Xbox</span></a><span style="font-weight: 400;">, available on the Microsoft Store)</span></li>
<li><i><span style="font-weight: 400;">Diablo II: Resurrected</span></i><span style="font-weight: 400;"> (</span><a target="_blank" href="https://store.steampowered.com/app/2536520?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400;">Steam</span></a><span style="font-weight: 400;">)</span></li>
</ul>
<p><span style="font-weight: 400;">What are you planning to play this weekend? Let us know on </span><a target="_blank" href="https://www.twitter.com/nvidiagfn"><span style="font-weight: 400;">X</span></a><span style="font-weight: 400;"> or in the comments below.</span></p>
<blockquote class="twitter-tweet" data-width="550" data-dnt="true">
<p lang="en" dir="ltr">You get a magic wand that can save any character from their death in a video game.</p>
<p>But you can only choose one. Who do you save? </p>
<p>Beware spoilers in the replies <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f60f.png" alt="😏" class="wp-smiley" style="height: 1em; max-height: 1em;" /> and choose wisely.</p>
<p>&mdash; <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f329.png" alt="🌩" class="wp-smiley" style="height: 1em; max-height: 1em;" /> NVIDIA GeForce NOW (@NVIDIAGFN) <a target="_blank" href="https://twitter.com/NVIDIAGFN/status/2036835500095312283?ref_src=twsrc%5Etfw">March 25, 2026</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-3-26-blog-2048x1024-1.jpg" type="image/jpeg" width="2048" height="1024">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-3-26-blog-2048x1024-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Game On: Five New Titles Now Streaming on GeForce NOW]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>The Future of AI Is Open and Proprietary</title>
		<link>https://blogs.nvidia.com/blog/ai-future-open-and-proprietary/</link>
		
		<dc:creator><![CDATA[Kari Briski]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 19:00:04 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<category><![CDATA[Inference]]></category>
		<category><![CDATA[Nemotron]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Trustworthy AI]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91731</guid>

					<description><![CDATA[AI is the defining technology of our time, quickly becoming core business infrastructure. It’s fueled by a diverse ecosystem of models: large and small, open and proprietary, generalist and specialist.  This variety is essential for a future where every application will be powered by AI, every country will build it and every company will use [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">AI is the defining technology of our time, quickly becoming core business infrastructure. It’s fueled by a diverse ecosystem of models: large and small, open and proprietary, generalist and specialist. </span></p>
<p><span style="font-weight: 400;">This variety is essential for a future where every application will be powered by AI, every country will build it and every company will use it. And it’s not a debate between open versus closed innovation. </span></p>
<p><span style="font-weight: 400;">As NVIDIA founder and CEO Jensen Huang told attendees at a special session on open frontier models at </span><a target="_blank" href="https://www.nvidia.com/gtc/"><span style="font-weight: 400;">NVIDIA GTC</span></a><span style="font-weight: 400;">, “Proprietary versus open is not a thing. It’s proprietary </span><i><span style="font-weight: 400;">and</span></i><span style="font-weight: 400;"> open.”</span></p>
<p><span style="font-weight: 400;">That’s why the future of AI innovation isn’t about a single massive model. Every industry — healthcare, finance, manufacturing — tackles its own unique challenges. They all need AI that can reason about their data and workflows in various ways. And that requires systems of models, tuned and specialized for different modalities, domains and organizations, working together to solve a specific business problem. </span></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-91747" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-1680x945.jpg" alt="" width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3-400x225.jpg 400w, https://blogs.nvidia.com/wp-content/uploads/2026/03/jhh-open-models-panel-pull-quote-1920x1080-3.jpg 1920w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></p>
<p><span style="font-weight: 400;">NVIDIA is a major contributor to open source AI: it’s now the </span><a target="_blank" href="https://www.linkedin.com/posts/clementdelangue_nvidia-officially-surpassed-google-as-the-activity-7440434938237083648-6I8C?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABp3D98BciuQuFJoFJCZAAiJe2bJwG5n3s4"><span style="font-weight: 400;">largest organization on Hugging Face</span></a><span style="font-weight: 400;">, with </span><a target="_blank" href="https://huggingface.co/nvidia"><span style="font-weight: 400;">nearly 4,000 team members</span></a><span style="font-weight: 400;">. And at GTC, the company announced the </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-launches-nemotron-coalition-of-leading-global-ai-labs-to-advance-open-frontier-models"><span style="font-weight: 400;">NVIDIA Nemotron Coalition</span></a><span style="font-weight: 400;">, a first-of-its-kind global collaboration of model builders and AI labs working to advance open, frontier-level foundation models through shared expertise, data and compute.</span></p>
<p><span style="font-weight: 400;">The first project stemming from the coalition will be a base model codeveloped by Mistral AI and NVIDIA, with coalition members contributing data, evaluations and domain expertise to support the model’s post-training and continued development. It’ll be shared with the open ecosystem and underpin the next generation of </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400;">NVIDIA Nemotron</span></a><span style="font-weight: 400;"> models, which have been downloaded more than 45 million times from Hugging Face.</span></p>
<p><span style="font-weight: 400;">Several Nemotron Coalition members joined other leaders building and consuming open models for a back-to-back panel session at GTC.</span></p>
<p><span style="font-weight: 400;">The first panel featured LangChain cofounder and CEO Harrison Chase, Thinking Machines Lab founder and CEO Mira Murati, Perplexity CEO and cofounder Aravind Srinivas, Cursor CEO and cofounder Michael Truell, and Reflection AI cofounder and CEO Misha Laskin. The second included Mistral cofounder and CEO Arthur Mensch, OpenEvidence CEO Daniel Nadler, and Black Forest Labs cofounder and CEO Robin Rombach, alongside Hanna Hajishirzi, senior director of natural language processing at Ai2, and Anjney Midha, founder of AMP PBC.</span></p>
<p><iframe loading="lazy" title="NVIDIA GTC 2026 Open Models Panel Highlights with Jensen Huang" width="1200" height="675" src="https://www.youtube.com/embed/H26xnpL-ei0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400;">Five key points stood out from the conversation:  </span></p>
<p><b>1. AI agents are becoming highly capable coworkers. </b></p>
<p><span style="font-weight: 400;">“We’re soon going to see agents really be coworkers that can take on tasks that take many hours or many days, and do incredibly complex workloads,” said Cursor’s Truell. </span></p>
<p><b>2. AI is not a single model — it’s an orchestrated system. </b></p>
<p><span style="font-weight: 400;">“What you want is a multimodal, multi-model and multi-cloud orchestra,” said Perplexity’s Srinivas. “All you’ve got to do is delegate your task. You don’t have to worry about which model is good at what — it’s for the orchestration system to figure it out.” </span></p>
<p><b>3. Openness fuels innovation across the model ecosystem. </b></p>
<p><span style="font-weight: 400;">“Models are fundamental knowledge infrastructure, and fundamental knowledge infrastructure yearns for openness,” said Reflection AI’s Laskin. “There’s a flourishing ecosystem of powerful, closed models but equally capable open models that are going to be coming over the next couple years.” </span></p>
<p><span style="font-weight: 400;">This combination of open and proprietary models drives advancements at frontier AI companies as well as in academia. </span></p>
<p><span style="font-weight: 400;">“There’s a lot of study to be done, and it cannot be done completely in the large labs,” said Thinking Machines Lab’s Murati. “This is where openness can be very helpful…it advances the science of AI, the science of intelligence.” </span></p>
<figure id="attachment_91519" aria-describedby="caption-attachment-91519" style="width: 1680px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-91519 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-1680x945.jpg" alt="Panelists seated from left to right: NVIDIA founder and CEO Jensen Huang, LangChain cofounder and CEO Harrison Chase, Thinking Machines Lab founder and CEO Mira Murati, Perplexity CEO and cofounder Aravind Srinivas, Cursor CEO and cofounder Michael Truell, and Reflection AI cofounder and CEO Misha Laskin." width="1680" height="945" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-GM_02878_sized-400x225.jpg 400w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /><figcaption id="caption-attachment-91519" class="wp-caption-text">From left to right: NVIDIA founder and CEO Jensen Huang, LangChain cofounder and CEO Harrison Chase, Thinking Machines Lab founder and CEO Mira Murati, Perplexity CEO and cofounder Aravind Srinivas, Cursor CEO 
and cofounder Michael Truell, and Reflection AI cofounder and CEO Misha Laskin.</figcaption></figure>
<p><b>4. Open systems are trustworthy and accessible. </b></p>
<p><span style="font-weight: 400;">“At the end of the day, you’re delegating trust…and it’s much easier to trust an open system,” said AMP PBC’s Midha. </span></p>
<p><span style="font-weight: 400;">With a trusted system, developers can deploy long-running AI agents that can tackle virtually any task.  </span></p>
<p><span style="font-weight: 400;">“The models and the systems orchestrating the models are going to get much more capable,” said LangChain’s Chase. “And so you’ll be able to have personal productivity agents that can take on more complex tasks that run for longer.” </span></p>
<p><span style="font-weight: 400;">Open ecosystems also foster collaboration, helping democratize access to AI. </span></p>
<p><span style="font-weight: 400;">“We believe that open-weight models should be the basis for building all the AI software in the world,” said Mistral’s Mensch. “By having an open ecosystem of people that have aligned incentives to create assets that are going to be great for humanity, we can accelerate progress and make sure that everybody gets access in a fair way across the world to artificial intelligence.” </span></p>
<figure id="attachment_91752" aria-describedby="caption-attachment-91752" style="width: 1680px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-91752 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-1680x945.jpg" alt="Panelists seated from left to right: NVIDIA founder and CEO Jensen Huang; Mistral cofounder and CEO Arthur Mensch; OpenEvidence CEO Daniel Nadler; Hanna Hajishirzi, senior director of natural language processing at Ai2; Black Forest Labs cofounder and CEO Robin Rombach; and Anjney Midha, founder of AMP PBC. " width="1680" height="945" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983-400x225.jpg 400w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-DEB39983.jpg 1920w" sizes="auto, (max-width: 1680px) 100vw, 1680px" /><figcaption id="caption-attachment-91752" class="wp-caption-text">From left to right: NVIDIA founder and CEO Jensen Huang; Mistral cofounder and CEO Arthur Mensch; OpenEvidence CEO Daniel Nadler; Hanna Hajishirzi, senior director of natural language processing at Ai2; Black Forest Labs cofounder and CEO Robin Rombach; and Anjney Midha, 
founder of AMP PBC.</figcaption></figure>
<p><b>5. Society needs generalist and specialist AI to provide value. </b></p>
<p><span style="font-weight: 400;">“You have to sort of shape AI the way you shape society,” said OpenEvidence’s Nadler, describing how hospitals are organized into generalists working alongside world-class specialists. “I think the shape of AI is going to reflect that.”</span></p>
<p><span style="font-weight: 400;">Specialized AI is on the rise because it lets organizations combine open foundations with their own proprietary data. That unique data is where they unlock real, differentiated value across business and academia.</span></p>
<p><span style="font-weight: 400;">“These days you might argue that progress in AI is getting limited into a few closed labs, but it’s actually very important to the vast majority of academia and researchers, or nonprofit and other places who want to also be part of this progress,” said Ai2’s Hajishirzi. “And we’ve seen that all this progress already has happened by everything being open.”</span></p>
<p><span style="font-weight: 400;">“It’s actually one of the most exciting times to work on either the frontier models, the big models or more specialized open models that then get deployed on device,” said Black Forest Labs’ Rombach. “There’s so many different frontiers, and all of them should have some open component.”</span></p>
<figure id="attachment_91525" aria-describedby="caption-attachment-91525" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-91525" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-1680x945.jpg" alt="NVIDIA CEO Jensen Huang, sporting a custom leather jacket from Cursor, meets with open model ecosystem leaders before a panel discussion at GTC. " width="1200" height="675" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-1680x945.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-960x540.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-1280x720.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-1536x864.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-1290x725.jpg 1290w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-630x354.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-300x169.jpg 300w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-D1011131_sized-400x225.jpg 400w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-91525" class="wp-caption-text">NVIDIA CEO Jensen Huang, sporting a custom leather jacket from Cursor, meets with open model ecosystem leaders before a panel discussion at GTC.</figcaption></figure>
<p><i><span style="font-weight: 400;">Watch the </span></i><a target="_blank" href="https://www.youtube.com/watch?v=H26xnpL-ei0"><i><span style="font-weight: 400;">GTC session highlights on YouTube</span></i></a><i><span style="font-weight: 400;"> and start building with </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><i><span style="font-weight: 400;">NVIDIA Nemotron</span></i></a><i><span style="font-weight: 400;"> open models. </span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-SAG_0177-edit-169-scaled.jpg" type="image/jpeg" width="2048" height="1152">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtcsj26-S82480-open-models-SAG_0177-edit-169-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[The Future of AI Is Open and Proprietary]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Blowing Off Steam: How Power-Flexible AI Factories Can Stabilize the Global Energy Grid</title>
		<link>https://blogs.nvidia.com/blog/power-flexible-ai-factories-energy-grid/</link>
		
		<dc:creator><![CDATA[Josh Parker]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[AI Factory]]></category>
		<category><![CDATA[AI for Good]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Energy]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Inception]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90446</guid>

					<description><![CDATA[At the half-time whistle of the UEFA EURO 2020 round of 16 football match between England and Germany, millions of viewers stepped away from their screens in the U.K. to do the same thing at the same time — turn on their kettles. National Grid, which provides electricity for England and Wales, saw a demand [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>At the half-time whistle of the UEFA EURO 2020 round of 16 football match between England and Germany, millions of viewers stepped away from their screens in the U.K. to do the same thing at the same time — turn on their kettles.</p>
<p>National Grid, which provides electricity for England and Wales, saw <a target="_blank" href="https://www.neso.energy/news/euro-2020-and-tv-pick-effect">a demand spike of about 1 gigawatt</a> — an increase equivalent to the average output of a standard nuclear reactor — in a matter of minutes from this countrywide tea break. Grid operators must carefully manage these demand peaks to keep the system stable, and this could become even more difficult as the grid continues to add large new customers.</p>
<p>But what if those new customers could actually be flexible and relieve the grid during periods of peak strain?</p>
<p>In a recent <a target="_blank" href="https://www.ngpartners.com/stories/emerald-ai-whitepaper">white paper</a>, Emerald AI — in collaboration with NVIDIA, EPRI, National Grid and Nebius — showcased how “power-flexible” AI factories can autonomously adjust their power usage during peak demand.</p>
<p>For AI factories, this could unlock significantly faster grid connections without waiting for massive, years-long infrastructure upgrades. For the public, it helps limit grid build-outs by curbing the peak load that the system needs to serve, helping keep electricity rates affordable for everyday bill payers.</p>
<h2><b>Boil the Kettle, Balance the Grid </b></h2>
<p>After successful proof-of-concept trials at AI factories in Arizona, Virginia and Illinois, Emerald AI took its flexible grid solution across the pond last December, bringing the <a target="_blank" href="https://www.emeraldai.co/">Emerald AI Conductor Platform</a> to Nebius’ new AI factory in London, built on NVIDIA infrastructure — among the first of its kind in the U.K.</p>
<p>At the AI factory, the research team ran production-grade AI workloads on a cluster of <a target="_blank" href="https://developer.nvidia.com/blog/inside-nvidia-blackwell-ultra-the-chip-powering-the-ai-factory-era/">96 NVIDIA Blackwell Ultra GPUs</a> connected through the <a target="_blank" href="https://www.nvidia.com/en-us/networking/products/infiniband/quantum-x800/">NVIDIA Quantum-X800 InfiniBand platform</a>. The team used the <a target="_blank" href="https://developer.nvidia.com/system-management-interface">NVIDIA System Management Interface</a> to retrieve consistent, seconds-level GPU power telemetry.</p>
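<p>As a rough illustration of the telemetry loop described above, the sketch below polls per-GPU power draw and sums it into a cluster-level figure. It assumes the <code>nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits</code> query, which prints one wattage per line; the sample string and wattages are illustrative, not values from the white paper.</p>

```python
import subprocess

def read_gpu_power(sample_csv=None):
    """Return per-GPU power draw in watts.

    With no argument, shells out to
    `nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits`,
    which prints one wattage per line. Passing `sample_csv` lets the
    parser run without GPU hardware (illustrative values below).
    """
    if sample_csv is None:
        sample_csv = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    return [float(line) for line in sample_csv.splitlines() if line.strip()]

def cluster_power_watts(per_gpu):
    # Total draw across the cluster -- the figure a control plane
    # would compare against a grid power target.
    return sum(per_gpu)

# Captured-style sample output for three GPUs (watts, illustrative).
sample = "412.3\n398.7\n405.1\n"
print(cluster_power_watts(read_gpu_power(sample)))
```

<p>In the demonstration the telemetry arrived at seconds-level cadence; a real loop would poll on a timer and feed the cluster total into the orchestration layer.</p>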
<p>EPRI and National Grid simulated stress scenarios on the power grid — from lightning strikes to long periods of low wind power supply — and sent signals instructing the AI factory, with the help of the Conductor Platform, to temporarily reduce its power use to relieve grid strain.</p>
<p>One of these scenarios was the “TV pickup” phenomenon, where that very same Euro 2020 football match’s energy surge was reenacted.</p>
<p>As millions of simulated tea kettles were about to be turned on, the AI cluster ramped down its power use — successfully acting as a shock absorber for the abrupt power surge without disrupting the highest-priority AI workloads running on the cluster.</p>
<div style="width: 1200px;" class="wp-video"><video class="wp-video-shortcode" id="video-90446-8" width="1200" height="675" loop autoplay preload="auto" controls="controls"><source type="video/mp4" src="https://blogs.nvidia.com/wp-content/uploads/2026/02/Grid-Responsive-AI-Infrastructure-Chart_v4.mp4?_=8" /><a href="https://blogs.nvidia.com/wp-content/uploads/2026/02/Grid-Responsive-AI-Infrastructure-Chart_v4.mp4">https://blogs.nvidia.com/wp-content/uploads/2026/02/Grid-Responsive-AI-Infrastructure-Chart_v4.mp4</a></video></div>
<p>In practice, this means the grid can manage sudden demand swings using existing capacity more efficiently, reducing the need to overbuild permanent infrastructure to meet worst-case peaks and helping keep rates affordable for everyday consumers.</p>
<p>“With this technology, AI factories become friendly and helpful grid assets,” said Varun Sivaram, founder and CEO of Emerald AI. “Simultaneously, the AI factories get connected much faster to the grid because they can tap into existing power grids.”</p>
<h2><b>Stress Relievers, Not Query Crushers </b></h2>
<p>In the Nebius AI factory demonstration, despite the quick ramp-down of energy to power the national tea break, Emerald AI Conductor ensured that the simulated high-priority AI workloads performed at peak throughput, while more flexible jobs were slowed down temporarily.</p>
<p>Emerald AI recorded 100% alignment with over 200 power targets that EPRI and National Grid instructed the AI cluster to follow for this experiment.</p>
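<p>The prioritization described above — firm jobs keep full throughput while flexible jobs absorb the cut — can be sketched as a simple proportional-throttling rule. This is a hypothetical illustration of the idea, not the Emerald AI Conductor algorithm; the job names and wattages are made up.</p>

```python
def allocate_power(jobs, target_watts):
    """Split a grid power target across jobs.

    `jobs` maps job name -> (full_speed_watts, is_flexible).
    High-priority (inflexible) jobs always keep full power; flexible
    jobs are scaled uniformly into whatever headroom remains.
    Hypothetical sketch, not Emerald AI's actual controller.
    """
    firm = {n: w for n, (w, flex) in jobs.items() if not flex}
    flexible = {n: w for n, (w, flex) in jobs.items() if flex}
    headroom = max(target_watts - sum(firm.values()), 0.0)
    demand = sum(flexible.values())
    scale = min(headroom / demand, 1.0) if demand else 1.0
    allocation = dict(firm)
    allocation.update({n: w * scale for n, w in flexible.items()})
    return allocation

jobs = {
    "inference": (400.0, False),   # high priority: never throttled
    "training": (500.0, True),     # flexible: may slow temporarily
    "batch-eval": (300.0, True),
}
# Grid asks the factory to drop to 900 W: flexible jobs share the cut.
print(allocate_power(jobs, target_watts=900.0))
```

<p>With a 900 W target, the inflexible job keeps its 400 W and the two flexible jobs scale down to share the remaining 500 W, so the cluster lands exactly on the grid’s request.</p>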
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-90462 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/02/corporate-self-service-infographic-templates-final-1.jpg" alt="High-level white paper stats including 100% compliance across 200+ power targets, 22 distinct real-time dispatch events, and 30% slashed power in under 40 seconds. " width="1440" height="576" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/02/corporate-self-service-infographic-templates-final-1.jpg 1440w, https://blogs.nvidia.com/wp-content/uploads/2026/02/corporate-self-service-infographic-templates-final-1-960x384.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/02/corporate-self-service-infographic-templates-final-1-1280x512.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/02/corporate-self-service-infographic-templates-final-1-630x252.jpg 630w" sizes="auto, (max-width: 1440px) 100vw, 1440px" /></p>
<p>“We did tests that go beyond the ones that have been done so far in the U.S. because we tested not just the GPUs, but also the CPUs and everything that sits around it — as well as the total power consumption of the IT equipment,” said Steve Smith, group chief strategy officer of National Grid. “We’ve proved the value that this technology brings.”</p>
<h2><b>Scaling London’s Grid at Super Speed </b></h2>
<p>London’s power grid is constantly working to meet the ever-growing energy needs of its citizens. Its grid operators — including National Grid — face a key bottleneck: constraints in infrastructure upgrades to connect large customers.</p>
<p>Plugging flexible AI factories into the grid with solutions like Emerald AI’s Conductor Platform won’t just help to stabilize energy spikes — it can optimize the use of existing grid infrastructure to propel new industry talent and economic opportunities in the U.K.</p>
<p>“We have enormous skills and potential in AI,” said Smith. “We’re never going to be on the scale of the U.S. in terms of data centers, but relative to the size of the country, we could be — and we’re certainly seeing that interest from many of the hyperscalers. So, it gives us the opportunity to play our part as National Grid in helping unlock that economic growth for the country.”</p>
<p>Four demonstrations in, Emerald AI and NVIDIA are gearing up to put power-flexible AI factories into real-world deployment with the Aurora AI Factory in Virginia, set to open this year.</p>
<p>Learn more about the first <a target="_blank" href="https://www.emeraldai.co/blog/launching-the-first-power-flexible-ai-factory-with-nvidia">power-flexible</a> AI factory powered by <a href="https://blogs.nvidia.com/blog/omniverse-dsx-blueprint/">NVIDIA GPUs</a>.</p>
]]></content:encoded>
					
		
		<enclosure url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Grid-Responsive-AI-Infrastructure-Chart_v4.mp4" length="2079062" type="video/mp4" />

				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Copy-of-Blog-1280x680.pptx-2.jpg" type="image/jpeg" width="1278" height="679">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/02/Copy-of-Blog-1280x680.pptx-2-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Blowing Off Steam: How Power-Flexible AI Factories Can Stabilize the Global Energy Grid]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Advancing Open Source AI, NVIDIA Donates Dynamic Resource Allocation Driver for GPUs to Kubernetes Community</title>
		<link>https://blogs.nvidia.com/blog/nvidia-at-kubecon-2026/</link>
		
		<dc:creator><![CDATA[Justin Boitano]]></dc:creator>
		<pubDate>Tue, 24 Mar 2026 08:00:12 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[NVIDIA in Europe]]></category>
		<category><![CDATA[NVLink]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91716</guid>

					<description><![CDATA[Artificial intelligence has rapidly emerged as one of the most critical workloads in modern computing. For the vast majority of enterprises, this workload runs on Kubernetes, an open source platform that automates the deployment, scaling and management of containerized applications. To help the global developer community manage high-performance AI infrastructure with greater transparency and efficiency, [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">Artificial intelligence has rapidly emerged as one of the most critical workloads in modern computing.</span></p>
<p><span style="font-weight: 400;">For the vast majority of enterprises, this workload runs on Kubernetes, an open source platform that automates the deployment, scaling and management of containerized applications.</span></p>
<p><span style="font-weight: 400;">To help the global developer community manage high-performance AI infrastructure with greater transparency and efficiency, NVIDIA is donating a critical piece of software — the </span><a target="_blank" href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/dra-intro-install.html"><span style="font-weight: 400;">NVIDIA Dynamic Resource Allocation (DRA) Driver for GPUs</span></a><span style="font-weight: 400;"> — to the Cloud Native Computing Foundation (CNCF), a vendor-neutral organization dedicated to fostering and sustaining the cloud-native ecosystem. </span></p>
<p><span style="font-weight: 400;">Announced today at KubeCon Europe, CNCF’s flagship conference running this week in Amsterdam, the donation moves the driver from vendor governance to full community ownership under the Kubernetes project. This open environment encourages a wider circle of experts to contribute ideas, accelerate innovation and help ensure the technology stays aligned with the modern cloud landscape. </span></p>
<p><span style="font-weight: 400;">“NVIDIA’s deep collaboration with the Kubernetes and CNCF community to upstream the NVIDIA DRA Driver for GPUs marks a major milestone for open source Kubernetes and AI infrastructure,” </span><span style="font-weight: 400;">said Chris Aniszczyk, chief technology officer of CNCF. “</span><span style="font-weight: 400;">By aligning its hardware innovations with upstream Kubernetes and AI conformance efforts, NVIDIA is making high-performance GPU orchestration seamless and accessible to all.”</span></p>
<p><span style="font-weight: 400;">In addition, in collaboration with the CNCF’s Confidential Containers community, NVIDIA has introduced GPU support for Kata Containers, lightweight virtual machines that act like containers. This extends hardware acceleration into a stronger isolation boundary, separating workloads for increased security and enabling AI workloads to run with enhanced protection, so organizations can more easily implement confidential computing to safeguard data.</span></p>
<h2><b>Simplifying AI Infrastructure</b></h2>
<p><span style="font-weight: 400;">Historically, managing the powerful GPUs that fuel AI within data centers required significant effort. </span></p>
<p><span style="font-weight: 400;">This contribution is designed to make high-performance computing more accessible. Key benefits for developers include:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>Improved Efficiency:</b><span style="font-weight: 400;"> The driver allows for smarter sharing of GPU resources, delivering effective use of computing power, with support of </span><a target="_blank" href="https://docs.nvidia.com/deploy/mps/index.html"><span style="font-weight: 400;">NVIDIA Multi-Process Service</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/technologies/multi-instance-gpu/"><span style="font-weight: 400;">NVIDIA Multi-Instance GPU</span></a><span style="font-weight: 400;"> technologies.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Massive Scale:</b><span style="font-weight: 400;"> It provides native support for connecting systems together, including with </span><a target="_blank" href="https://developer.nvidia.com/blog/enabling-multi-node-nvlink-on-kubernetes-for-gb200-and-beyond/"><span style="font-weight: 400;">NVIDIA Multi-Node NVLink</span></a><span style="font-weight: 400;"> interconnect technology. This is essential for training massive AI models on </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/"><span style="font-weight: 400;">NVIDIA Grace Blackwell</span></a><span style="font-weight: 400;"> systems and next-generation AI infrastructure.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Flexibility:</b><span style="font-weight: 400;"> Developers can dynamically reconfigure their hardware to suit their needs, changing how resources are allocated on the fly.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Precision:</b><span style="font-weight: 400;"> The software supports fine-tuned requests, allowing users to ask for the specific computing power, memory settings or interconnect arrangement needed for their applications.</span></li>
</ul>
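<p><span style="font-weight: 400;">To make the "fine-tuned requests" idea concrete, the sketch below shows roughly what a Kubernetes Dynamic Resource Allocation (DRA) claim for a GPU looks like. It is an illustrative fragment only — the DRA API has evolved across Kubernetes releases, and exact field names, API versions and the device class name should be taken from the NVIDIA DRA Driver documentation rather than from this example.</span></p>
<pre><code># Illustrative sketch of a DRA ResourceClaim requesting one NVIDIA GPU.
# API version and schema vary by Kubernetes release; verify against the
# NVIDIA DRA Driver docs before use.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com   # device class published by the NVIDIA DRA driver
</code></pre>
<p><span style="font-weight: 400;">A pod would then reference this claim by name in its own spec, letting the scheduler and driver negotiate which physical GPU, MIG partition or interconnect arrangement satisfies the request.</span></p>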
<h2><b>A Collaborative, Industry-Wide Effort</b></h2>
<p><span style="font-weight: 400;">NVIDIA is collaborating with industry leaders — including</span><span style="font-weight: 400;"> Amazon Web Services,</span> <a target="_blank" href="https://blogs.vmware.com/cloud-foundation/2026/03/23/strengthening-the-cloud-native-ecosystem-through-upstream-collaboration/"><span style="font-weight: 400;">Broadcom</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://ubuntu.com/blog/canonical-nvidia-kubecon-2026"><span style="font-weight: 400;">Canonical</span></a><span style="font-weight: 400;">,</span> <a target="_blank" href="https://cloud.google.com/blog/products/containers-kubernetes/gke-and-oss-innovation-at-kubecon-eu-2026"><span style="font-weight: 400;">Google Cloud</span></a><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Microsoft</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Nutanix</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Red Hat</span><span style="font-weight: 400;"> and </span><a target="_blank" href="http://suse.com/c/the-power-of-community-for-enterprise-ai"><span style="font-weight: 400;">SUSE</span></a><span style="font-weight: 400;"> — to drive these features forward for the benefit of the entire cloud-native ecosystem.</span></p>
<p><span style="font-weight: 400;">“Open source will be at the core of every successful enterprise AI strategy, bringing standardization to the high-performance infrastructure components that fuel production AI workloads,”</span><span style="font-weight: 400;"> said Chris Wright, chief technology officer and senior vice president of global engineering at Red Hat</span><span style="font-weight: 400;">. “NVIDIA’s donation of the NVIDIA DRA Driver for GPUs helps to cement the role of open source in AI’s evolution, and we look forward to collaborating with NVIDIA and the broader community within the Kubernetes ecosystem.”</span></p>
<p><span style="font-weight: 400;">“Open source software and the communities that sustain it are a cornerstone of the infrastructure used for scientific computing and research,”</span><span style="font-weight: 400;"> said Ricardo Rocha, lead of platforms infrastructure at CERN</span><span style="font-weight: 400;">. “For organizations like CERN, where efficiently analyzing petabytes of data is essential to discovery, community-driven innovation helps accelerate the pace of science. NVIDIA’s donation of the DRA Driver strengthens the ecosystem researchers rely on to process data across both traditional scientific computing and emerging machine learning workloads.”</span></p>
<h2><b>Expanding the Open Source Horizon</b></h2>
<p><span style="font-weight: 400;">This donation is just part of NVIDIA’s broader initiatives to support the open source community. For example, </span><a target="_blank" href="https://github.com/NVIDIA/NVSentinel"><span style="font-weight: 400;">NVSentinel</span></a><span style="font-weight: 400;"> — a system for GPU fault remediation — and </span><a target="_blank" href="https://github.com/NVIDIA/aicr"><span style="font-weight: 400;">AI Cluster Runtime</span></a><span style="font-weight: 400;">, an agentic AI framework, were announced at GTC last week.</span></p>
<p><span style="font-weight: 400;">In addition, NVIDIA </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-nemoclaw"><span style="font-weight: 400;">announced at GTC new open source projects</span></a><span style="font-weight: 400;"> including the </span><a target="_blank" href="https://github.com/NVIDIA/NemoClaw"><span style="font-weight: 400;">NVIDIA NemoClaw</span></a><span style="font-weight: 400;"> reference stack and </span><a target="_blank" href="https://github.com/NVIDIA/OpenShell"><span style="font-weight: 400;">NVIDIA OpenShell</span></a><span style="font-weight: 400;"> runtime for securely running autonomous agents. OpenShell provides fine-grained programmable policy security and privacy controls, and natively integrates with Linux, eBPF and Kubernetes.</span></p>
<p><span style="font-weight: 400;">NVIDIA also today announced that its high-performance AI workload scheduler, the KAI Scheduler, has been onboarded as a CNCF Sandbox project — a key step toward fostering broader collaboration and ensuring the technology evolves alongside the needs of the wider cloud-native ecosystem. Developers and organizations can </span><a target="_blank" href="https://github.com/kai-scheduler/KAI-Scheduler"><span style="font-weight: 400;">use and contribute to the KAI Scheduler today</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">NVIDIA remains committed to actively maintaining and contributing to Kubernetes and CNCF projects to help meet the rigorous demands of enterprise AI customers. </span></p>
<p><span style="font-weight: 400;">In addition, following the release of </span><a target="_blank" href="https://github.com/ai-dynamo/dynamo"><span style="font-weight: 400;">NVIDIA Dynamo</span></a><span style="font-weight: 400;"> 1.0</span><span style="font-weight: 400;">, NVIDIA is expanding the Dynamo ecosystem with </span><a target="_blank" href="https://github.com/ai-dynamo/grove"><span style="font-weight: 400;">Grove</span></a><span style="font-weight: 400;">, an open source Kubernetes application programming interface for orchestrating AI workloads on GPU clusters. Grove, which enables developers to express complex inference systems in a single declarative resource, is being integrated with the llm-d inference stack for wider adoption in the Kubernetes community. </span></p>
<p><i><span style="font-weight: 400;">Developers and organizations can begin using and contributing to the </span></i><a target="_blank" href="https://github.com/NVIDIA/k8s-dra-driver-gpu"><i><span style="font-weight: 400;">NVIDIA DRA Driver today</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
<p><i><span style="font-weight: 400;">Visit the </span></i><a target="_blank" href="https://www.nvidia.com/en-eu/events/kubecon-cloudnativecon-europe/"><i><span style="font-weight: 400;">NVIDIA booth at KubeCon</span></i></a><i><span style="font-weight: 400;"> to see live demos of this technology in action.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/kubecon-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/kubecon-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Advancing Open Source AI, NVIDIA Donates Dynamic Resource Allocation Driver for GPUs to Kubernetes Community]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>How Autonomous AI Agents Become Secure by Design With NVIDIA OpenShell</title>
		<link>https://blogs.nvidia.com/blog/secure-autonomous-ai-agents-openshell/</link>
		
		<dc:creator><![CDATA[Ali Golshan]]></dc:creator>
		<pubDate>Mon, 23 Mar 2026 15:00:03 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Cybersecurity]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91699</guid>

					<description><![CDATA[Autonomous agents mark a new inflection point in AI. Systems are no longer limited to generating responses or reasoning through tasks. They can take action: Agents can read files, use tools, write and run code, and execute workflows across enterprise systems, all while expanding their own capabilities.  Application-layer risk grows exponentially when agents continuously improve [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">Autonomous agents mark a new inflection point in AI. Systems are no longer limited to generating responses or reasoning through tasks. They can take action: Agents can read files, use tools, write and run code, and execute workflows across enterprise systems, all while expanding their own capabilities. </span></p>
<p><span style="font-weight: 400;">Application-layer risk grows exponentially when agents continuously improve and evolve. The </span><a target="_blank" href="https://developer.nvidia.com/blog/run-autonomous-self-evolving-agents-more-safely-with-nvidia-openshell/"><span style="font-weight: 400;">NVIDIA OpenShell</span></a><span style="font-weight: 400;"> runtime is being built to address this. </span></p>
<p><span style="font-weight: 400;">Part of </span><a target="_blank" href="https://nvidianews.nvidia.com/news/ai-agents"><span style="font-weight: 400;">NVIDIA Agent Toolkit</span></a><span style="font-weight: 400;">, OpenShell is an open source, secure-by-design runtime for running autonomous agents such as claws. It works by ensuring each agent runs inside its own sandbox, separating application-layer operations from infrastructure-layer policy enforcement.</span></p>
<p><span style="font-weight: 400;">This means security policies are out of reach of the agent — they’re applied at the system level. Instead of relying on behavioral prompts, OpenShell enforces constraints on the environment the agent runs in — meaning the agent cannot override policies, or leak credentials or private data, even if compromised. </span></p>
<p><span style="font-weight: 400;">With OpenShell, enterprises can separate agent behavior, policy definition and policy enforcement. Organizations gain a single, unified policy layer to define and monitor how autonomous systems operate. Coding agents, research assistants and agentic workflows all run under the same runtime policies regardless of host operating system, simplifying compliance and operational oversight.</span></p>
<p><span style="font-weight: 400;">This is the “browser tab” model applied to agents: Sessions are isolated, resources are controlled and permissions are verified by the runtime before any action takes place.</span></p>
<p><span style="font-weight: 400;">Securing autonomous systems requires an integrated ecosystem. OpenShell is designed to add privacy and security controls for AI agents. NVIDIA is collaborating with security partners, including </span><a target="_blank" href="https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2026/m03/cisco-reimagines-security-for-the-agentic-workforce.html"><span style="font-weight: 400;">Cisco</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://www.crowdstrike.com/en-us/press-releases/crowdstrike-nvidia-unveil-secure-by-design-ai-blueprint-for-ai-agents/"><span style="font-weight: 400;">CrowdStrike</span></a><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Google Cloud</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Microsoft Security</span> <span style="font-weight: 400;">and </span><a target="_blank" href="https://www.trendmicro.com/en_us/research/26/c/securing-autonomous-ai-agents-with-trendai-and-nvidia-openshell.html"><span style="font-weight: 400;">TrendAI</span></a><span style="font-weight: 400;">,</span><span style="font-weight: 400;"> to align runtime policy management and enforcement for agents across the enterprise stack. </span></p>
<h2><b>OpenShell Provides an Enterprise-Grade Sandbox for Building Personal AI Assistants</b></h2>
<p><a target="_blank" href="https://www.nvidia.com/en-us/ai/nemoclaw/"><span style="font-weight: 400;">NVIDIA NemoClaw</span></a><span style="font-weight: 400;"> is an open source reference stack that simplifies installing OpenClaw always-on assistants with the OpenShell runtime and </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/"><span style="font-weight: 400;">NVIDIA Nemotron</span></a><span style="font-weight: 400;"> models in a single command. </span></p>
<p><span style="font-weight: 400;">NemoClaw provides enthusiasts with an open reference for building self-evolving personal AI agents, or claws. Since security needs vary, NemoClaw provides a reference example for policy-based privacy and security guardrails to give users more control over their agents’ behavior and data handling. Users can customize it for their specific use cases — much like adjusting security preferences for applications on a phone. </span></p>
<p><span style="font-weight: 400;">NemoClaw includes an example configuration of OpenShell that defines how the agent should interact with systems. NemoClaw uses open source models like NVIDIA Nemotron alongside OpenShell. </span></p>
<p><span style="font-weight: 400;">This enables self-evolving claws to run more securely in clouds, on premises or on personal computers, including </span><a target="_blank" href="https://www.nvidia.com/en-us/ai-on-rtx/"><span style="font-weight: 400;">NVIDIA GeForce RTX PCs and laptops</span></a><span style="font-weight: 400;"> or </span><a target="_blank" href="https://www.nvidia.com/en-us/products/workstations/"><span style="font-weight: 400;">NVIDIA RTX PRO-powered workstations</span></a><span style="font-weight: 400;">, as well as </span><a href="https://blogs.nvidia.com/blog/gtc-2026-news/#dgx-spark-station"><span style="font-weight: 400;">NVIDIA DGX Station and NVIDIA DGX Spark</span></a><span style="font-weight: 400;"> AI supercomputers</span><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">Both OpenShell and NemoClaw are in early preview. NVIDIA is building in the open with the community and its partners to enable enterprises to scale self-evolving, long-running autonomous agents safely, confidently and in compliance with global security standards.</span></p>
<p><span style="font-weight: 400;">Get started with </span><a target="_blank" href="https://build.nvidia.com/openshell"><span style="font-weight: 400;">NVIDIA OpenShell</span></a><span style="font-weight: 400;"> and launch a ready‑to‑use environment on </span><a target="_blank" href="https://brev.nvidia.com/launchable/deploy/now?launchableID=env-3Azt0aYgVNFEuz7opyx3gscmowS"><span style="font-weight: 400;">NVIDIA Brev</span></a><span style="font-weight: 400;">, or explore the open source project on </span><a target="_blank" href="https://github.com/nvidia/openshell"><span style="font-weight: 400;">GitHub</span></a><span style="font-weight: 400;">.</span></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/agentic-ai-visual-nemoclaw-opie-r2-scaled.png" type="image/png" width="2048" height="1143">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/agentic-ai-visual-nemoclaw-opie-r2-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[How Autonomous AI Agents Become Secure by Design With NVIDIA OpenShell]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA GTC 2026: Live Updates on What’s Next in AI</title>
		<link>https://blogs.nvidia.com/blog/gtc-2026-news/</link>
		
		<dc:creator><![CDATA[NVIDIA Writers]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 00:15:17 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91623&#038;preview=true&#038;preview_id=91623</guid>

					<description><![CDATA[Rolling coverage from San Jose, including NVIDIA CEO Jensen Huang’s keynote, news highlights, live demos and on‑the‑ground color through March 19.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/26gtc-blog-1920x1080-DEB35146.jpeg" type="image/jpeg" width="1921" height="1082">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/26gtc-blog-1920x1080-DEB35146-842x450.jpeg" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA GTC 2026: Live Updates on What’s Next in AI]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Smooth Moves: 90 Frames-Per-Second Virtual Reality Arrives on GeForce NOW</title>
		<link>https://blogs.nvidia.com/blog/geforce-now-thursday-virtual-reality-update/</link>
		
		<dc:creator><![CDATA[GeForce NOW Community]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 13:00:09 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Cloud Gaming]]></category>
		<category><![CDATA[GeForce NOW]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91463</guid>

					<description><![CDATA[It’s a double feature on GFN Thursday. This week, GeForce NOW offers smoother sights in virtual reality (VR) and a sprawling new land to conquer. Streaming at 90 frames per second (fps) comes to supported VR headsets. And Crimson Desert, which recently surpassed 3 million wishlist additions on Steam, debuts in the cloud with GeForce [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400">It’s a double feature on GFN Thursday. This week, </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/"><span style="font-weight: 400">GeForce NOW</span></a><span style="font-weight: 400"> offers smoother sights in virtual reality (VR) and a sprawling new land to conquer.</span></p>
<p><span style="font-weight: 400">Streaming at 90 frames per second (fps) comes to supported VR headsets.</span></p>
<p><span style="font-weight: 400">And </span><i><span style="font-weight: 400">Crimson Desert</span></i><span style="font-weight: 400">, which recently surpassed 3 million wishlist additions on Steam, debuts in the cloud with GeForce RTX 5080‑class power. Catch it as part of </span><span style="font-weight: 400">four </span><span style="font-weight: 400">new games on GeForce NOW.</span></p>
<h2><b>VR, but Make It 90 FPS</b></h2>
<figure id="attachment_91468" aria-describedby="caption-attachment-91468" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-91468" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-ecosystem-ces26-blog-2048x1024-no-branding-1680x840.jpg" alt="Updated VR gaming on GeForce NOW" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-ecosystem-ces26-blog-2048x1024-no-branding-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-ecosystem-ces26-blog-2048x1024-no-branding-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-ecosystem-ces26-blog-2048x1024-no-branding-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-ecosystem-ces26-blog-2048x1024-no-branding-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-ecosystem-ces26-blog-2048x1024-no-branding-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-ecosystem-ces26-blog-2048x1024-no-branding.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-91468" class="wp-caption-text"><em>Power levels rising … to 90 fps.</em></figcaption></figure>
<p><span style="font-weight: 400">VR in the cloud is getting a smoothness upgrade. GeForce NOW is boosting support for Apple Vision Pro, Meta Quest and Pico devices to stream at up to 90 fps for Ultimate members, bringing crisper motion and more responsive gameplay straight from the cloud. The app update begins rolling out to members today.</span></p>
<p><span style="font-weight: 400">Members can transform the space around them into a personal gaming theater with GeForce NOW, playing favorite PC titles on a massive virtual screen. With support for up to 90 fps for Ultimate members, gameplay feels smoother, movement more natural and action more comfortable. All premium members can continue to enhance their experiences with </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/rtx/"><span style="font-weight: 400">NVIDIA RTX</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/dlss/"><span style="font-weight: 400">DLSS</span></a><span style="font-weight: 400"> technologies in supported titles.</span></p>
<p><span style="font-weight: 400">Just fire up GeForce NOW on </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce-now/download/#:~:text=Learn%20more.-,Streaming%20Devices,-Apple%20Vision%20Pro"><span style="font-weight: 400">supported VR platforms</span></a><span style="font-weight: 400">, step into a virtual big screen and let the cloud handle the heavy lifting. From chill sessions in a virtual theater to high‑octane firefights, 90 fps helps keep every moment looking sharp and feeling responsive.</span></p>
<h2><b>A Storm Brews</b></h2>
<figure id="attachment_91471" aria-describedby="caption-attachment-91471" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-91471" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Crimson_Desert-1680x840.jpg" alt="Crimson Desert on GeForce NOW" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Crimson_Desert-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Crimson_Desert-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Crimson_Desert-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Crimson_Desert-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Crimson_Desert-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-Crimson_Desert.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-91471" class="wp-caption-text"><em>Into the cloud with “Crimson Desert.”</em></figcaption></figure>
<p><span style="font-weight: 400">Pearl Abyss’ </span><i><span style="font-weight: 400">Crimson Desert</span></i><span style="font-weight: 400">, a stunning, open‑world action adventure set in a war‑torn fantasy land, pairs large‑scale exploration with cinematic storytelling and hard‑hitting, combo‑focused combat. One moment is a quiet ride across windswept plains, the next is a chaotic clash against towering foes with mystical power effects lighting up the battlefield.</span></p>
<p><span style="font-weight: 400">Follow Kliff, a mercenary whose close‑knit group is destroyed in a sudden ambush. The journey centers on exploring a massive, detailed world, taking on story missions, riding mounts, finding supplies and facing dangerous enemies as Kliff works to bring his scattered companions back together.</span></p>
<p><span style="font-weight: 400">GeForce NOW brings GeForce RTX 5080-class power to the game across supported devices, delivering high settings, smooth action and lush visuals without the need to worry about system specs. Members can jump into its cinematic battles and sweeping landscapes on the device of their choice, from low‑spec laptops to compatible TVs and mobile devices.</span></p>
<h2><b>Joining the Family</b></h2>
<figure id="attachment_91474" aria-describedby="caption-attachment-91474" style="width: 1200px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-91474" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-World_of_Tanks_Mafia-1680x840.jpg" alt="World of Tanks Battle Pass Special: Mafia on GeForce NOW" width="1200" height="600" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-World_of_Tanks_Mafia-1680x840.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-World_of_Tanks_Mafia-960x480.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-World_of_Tanks_Mafia-1280x640.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-World_of_Tanks_Mafia-1536x768.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-World_of_Tanks_Mafia-630x315.jpg 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/GFN_Thursday-World_of_Tanks_Mafia.jpg 2048w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /><figcaption id="caption-attachment-91474" class="wp-caption-text"><em>It’s time to make an offer no tank can refuse.</em></figcaption></figure>
<p><i><span style="font-weight: 400">World of Tanks</span></i><span style="font-weight: 400"> is rolling out the red carpet for </span><i><span style="font-weight: 400">Battle Pass Special: Mafia</span></i><span style="font-weight: 400"> from March 19-29, welcoming the respectable family from the fine city of Lost Heaven and bringing their signature mix of loyalty, ambition and firepower to the battlefield. Recruit iconic characters from 2K’s original </span><i><span style="font-weight: 400">Mafia </span></i><span style="font-weight: 400">game, unlock stylish 2D looks like Little Italy and Lost Heaven Noir, and command the Italian Predatore tank to make sure every deal ends with a bang. Respect is earned — and it’s best earned playing on GeForce NOW, where the action hits fast and smooth — powered by the cloud.</span></p>
<p><span style="font-weight: 400">In addition, members can look for the following:</span></p>
<ul>
<li><i><span style="font-weight: 400">Everwind </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/2253100?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 17, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Retro Rewind &#8211; Video Store Simulator </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3552140?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 17)</span></li>
<li><i><span style="font-weight: 400">Crimson Desert </span></i><span style="font-weight: 400">(New release on </span><a target="_blank" href="https://store.steampowered.com/app/3321460?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">, March 19, GeForce RTX 5080-ready)</span></li>
<li><i><span style="font-weight: 400">Fallout 3 </span></i><span style="font-weight: 400">(</span><a target="_blank" href="https://store.steampowered.com/app/22300?utm_source=nvidia&amp;utm_campaign=geforce_now"><span style="font-weight: 400">Steam</span></a><span style="font-weight: 400">)</span></li>
</ul>
<p><i><span style="font-weight: 400">Cyberpunk 2077, Forza Motorsport, Icarus </span></i><span style="font-weight: 400">and </span><i><span style="font-weight: 400">ARK: Survival Ascended</span></i><span style="font-weight: 400"> — currently available to all GeForce NOW members — will no longer be available on the free tier starting Wednesday, April 1, as the basic rig type does not meet these games’ updated minimum system requirements. Premium members can continue to play these games uninterrupted.</span></p>
<p><span style="font-weight: 400">For some icing on the cake, the GeForce NOW Reddit community is running a giveaway for a VR headset. Find </span><a target="_blank" href="https://x.com/NVIDIAGFN/status/2032507547530272860?s=20"><span style="font-weight: 400">details on how to enter</span></a><span style="font-weight: 400">. Plus, check out GeForce NOW creator Airie Summer’s gaming giveaway on </span><a target="_blank" href="https://x.com/airiesummer/status/2032091266989564143?s=20"><span style="font-weight: 400">X</span></a><span style="font-weight: 400">.</span></p>
<blockquote class="twitter-tweet" data-width="550" data-dnt="true">
<p lang="en" dir="ltr">much love to nvidia for the community giveaways ♡<a target="_blank" href="https://twitter.com/LogitechG?ref_src=twsrc%5Etfw">@logitechg</a> pro grade performance gears <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4ab.png" alt="💫" class="wp-smiley" style="height: 1em; max-height: 1em;" /><a target="_blank" href="https://t.co/lRpSTKhQHJ">https://t.co/lRpSTKhQHJ</a> <a target="_blank" href="https://twitter.com/NVIDIAGFN?ref_src=twsrc%5Etfw">@nvidiagfn</a></p>
<p>entries open @ channel during the following times xoxo<br />march 12, 19, &amp; 26 from 09:30 a.m. to 05:30 p.m. et<a target="_blank" href="https://t.co/q67SOJNpbO">https://t.co/q67SOJNpbO</a> <a target="_blank" href="https://twitter.com/hashtag/nvidiapartner?src=hash&amp;ref_src=twsrc%5Etfw">#nvidiapartner</a> <a target="_blank" href="https://t.co/uQ6L1rcVcj">pic.twitter.com/uQ6L1rcVcj</a></p>
<p>&mdash; airie (@airiesummer) <a target="_blank" href="https://twitter.com/airiesummer/status/2032091266989564143?ref_src=twsrc%5Etfw">March 12, 2026</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-3-19-nv-blog-1280x680-logo.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gfn-thursday-3-19-nv-blog-1280x680-logo-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[Smooth Moves: 90 Frames-Per-Second Virtual Reality Arrives on GeForce NOW]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>From Simulation to Production: How to Build Robots With AI</title>
		<link>https://blogs.nvidia.com/blog/build-robots-with-ai/</link>
		
		<dc:creator><![CDATA[Katie Washabaugh]]></dc:creator>
		<pubDate>Wed, 18 Mar 2026 13:00:13 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cosmos]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<category><![CDATA[Isaac]]></category>
		<category><![CDATA[Jetson]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Physical AI]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<category><![CDATA[Synthetic Data Generation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91222</guid>

					<description><![CDATA[The latest open models and frameworks from NVIDIA bring together simulation, robot learning and embedded compute to accelerate cloud-to-robot workflows. ]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtc26-models-and-frameworks-1920x1080-1.gif" type="image/gif" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtc26-models-and-frameworks-1920x1080-1-842x450.gif" width="842" height="450" />
			<media:title type="html"><![CDATA[From Simulation to Production: How to Build Robots With AI]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>More Than Meets the Eye: NVIDIA RTX-Accelerated Computers Now Connect Directly to Apple Vision Pro</title>
		<link>https://blogs.nvidia.com/blog/nvidia-cloudxr-apple-vision-pro/</link>
		
		<dc:creator><![CDATA[Richard Kerris]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 17:00:58 +0000</pubDate>
				<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[CloudXR]]></category>
		<category><![CDATA[Digital Twin]]></category>
		<category><![CDATA[Gaming]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Industrial and Manufacturing]]></category>
		<category><![CDATA[Rendering]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<category><![CDATA[Virtual Reality]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91090</guid>

					<description><![CDATA[NVIDIA and Apple’s collaboration brings native integration of NVIDIA CloudXR 6.0 to visionOS, securely delivering NVIDIA RTX-powered simulators and professional 3D graphics applications — like Immersive for Autodesk VRED on Innoactive’s XR streaming solutions — to Apple Vision Pro.]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/apple-kia-featured-still-1920x1080-1.jpg" type="image/jpeg" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/apple-kia-featured-still-1920x1080-1-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[More Than Meets the Eye: NVIDIA RTX-Accelerated Computers Now Connect Directly to Apple Vision Pro]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>NVIDIA, Telecom Leaders Build AI Grids to Optimize Inference on Distributed Networks</title>
		<link>https://blogs.nvidia.com/blog/telecom-ai-grids-inference/</link>
		
		<dc:creator><![CDATA[Kanika Atri]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 17:00:15 +0000</pubDate>
				<category><![CDATA[AI Infrastructure]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<category><![CDATA[Inference]]></category>
		<category><![CDATA[NVIDIA RTX]]></category>
		<category><![CDATA[Telecommunications]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=90843</guid>

					<description><![CDATA[As AI‑native applications scale to more users, agents and devices, the telecommunications network is becoming the next frontier for distributing AI.  At NVIDIA GTC 2026, leading operators in the U.S. and Asia showed that this shift is underway, announcing AI grids — geographically distributed and interconnected AI infrastructure — using their network footprint to power [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">As AI‑native applications scale to more users, agents and devices, the telecommunications network is becoming the next frontier for distributing AI. </span></p>
<p><span style="font-weight: 400;">At NVIDIA GTC 2026, leading operators in the U.S. and Asia showed that this shift is underway, announcing </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/ai-grid/"><span style="font-weight: 400;">AI grids</span></a><span style="font-weight: 400;"> — geographically distributed and interconnected </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/ai-infrastructure/"><span style="font-weight: 400;">AI infrastructure</span></a><span style="font-weight: 400;"> — using their network footprint to power and monetize new AI services across the distributed edge.  </span></p>
<p><span style="font-weight: 400;">Different operators are taking different paths. Many are starting by lighting up existing wired edge sites as AI grids they can monetize today. Others harness </span><a target="_blank" href="https://www.nvidia.com/en-us/glossary/ai-ran/"><span style="font-weight: 400;">AI-RAN</span></a><span style="font-weight: 400;"> — a technology that enables the full integration of AI into the radio access network — as a workload and edge inference platform on the same grid.  </span></p>
<p><span style="font-weight: 400;">Telcos and distributed cloud providers run some of the most expansive infrastructure in the world: about 100,000 distributed network data centers worldwide, spanning regional hubs, mobile switching offices and central offices, with enough spare power to offer more than 100 gigawatts of new AI capacity over time.</span><span style="font-weight: 400;"><br />
</span><span style="font-weight: 400;"><br />
</span><span style="font-weight: 400;">AI grids turn this existing real estate, power and connectivity into a geographically distributed computing platform that runs AI inference closer to users, devices and data, where response times and cost per token align best. This is more than an infrastructure upgrade — it’s a structural change in how AI is delivered, putting telecom networks at the center of scaling AI rather than just carrying its traffic. </span></p>
<h2><b>Global Operators Turn Distributed Networks Into AI Grids</b></h2>
<p><span style="font-weight: 400;">Across six major operators, AI grids are moving from concept to reality.</span></p>
<p><span style="font-weight: 400;">AT&amp;T</span><span style="font-weight: 400;">, a leader in connected IoT with over 100 million connections across thousands of device types, <a target="_blank" href="https://about.att.com/story/2026/cisco-ai-grid-with-nvidia.html">is partnering with Cisco and NVIDIA</a> to build an AI grid for IoT. By running AI on a dedicated IoT core and moving AI inference closer to where data is created, AT&amp;T can support mission‑critical, real‑time applications like public‑safety use cases with Linker Vision, enabling faster detection, alerting and response while helping keep sensitive information under customer control at the network edge.</span></p>
<p><span style="font-weight: 400;">“Scaling AI services that are both highly secure and accessible for enterprises and developers is a core pillar of our IoT connectivity strategy,” said Shawn Hakl, senior vice president of product at AT&amp;T Business. “By combining AT&amp;T’s business‑grade connectivity, localized AI compute and zero‑trust security while working with members of the NVIDIA Inception program and harnessing Cisco’s AI Grid with NVIDIA infrastructure and Cisco Mobility Services Platform, we’re bringing real‑time AI inference closer to where data is generated — accelerating digital transformation and unlocking new business opportunities.”</span></p>
<p><a target="_blank" href="https://corporate.comcast.com/press/releases/comcast-nvidia-ai-network-edge-accelerate-next-generation-applications"><span style="font-weight: 400;">Comcast</span></a><span style="font-weight: 400;"> is developing one of the nation’s largest low‑latency broadband footprints into an AI grid for real‑time, hyper‑personalized experiences. Working with NVIDIA, </span><span style="font-weight: 400;">Decart</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Personal AI</span><span style="font-weight: 400;"> and </span><span style="font-weight: 400;">HPE</span><span style="font-weight: 400;">, Comcast has validated that its AI grid keeps conversational agents, interactive media and NVIDIA GeForce NOW cloud gaming responsive and economical even during demand spikes, with significantly higher throughput and lower cost per token.</span></p>
<p><a target="_blank" href="https://corporate.charter.com/newsroom/spectrum-deploys-ai-infrastructure-at-network-edge-using-nvidia-ai-grid"><span style="font-weight: 400;">Spectrum</span></a> <span style="font-weight: 400;">has the network infrastructure to support an AI grid that spans more than 1,000 edge data centers and hundreds of megawatts of capacity less than 10 milliseconds away from 500 million devices. The initial deployment focuses on rendering high-resolution graphics for media production using remote GPUs embedded across Spectrum&#8217;s fiber-powered, low-latency network.</span></p>
<p><a target="_blank" href="https://www.akamai.com/"><span style="font-weight: 400;">Akamai</span></a><span style="font-weight: 400;"> is building a globally distributed AI grid, expanding </span><a target="_blank" href="https://www.akamai.com/products/akamai-inference-cloud-platform"><span style="font-weight: 400;">Akamai Inference Cloud </span></a><span style="font-weight: 400;">across more than 4,400 edge locations with thousands of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. </span><a target="_blank" href="https://www.akamai.com/newsroom/press-release/akamai-launches-ai-grid-intelligent-orchestration-for-distributed-inference-across-4400-edge-locations"><span style="font-weight: 400;">Akamai&#8217;s AI grid orchestration platform</span></a><span style="font-weight: 400;"> matches each request to the right tier of compute, improving the token economics of inference while powering low-latency, real-time AI experiences for applications like gaming, media, financial services and retail.</span></p>
<p><a target="_blank" href="https://ioh.co.id/portal/en/iohcorppressreleasedetail/ioh-nvidia-nemotron?_id=10015514"><span style="font-weight: 400;">Indosat Ooredoo Hutchison</span></a><span style="font-weight: 400;"> is connecting its sovereign AI factory with distributed edge and AI‑RAN sites across Indonesia to build an AI grid for local innovation. By running Sahabat-AI — a Bahasa Indonesia-based platform — on this grid within Indonesia’s borders, Indosat can bring localized AI services closer to hundreds of millions of Indonesians across thousands of islands, giving local developers and startups a sovereign platform to build AI applications that are fast, culturally relevant and compliant by design.</span></p>
<p><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-t-mobile-and-partners-integrate-physical-ai-applications-on-ai-ran-ready-infrastructure"><span style="font-weight: 400;">T‑Mobile</span></a> <span style="font-weight: 400;">is working with NVIDIA to explore edge AI applications using NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, demonstrating how distributed network locations could support emerging AI-RAN and edge inference use cases. Developers including Linker Vision, Levatas, Vaidio, </span><span style="font-weight: 400;">Archetype AI</span><span style="font-weight: 400;"> and Serve Robotics are already piloting smart‑city, industrial and retail applications on the grid, connecting cameras, delivery robots and city‑scale agents to real-time intelligence on the network edge. This demonstrates how cell sites and mobile switching offices can support distributed edge AI workloads while continuing to deliver advanced 5G connectivity.</span></p>
<h2><b>New AI‑Native Services Put Telecom AI Grids to Work</b></h2>
<p><span style="font-weight: 400;">AI grids are becoming </span><a target="_blank" href="https://developer.nvidia.com/blog/building-the-ai-grid-with-nvidia-orchestrating-intelligence-everywhere/"><span style="font-weight: 400;">foundational</span></a><span style="font-weight: 400;"> to a new class of AI‑native applications — real‑time, hyper‑personalized, concurrent and token-intensive.</span></p>
<p><a target="_blank" href="http://personal.ai/gtcpr"><span style="font-weight: 400;">Personal AI </span></a><span style="font-weight: 400;">is using NVIDIA Riva to power human‑grade conversational agents on the AI grid. By running small language models closer to users, it achieves sub-500-millisecond end-to-end latency and over 50% lower cost per token, enabling voice experiences that feel natural while remaining economically viable at scale.</span></p>
<p><a target="_blank" href="https://www.linkervision.com/post/linker-vision-highlights-video-reasoning-ai-at-nvidia-gtc-2026"><span style="font-weight: 400;">Linker Vision</span></a><span style="font-weight: 400;"> is transforming city operations by running real‑time vision AI on the AI grid. By processing thousands of camera feeds across distributed edge sites, it delivers predictable latency for live detection and instant alerting — enabling safer, smarter cities with up to 10x faster traffic accident detection, 15x faster disaster response and sub‑minute alerts for unsafe crowd behavior. </span></p>
<p><a target="_blank" href="https://decart.ai/publications/decart-is-helping-shape-the-ai-grid"><span style="font-weight: 400;">Decart</span></a><span style="font-weight: 400;"> is redefining hyper‑personalized distributed media by bringing real‑time video generation to AI grids. By running its Lucy models at the network edge, it achieves sub‑12-millisecond network latency, enabling interactive video streams and overlays that adapt instantly to each viewer, delivering smooth, immersive live video experiences even when viewership peaks.</span></p>
<h2><b>AI Grid Reference Design and Ecosystem</b></h2>
<p><span style="font-weight: 400;">The NVIDIA </span><a target="_blank" href="http://docs.nvidia.com/ai-grid/whitepapers/ai-grid-reference-design"><span style="font-weight: 400;">AI Grid Reference Design</span></a><span style="font-weight: 400;"> defines the building blocks — including NVIDIA accelerated computing, networking and software platforms — for deploying and orchestrating AI across distributed sites.</span></p>
<p><span style="font-weight: 400;">A growing ecosystem of full‑stack partners including </span><a target="_blank" href="https://blogs.cisco.com/sp/monetizing-the-ai-opportunity-how-cisco-ai-grid-with-nvidia-transforms-networks-into-ai-platforms"><span style="font-weight: 400;">Cisco</span></a><span style="font-weight: 400;"> and infrastructure partners like </span><a target="_blank" href="https://www.hpe.com/us/en/newsroom/press-release/2026/03/hpe-transforms-distributed-ai-factories-into-intelligent-ai-grid-powered-by-nvidia.html"><span style="font-weight: 400;">HPE</span></a><span style="font-weight: 400;"> are bringing AI grid solutions to market on systems built with the </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/"><span style="font-weight: 400;">NVIDIA RTX PRO 6000 Blackwell Server Edition</span></a><span style="font-weight: 400;">. </span><a target="_blank" href="https://ar.md/gtc-ai-grid"><span style="font-weight: 400;">Armada</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://rafay.co/ai-and-cloud-native-blog/rafay-launches-ai-grid-orchestration-solution-to-help-telcos-intelligently-deploy-distributed-ai-infrastructure"><span style="font-weight: 400;">Rafay</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.spectrocloud.com/blog/nvidia-ai-grid"><span style="font-weight: 400;">Spectro Cloud </span></a><span style="font-weight: 400;">are among the partners building an AI grid control plane to seamlessly orchestrate workloads across distributed AI infrastructure.</span></p>
<p><span style="font-weight: 400;">“Physical AI is accelerating the shift from centralized intelligence to distributed decision making at the network edge,” said Masum Mir, senior vice president and general manager of provider mobility at Cisco. “Our partnership with NVIDIA brings together the full stack — from NVIDIA GPUs to Cisco’s networking and mobility capabilities — enabling operators to power mission-critical applications, deliver real-time inferencing and participate in the AI value chain.”</span></p>
<p><span style="font-weight: 400;">Together, this ecosystem is helping telcos and distributed cloud providers redefine their role in the AI value chain — transforming the network edge into a unified intelligence layer that runs, scales and monetizes AI workloads.</span></p>
<p><em>Learn more about <a target="_blank" href="https://www.nvidia.com/en-us/industries/telecommunications/ai-grid/">AI Grid</a>.</em></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtc26-tech-blog-telco-ai-grid-corp-blog-1280x680-1.png" type="image/png" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtc26-tech-blog-telco-ai-grid-corp-blog-1280x680-1-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[NVIDIA, Telecom Leaders Build AI Grids to Optimize Inference on Distributed Networks]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>GTC Spotlights NVIDIA RTX PCs and DGX Sparks Running Latest Open Models and AI Agents Locally</title>
		<link>https://blogs.nvidia.com/blog/rtx-ai-garage-gtc-2026-nemoclaw/</link>
		
		<dc:creator><![CDATA[Gerardo Delgado]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 13:00:46 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[GTC 2026]]></category>
		<category><![CDATA[NVIDIA RTX]]></category>
		<category><![CDATA[NVIDIA Studio]]></category>
		<category><![CDATA[RTX AI Garage]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91156</guid>

					<description><![CDATA[The paradigm of consumer computing has revolved around the concept of a personal device — from PCs to smartphones and tablets. Now, generative AI — particularly OpenClaw — has introduced a new category: agent computers. These devices, like the NVIDIA DGX Spark desktop AI supercomputer or dedicated NVIDIA RTX PCs, are ideal for running personal [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400">The paradigm of consumer computing has revolved around the concept of a personal device — from PCs to smartphones and tablets. Now, generative AI — particularly OpenClaw — has introduced a new category: agent computers. These devices, like the NVIDIA DGX Spark desktop AI supercomputer or dedicated NVIDIA RTX PCs, are ideal for running personal agents — privately and for free. </span></p>
<p><a target="_blank" href="https://www.nvidia.com/gtc/"><span style="font-weight: 400">NVIDIA GTC</span></a><span style="font-weight: 400">, running</span><span style="font-weight: 400"> this week, is showcasing a host of agentic AI announcements, including:</span></p>
<ul>
<li><span style="font-weight: 400">New open models for local agents, including NVIDIA Nemotron 3 Nano 4B and Nemotron 3 Super 120B, and optimizations for Qwen 3.5 and Mistral Small 4.</span></li>
<li><span style="font-weight: 400">NVIDIA NemoClaw, an open source stack that optimizes OpenClaw experiences on NVIDIA devices by increasing security and supporting local models. </span></li>
<li><span style="font-weight: 400">Easier fine‑tuning with Unsloth Studio</span> <span style="font-weight: 400">to further improve open model accuracy for agentic workflows.</span></li>
</ul>
<p><span style="font-weight: 400">In-person GTC attendees can swing by the </span><a href="https://blogs.nvidia.com/blog/gtc-2026-news/#build-a-claw"><span style="font-weight: 400">NVIDIA build-a-claw event</span></a><span style="font-weight: 400"> in the GTC Park, running daily through March 19, from 8 a.m. to 5 p.m. NVIDIA experts will help guests customize and deploy a proactive, always-on AI assistant using their device of choice. Whether technical experts or just curious, participants will name their agent, define its personality and grant it access to the tools it needs — creating a personal assistant reachable from their preferred messaging app.</span></p>
<h2><b>New Open Models Bring Cloud-Level Quality to Local Agents </b></h2>
<p><span style="font-weight: 400">The next generation of local models — with increasingly large context windows — delivers the intelligence to run agents on a PC. Combined with richer user context and powerful local tools, these advances are unlocking new possibilities on AI PCs, especially on DGX Spark, with its 128GB of unified memory that supports models with more than 120 billion parameters.</span></p>
<p><a href="https://blogs.nvidia.com/blog/nemotron-3-super-agentic-ai/"><b>Nemotron 3 Super</b></a><span style="font-weight: 400">, released last week, is a 120‑billion‑parameter open model with 12 billion active parameters, designed to run complex agentic AI systems. Nemotron 3 Super is optimal for powering agents on the DGX Spark or NVIDIA RTX PRO workstations. On </span><a target="_blank" href="https://pinchbench.com/?score=best"><span style="font-weight: 400">PinchBench</span></a><span style="font-weight: 400"> — a new benchmark for determining how well large language models perform with OpenClaw — Nemotron 3 Super scored 85.6%, making it the top open model in its class.</span></p>
<p><b>Mistral Small 4</b><span style="font-weight: 400">, a 119-billion-parameter open model with 6 billion active parameters — 8 billion including all layers — unifies the capabilities of Mistral’s flagship models. Users now have an ultraefficient model optimized for general chat, coding and agentic tasks.</span></p>
<p><span style="font-weight: 400">Both of these models run locally on DGX Spark and RTX PRO GPUs.</span></p>
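The total-versus-active parameter split above is what makes these mixture-of-experts models practical on local hardware: per-token compute scales with the active parameter count, not the total. A minimal sketch, assuming the common rule of thumb of roughly 2 FLOPs per active parameter per generated token (the helper name and the comparison are illustrative, not NVIDIA benchmarks):

```python
# Per-token compute for a decoder-only transformer, using the common
# ~2 FLOPs per active parameter per generated token rule of thumb.
# Illustrative only; real throughput also depends on memory bandwidth,
# quantization and batch size.

def flops_per_token(active_params: float) -> float:
    """Rough forward-pass FLOPs to generate one token."""
    return 2.0 * active_params

moe_super = flops_per_token(12e9)   # Nemotron 3 Super: 12B of 120B active
dense_27b = flops_per_token(27e9)   # dense model: every parameter active

print(f"MoE per-token work vs. dense 27B: {moe_super / dense_27b:.2f}x")
```

By this estimate, the 120-billion-parameter Nemotron 3 Super does less arithmetic per generated token than a dense 27B model, which is why total parameter count alone says little about local inference speed.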
<p><span style="font-weight: 400">For GeForce RTX users looking for smaller models, </span><b>Nemotron 3 Nano 4B</b><span style="font-weight: 400"> is the latest model to join the </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-debuts-nemotron-3-family-of-open-models"><span style="font-weight: 400">NVIDIA Nemotron 3 family of open models</span></a><span style="font-weight: 400">, providing a compact, capable starting point for building agents and assistants locally on RTX AI PCs. The model is a strong fit for building action-taking conversational personas in games and apps that run on resource-constrained hardware. It’s available on any NVIDIA GPU-enabled system and combines state-of-the-art instruction-following and exceptional tool use with a minimal VRAM footprint. </span></p>
<p><span style="font-weight: 400">In addition, NVIDIA announced optimizations for </span><b>Alibaba’s Qwen 3.5 models</b><span style="font-weight: 400">,</span> <span style="font-weight: 400">which have demonstrated outstanding accuracy (</span><a target="_blank" href="https://huggingface.co/Qwen/Qwen3.5-27B"><span style="font-weight: 400">27B</span></a><span style="font-weight: 400">, </span><a target="_blank" href="https://huggingface.co/Qwen/Qwen3.5-9B"><span style="font-weight: 400">9B</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://huggingface.co/Qwen/Qwen3.5-4B"><span style="font-weight: 400">4B</span></a><span style="font-weight: 400">) and are suited for running local agents on NVIDIA GPUs. The new models natively support vision, multi-token prediction and a large 262,000-token context window. The dense 27-billion-parameter model excels when paired with an RTX 5090 GPU.</span></p>
<figure id="attachment_91182" aria-describedby="caption-attachment-91182" style="width: 1200px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2026/03/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x.png"><img loading="lazy" decoding="async" class="size-large wp-image-91182" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x-1680x819.png" alt="" width="1200" height="585" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x-1680x819.png 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x-960x468.png 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x-1280x624.png 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x-1536x749.png 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x-630x307.png 630w, https://blogs.nvidia.com/wp-content/uploads/2026/03/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x.png 2045w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></a><figcaption id="caption-attachment-91182" class="wp-caption-text"><em>All configurations measured using Q4_K_M quantizations BS = 1, ISL = 1024 and OSL = 128 on NVIDIA RTX 5090 and Mac M3 Ultra desktops. Token generation throughput measured on llama.cpp b7789, using the llama-bench tool.</em></figcaption></figure>
<p><span style="font-weight: 400">Users can try these models today via Ollama, LM Studio and llama.cpp, with accelerated inference powered by RTX GPUs and DGX Spark. Learn more about the latest </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-expands-open-model-families-to-power-the-next-wave-of-agentic-physical-and-healthcare-ai"><span style="font-weight: 400">NVIDIA open models</span></a><span style="font-weight: 400">. </span></p>
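Quantization is what makes models of this size fit on a single GPU. A back-of-the-envelope sketch of weight memory, assuming Q4_K_M averages roughly 4.5 bits per weight (an approximation; the KV cache and activations need additional memory, and `weight_memory_gb` is a hypothetical helper, not part of llama.cpp):

```python
# Rough VRAM estimate for a quantized LLM's weights.
# Assumption (illustrative): Q4_K_M averages about 4.5 bits per weight;
# KV cache and activations are extra on top of this.

def weight_memory_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Approximate weight memory in gigabytes for a quantized model."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A dense 27B model at ~4.5 bits/weight needs on the order of 15 GB
# for its weights alone.
print(f"{weight_memory_gb(27):.1f} GB")
```

At that rate a dense 27-billion-parameter model needs roughly 15 GB for weights alone, which suggests why the post pairs it with a high-VRAM card like the RTX 5090.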
<h2><b>Faster Creative AI With the Latest RTX-Optimized Models</b></h2>
<p><span style="font-weight: 400">LTX 2.3, Lightricks’ state-of-the-art audio-video model, released earlier this month, now has support for </span><a target="_blank" href="https://huggingface.co/Lightricks/LTX-2.3-nvfp4"><span style="font-weight: 400">NVFP4</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://huggingface.co/Lightricks/LTX-2.3-fp8"><span style="font-weight: 400">FP8</span></a><span style="font-weight: 400"> distilled models, accelerating performance by 2.1x. Learn more about </span><a target="_blank" href="https://ltx.io/model/model-blog/ltx-2-3-release"><span style="font-weight: 400">Lightricks’ LTX 2.3 model</span></a><span style="font-weight: 400">.</span></p>
<p><iframe loading="lazy" title="Introducing LTX-2.3: Our Most Production-Ready Model Yet" width="1200" height="675" src="https://www.youtube.com/embed/o-7us-BR_gQ?start=5&#038;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400">In addition, Black Forest Labs’ FLUX.2 Klein 9B received an update last week, accelerating image editing by up to 2x. NVIDIA has collaborated with Black Forest Labs to release an </span><a target="_blank" href="https://huggingface.co/black-forest-labs/FLUX.2-klein-9b-kv"><span style="font-weight: 400">FP8 version</span></a><span style="font-weight: 400">, optimized for the fastest performance and optimal memory consumption on RTX GPUs. </span></p>
<h2><b>NVIDIA NemoClaw — NVIDIA Optimizations for OpenClaw</b></h2>
<p><span style="font-weight: 400">AI developers and enthusiasts are buying DGX Spark supercomputers or building dedicated RTX PCs to run autonomous AI agents, such as OpenClaw, that draw context from personal files, apps and workflows and can automate daily tasks. However, as adoption of agentic systems like OpenClaw grows, so do concerns about token costs, as well as security and privacy.</span></p>
<p><span style="font-weight: 400">To help address these concerns, NVIDIA this week introduced </span><a target="_blank" href="https://www.nvidia.com/en-us/ai/nemoclaw/"><span style="font-weight: 400">NemoClaw</span></a><span style="font-weight: 400">, an open source stack that brings OpenClaw optimizations to NVIDIA devices. The first features available in NemoClaw are NVIDIA Nemotron open models and the NVIDIA OpenShell runtime. The Nemotron models let users run inference locally, which means better privacy and no token costs. OpenShell is the runtime designed for executing claws more safely.</span></p>
<p><span style="font-weight: 400">Learn more about</span> <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-nemoclaw"><span style="font-weight: 400">NemoClaw</span></a><span style="font-weight: 400">. Watch the</span> <a target="_blank" href="https://www.nvidia.com/gtc/keynote/"><span style="font-weight: 400">GTC keynote</span></a><span style="font-weight: 400"> from NVIDIA founder and CEO Jensen Huang and explore </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/"><span style="font-weight: 400">sessions</span></a><i><span style="font-weight: 400">.</span></i></p>
<h2><b>Fine-Tuning Made Easy With Unsloth Studio</b></h2>
<p><span style="font-weight: 400">As open models make giant leaps, one way of further improving accuracy is fine-tuning, which allows users to customize a model for their own data and use cases. This technique normally requires in-depth technical expertise, coding knowledge and extensive configuration. Unsloth, a leading open source library for model fine-tuning and alignment, today launched Unsloth Studio, an easy-to-use, web-based user interface that simplifies the fine-tuning process for AI enthusiasts and developers.</span></p>
<p><iframe loading="lazy" title="Get Started with Unsloth Studio: Generate Data &amp; Fine-Tune LLMs Locally on any NVIDIA GPU" width="1200" height="675" src="https://www.youtube.com/embed/mmbkP8NARH4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><span style="font-weight: 400">Unsloth Studio offers support for more than 500 AI models. The simple user interface makes the training and fine-tuning process easy: Users can just drop in their dataset, tap the graph-based canvas to generate additional high-quality synthetic data and start the fine-tuning job. It supports quantized low-rank adaptation (QLoRA), low-rank adaptation (LoRA) and full fine-tuning. As the model is being fine-tuned, users can monitor and visualize job progress. Finally, they can export the model into a framework of choice and chat away, all within the same web app. </span></p>
<p><span style="font-weight: 400">Unsloth Studio’s new interface is built on the Unsloth library, which delivers up to 2x faster training with up to 70% VRAM savings, using custom and specialized GPU kernels. This means that new users can get the most out of their NVIDIA RTX GPUs and DGX Spark, right out of the box. </span></p>
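The VRAM savings described above come largely from low-rank adaptation: instead of updating a full weight matrix, LoRA trains two thin factors of small rank. A toy sketch of the parameter-count arithmetic (the layer sizes and rank below are hypothetical examples, not Unsloth defaults):

```python
# Why LoRA trains so few parameters: instead of updating a full d x k
# weight matrix W, it learns two thin matrices B (d x r) and A (r x k)
# with small rank r, so the effective update is W + B @ A.
# Sizes here are illustrative, not Unsloth's configuration.

def trainable_params(d: int, k: int, r: int) -> tuple[int, int]:
    """Return (full fine-tune, LoRA) trainable parameter counts for one layer."""
    full = d * k            # every weight updated
    lora = r * (d + k)      # only the low-rank factors updated
    return full, lora

full, lora = trainable_params(d=4096, k=4096, r=16)
print(full, lora, f"{lora / full:.2%}")  # LoRA trains well under 1% here
```

QLoRA pushes memory use down further by keeping the frozen base weights quantized to 4 bits while training the same small low-rank factors in higher precision.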
<p><span style="font-weight: 400">Try </span><a target="_blank" href="https://github.com/unslothai/unsloth-studio/tree/main/unsloth_studio"><span style="font-weight: 400">Unsloth Studio today</span></a><span style="font-weight: 400">, including with new models like Nemotron 3 Nano 4B and Qwen 3.5. Check out other </span><a href="https://blogs.nvidia.com/blog/rtx-ai-garage-fine-tuning-unsloth-dgx-spark/"><span style="font-weight: 400">RTX AI Garage</span></a><span style="font-weight: 400"> posts for more information on fine-tuning models with NVIDIA GeForce RTX GPUs.</span></p>
<h2><b>#ICYMI From GTC 2026</b></h2>
<p><span style="font-weight: 400"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2728.png" alt="✨" class="wp-smiley" style="height: 1em; max-height: 1em;" /></span><b>RTX AI</b> <b>video generation guide featuring RTX Video in ComfyUI: </b><span style="font-weight: 400">Launched at CES earlier this year, the new </span><a target="_blank" href="https://www.nvidia.com/en-us/geforce/news/rtx-ai-video-generation-guide/"><span style="font-weight: 400">RTX AI video generation guide</span></a><span style="font-weight: 400"> shows creators and enthusiasts how to go from concept to creation using guided text-to-image workflows to produce keyframes for AI-generated videos, then upscale to 4K with RTX Video technology running on local GPUs. Get started with the guide and share creations on social media with #AIonRTX.</span></p>
<p><span style="font-weight: 400"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4bf.png" alt="💿" class="wp-smiley" style="height: 1em; max-height: 1em;" /></span><a target="_blank" href="https://developer.nvidia.com/maxine?sortBy=developer_learning_library%2Fsort%2Ftitle%3Aasc"><b>NVIDIA AI for Media</b></a><span style="font-weight: 400"> is a set of high‑performance, easy‑to‑use software development kits that bring NVIDIA Broadcast-class AI effects — enhanced audio (</span><a target="_blank" href="https://catalog.ngc.nvidia.com/orgs/nvidia/teams/maxine/collections/maxine_linux_audio_effects_sdk_collection"><span style="font-weight: 400">Linux</span></a><span style="font-weight: 400"> or </span><a target="_blank" href="https://catalog.ngc.nvidia.com/orgs/nvidia/teams/maxine/collections/maxine_windows_audio_effects_sdk_collection"><span style="font-weight: 400">Windows</span></a><span style="font-weight: 400">), </span><a target="_blank" href="https://catalog.ngc.nvidia.com/orgs/nvidia/teams/maxine/collections/maxine_vfx_sdk"><span style="font-weight: 400">video</span></a><span style="font-weight: 400"> and </span><a target="_blank" href="https://catalog.ngc.nvidia.com/orgs/nvidia/teams/maxine/collections/maxine_ar_sdk"><span style="font-weight: 400">augmented-reality</span></a><span style="font-weight: 400"> features — to live media, video conferencing and post‑production workflows. The latest update — available today — adds more accurate lip-syncing, multi‑active-speaker detection, faster 4K upscaling on RTX PRO and GeForce RTX 40 and 50 Series GPUs via the RTX Video Super Resolution feature, better background noise reduction and lower latency for the NVIDIA Studio Voice feature.</span></p>
<p><span style="font-weight: 400"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4bb.png" alt="💻" class="wp-smiley" style="height: 1em; max-height: 1em;" /> </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-dlss-5-delivers-ai-powered-breakthrough-in-visual-fidelity-for-games"><b>NVIDIA DLSS 5</b></a><span style="font-weight: 400">, arriving this fall, delivers an AI-powered breakthrough in visual fidelity for games by infusing pixels with photoreal lighting and materials to bridge the gap between rendering and reality.</span></p>
<p><span style="font-weight: 400"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f916.png" alt="🤖" class="wp-smiley" style="height: 1em; max-height: 1em;" /></span><b>Maxon released Redshift 2026.4</b><span style="font-weight: 400">, introducing a new real-time visualization workflow powered by DLSS to allow architects to walk through projects at interactive speed and quality. “NVIDIA’s DLSS technology is a critical component, allowing us to deliver high-quality visuals at interactive speeds,” said Philip Losch, chief technology and AI officer at Maxon.</span></p>
<p><b><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1fa9f.png" alt="🪟" class="wp-smiley" style="height: 1em; max-height: 1em;" />Reincubate Camo has added Windows ML on NVIDIA TensorRT RTX EP </b><span style="font-weight: 400">for AI Autotune in its Camo Streamlight app, significantly improving performance on RTX GPUs.</span></p>
<p><i><span style="font-weight: 400">Plug in to NVIDIA AI PC on </span></i><a target="_blank" href="https://www.facebook.com/NVIDIA.AI.PC/"><i><span style="font-weight: 400">Facebook</span></i></a><i><span style="font-weight: 400">, </span></i><a target="_blank" href="https://www.instagram.com/nvidia.ai.pc/"><i><span style="font-weight: 400">Instagram</span></i></a><i><span style="font-weight: 400">, </span></i><a target="_blank" href="https://www.tiktok.com/@nvidia_ai_pc"><i><span style="font-weight: 400">TikTok</span></i></a><i><span style="font-weight: 400"> and </span></i><a target="_blank" href="https://x.com/NVIDIA_AI_PC"><i><span style="font-weight: 400">X</span></i></a><i><span style="font-weight: 400"> — and stay informed by subscribing to the </span></i><a target="_blank" href="https://www.nvidia.com/en-us/ai-on-rtx/?modal=subscribe-ai"><i><span style="font-weight: 400">RTX AI PC newsletter</span></i></a><i><span style="font-weight: 400">.</span></i></p>
<p><i><span style="font-weight: 400">Follow NVIDIA Workstation on </span></i><a target="_blank" href="https://www.linkedin.com/showcase/3761136/"><i><span style="font-weight: 400">LinkedIn</span></i></a><i><span style="font-weight: 400"> and </span></i><a target="_blank" href="https://x.com/NVIDIAworkstatn"><i><span style="font-weight: 400">X</span></i></a><i><span style="font-weight: 400">. </span></i></p>
<p><i><span style="font-weight: 400">See </span></i><a target="_blank" href="https://www.nvidia.com/en-eu/about-nvidia/terms-of-service/"><i><span style="font-weight: 400">notice</span></i></a><i><span style="font-weight: 400"> regarding software product information.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtc-2026-nv-blog-1280x680-2.jpg" type="image/jpeg" width="1280" height="680">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/gtc-2026-nv-blog-1280x680-2-842x450.jpg" width="842" height="450" />
			<media:title type="html"><![CDATA[GTC Spotlights NVIDIA RTX PCs and DGX Sparks Running Latest Open Models and AI Agents Locally]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
		<item>
		<title>Snap Decisions: How Open Libraries for Accelerated Data Processing Boost A/B Testing for Snapchat</title>
		<link>https://blogs.nvidia.com/blog/snap-accelerated-data-processing/</link>
		
		<dc:creator><![CDATA[Sid Sharma]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 13:00:23 +0000</pubDate>
				<category><![CDATA[Accelerated Analytics]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Consumer Internet]]></category>
		<category><![CDATA[CUDA-X]]></category>
		<category><![CDATA[Customer Stories]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=91197</guid>

					<description><![CDATA[The features on social media apps like Snapchat evolve nearly as fast as what’s trending. To keep pace, its parent company Snap has adopted open data processing libraries from NVIDIA on Google Cloud services to boost development.  Every new feature rolled out to Snapchat’s more than 940 million monthly active users goes through a set [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><span style="font-weight: 400;">The features on social media apps like Snapchat evolve nearly as fast as what’s trending. To keep pace, its parent company Snap has adopted open data processing libraries from NVIDIA on Google Cloud services to boost development. </span></p>
<p><span style="font-weight: 400;">Every new feature rolled out to Snapchat’s more than 940 million monthly active users goes through a set of controlled experiments before it’s launched. During this A/B testing cycle, the development team studies different variables with a subset of users, measuring nearly 6,000 metrics that analyze engagement, app performance and monetization. </span></p>
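<p><span style="font-weight: 400">Each such metric ultimately boils down to comparing a treatment group against a control group. As a generic illustration — not Snap&#8217;s actual methodology, and with invented numbers — a two-proportion z-test for a single engagement metric looks like:</span></p>

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic: does variant B's rate differ from
    variant A's by more than chance alone would explain?"""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 50,000 users per arm; variant B lifts the metric from 10.0% to 10.6%.
z = two_proportion_z(5000, 50_000, 5300, 50_000)
print(round(z, 2))  # -> 3.12, well past the usual 1.96 threshold
```

<p><span style="font-weight: 400">Running this kind of comparison across thousands of metrics and experiments is what turns A/B analysis into a petabyte-scale data processing problem.</span></p>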
<p><span style="font-weight: 400;">Snap runs thousands of these experiments each month — processing over 10 petabytes of data within a three-hour window each morning using the Apache Spark distributed framework. By adopting </span><a target="_blank" href="https://developer.nvidia.com/topics/ai/data-science/cuda-x-data-science-libraries/cudf#section-accelerate-apache-spark"><span style="font-weight: 400;">Apache Spark accelerated by NVIDIA cuDF</span></a><span style="font-weight: 400;">, the company is boosting these data processing workloads on NVIDIA GPUs to achieve 4x speedups in runtime with the same number of machines, providing a cost-effective path to scale.</span></p>
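<p><span style="font-weight: 400">Because the accelerator plugs into Spark&#8217;s SQL engine, enabling it is a configuration change rather than a code rewrite. A representative spark-submit invocation is sketched below — the jar version and resource sizes are illustrative, not Snap&#8217;s production settings:</span></p>

```shell
# Attach the RAPIDS Accelerator plugin so Spark SQL and DataFrame
# operations run on the GPU; unsupported operations fall back to CPU.
spark-submit \
  --jars rapids-4-spark_2.12-25.02.0.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  ab_metrics_job.py
```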
<p><span style="font-weight: 400;">By pairing NVIDIA’s GPU-optimized software, including NVIDIA CUDA-X libraries, with Google&#8217;s infrastructure management services such as Google Kubernetes Engine, Snap is harnessing a full-stack platform for data processing at scale. </span></p>
<p><span style="font-weight: 400;">“Experimentation is at the core of our company. Changing our data infrastructure from CPUs to GPUs allows us to efficiently scale this experimentation to more features, more metrics and more users over time,” said Prudhvi Vatala, senior engineering manager at Snap. “The more experiments we’re able to run, the more innovative experiences we can deliver for Snapchat users.”</span></p>
<h2><b>A Sustainable Way to Scale</b></h2>
<p><span style="font-weight: 400;">Snapchat fans frequently see new features in the app — from arrival notifications to AI-generated stickers — but Snap is also continuously rolling out behind-the-scenes updates such as performance optimizations and compatibility updates for new operating system versions. </span></p>
<p><span style="font-weight: 400;">The A/B testing for all these new features now runs on cuDF, which allows developers to run existing Apache Spark applications on NVIDIA GPUs with no code changes for easy deployment. The open library for accelerated data processing builds on the power of the NVIDIA </span><a target="_blank" href="https://developer.nvidia.com/topics/ai/data-science/cuda-x-data-science-libraries/cudf#section-accelerate-apache-spark"><span style="font-weight: 400;">cuDF</span></a><span style="font-weight: 400;"> GPU DataFrame library while scaling it for the Apache Spark distributed computing framework.</span></p>
<p><span style="font-weight: 400;">With this migration, the team has — based on Snap internal data collected between January 1 and February 28 — realized 76% daily cost savings using NVIDIA GPUs on Google Kubernetes Engine compared with CPU-only workflows.</span></p>
<p><span style="font-weight: 400;">“We were projecting an ambitious roadmap to scale up experimentation that would have blown up our computing costs based on our existing infrastructure,” Vatala said. “Switching to GPU-accelerated pipelines with cuDF gave us a way to flatten the scaling curve, and the results were tremendous.”</span></p>
<p><span style="font-weight: 400;"><img loading="lazy" decoding="async" class="aligncenter wp-image-91207 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2026/03/Snap_pullquote-scaled.jpg" alt="" width="2048" height="819" srcset="https://blogs.nvidia.com/wp-content/uploads/2026/03/Snap_pullquote-scaled.jpg 2048w, https://blogs.nvidia.com/wp-content/uploads/2026/03/Snap_pullquote-960x384.jpg 960w, https://blogs.nvidia.com/wp-content/uploads/2026/03/Snap_pullquote-1680x672.jpg 1680w, https://blogs.nvidia.com/wp-content/uploads/2026/03/Snap_pullquote-1280x512.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2026/03/Snap_pullquote-1536x614.jpg 1536w, https://blogs.nvidia.com/wp-content/uploads/2026/03/Snap_pullquote-630x252.jpg 630w" sizes="auto, (max-width: 2048px) 100vw, 2048px" />To support workload migration, the team also harnessed the cuDF suite of microservices that automatically qualify, test, configure and optimize Spark workloads for GPU acceleration at scale. </span></p>
<p><span style="font-weight: 400;">Working with NVIDIA experts, the Snap team optimized its pipelines on Google Cloud’s G2 virtual machines powered by NVIDIA L4 GPUs so they required just 2,100 GPUs running concurrently — as opposed to the initial projection that around 5,500 GPUs would need to run concurrently, according to data Snap collected between January 1 and March 13.</span></p>
<p><span style="font-weight: 400;">“When I saw the results of the initial experiments, they were pretty crazy — we saw much higher cost savings than we had expected,” said Joshua Sambasivam, a backend engineer on the A/B testing team. “The Spark accelerator is a perfect match for our workloads.”</span></p>
<p><span style="font-weight: 400;">Looking ahead, the Snap team plans to extend the Spark accelerator beyond the A/B testing team to a broader range of production workloads. </span></p>
<p><span style="font-weight: 400;">“We didn’t realize we were sitting on this gold mine,” Vatala said. “We’ve so far migrated our two biggest pipelines, but there’s a lot of opportunity ahead.” </span></p>
<p><span style="font-weight: 400;">Learn more by tuning into </span><a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-s81678/"><span style="font-weight: 400;">Vatala’s session at NVIDIA GTC</span></a><span style="font-weight: 400;">, taking place </span><span style="font-weight: 400;">Tuesday, March 17 at 1 p.m. PT</span><span style="font-weight: 400;">. </span></p>
<p><i><span style="font-weight: 400;">Read more about </span></i><a target="_blank" href="https://developer.nvidia.com/topics/ai/data-science/cuda-x-data-science-libraries/cudf"><i><span style="font-weight: 400;">NVIDIA cuDF </span></i></a><i><span style="font-weight: 400;">and get started with </span></i><a target="_blank" href="https://developer.nvidia.com/topics/ai/data-science/cuda-x-data-science-libraries/cudf#section-accelerate-apache-spark"><i><span style="font-weight: 400;">GPU acceleration for Apache Spark</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
<p><i><span style="font-weight: 400;">Main image above courtesy of Snap, depicting A/B test of its Maps feature.</span></i></p>
]]></content:encoded>
					
		
		
				<media:content url="https://blogs.nvidia.com/wp-content/uploads/2026/03/agentic-ai-promo-snap-corp-gtc26-1920x1080-5043163.png" type="image/png" width="1920" height="1080">
			<media:thumbnail url="https://blogs.nvidia.com/wp-content/uploads/2026/03/agentic-ai-promo-snap-corp-gtc26-1920x1080-5043163-842x450.png" width="842" height="450" />
			<media:title type="html"><![CDATA[Snap Decisions: How Open Libraries for Accelerated Data Processing Boost A/B Testing for Snapchat]]></media:title>
			<media:description type="html"></media:description>
		</media:content>
	</item>
	</channel>
</rss>
