<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/">

<channel>
	<title>GTC Archives | NVIDIA Blog</title>
	<atom:link href="https://blogs.nvidia.com/blog/tag/gtc/feed/" rel="self" type="application/rss+xml" />
	<link>https://blogs.nvidia.com/blog/tag/gtc/</link>
	<description></description>
	<lastBuildDate>Wed, 16 Oct 2024 19:00:21 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.2</generator>
	<item>
		<title>Tune In to the Top 5 NVIDIA Videos of 2023</title>
		<link>https://blogs.nvidia.com/blog/top-5-nvidia-videos-2023/</link>
		
		<dc:creator><![CDATA[Kristen Yee]]></dc:creator>
		<pubDate>Wed, 27 Dec 2023 16:00:22 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Climate]]></category>
		<category><![CDATA[Energy]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[NVIDIA DGX]]></category>
		<category><![CDATA[NVIDIA Modulus]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<category><![CDATA[Social Impact]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=68938</guid>

					<description><![CDATA[2023 was marked by the generative AI boom, representing a new era for how artificial intelligence can be used across industries. The year’s top videos from the NVIDIA YouTube channel reflect this focus, with popular videos highlighting the technology powering large language models, new platforms for building generative AI applications and how accelerated computing and	<a class="read-more" href="https://blogs.nvidia.com/blog/top-5-nvidia-videos-2023/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>2023 was marked by the <a href="https://www.nvidia.com/en-us/glossary/data-science/generative-ai/" target="_blank" rel="noopener">generative AI</a> boom, representing a new era for how artificial intelligence can be used across industries.</p>
<p>The year’s top videos from the <a href="https://www.youtube.com/@NVIDIA" target="_blank" rel="noopener">NVIDIA YouTube channel</a> reflect this focus, with popular videos highlighting the technology powering <a href="https://www.nvidia.com/en-us/glossary/data-science/large-language-models/" target="_blank" rel="noopener">large language models</a>, new platforms for building generative AI applications and how accelerated computing and AI can advance climate science.</p>
<p>And don’t miss replays of NVIDIA founder and CEO Jensen Huang’s event appearances — his <a href="https://www.youtube.com/watch?v=DiGB5uAYKAg" target="_blank" rel="noopener">GTC keynote</a> in March has garnered 22 million views, making it by far the most-viewed video on the channel.</p>
<p>Tune in to NVIDIA’s top five videos of the year:</p>
<h2><strong>Predicting Extreme Weather Risk — Weeks in Advance</strong></h2>
<p>Explore in colorful detail how running FourCastNet — an AI framework developed by researchers at NVIDIA, Caltech and Lawrence Berkeley National Laboratory — on NVIDIA GPUs enables quicker, more accurate extreme weather predictions.</p>
<p><iframe title="Predicting Extreme Weather Risk Three Weeks in Advance With FourCastNet" width="500" height="281" src="https://www.youtube.com/embed/FUUT6IrQjo4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>Accelerating Carbon Capture and Storage</b></h2>
<p>Buckle up — learn how reservoir engineers are using <a href="https://www.nvidia.com/en-us/omniverse/" target="_blank" rel="noopener">NVIDIA Omniverse</a>, <a href="https://developer.nvidia.com/modulus" target="_blank" rel="noopener">NVIDIA Modulus</a> and <a href="https://blogs.nvidia.com/blog/what-is-accelerated-computing/#:~:text=Accelerated%20computing%20uses%20parallel%20processing,analytics%20to%20simulations%20and%20visualizations." target="_blank" rel="noopener">accelerated computing</a> to optimize carbon capture, ensuring long-term storage and safer operations.</p>
<p><iframe title="Accelerating Carbon Capture and Storage With Fourier Neural Operator and NVIDIA Modulus" width="500" height="281" src="https://www.youtube.com/embed/u-M5LQvx1cQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>Visualizing Global-Scale Climate Data</b></h2>
<p>Seeing is achieving with this stunning demo of the <a href="https://www.nvidia.com/en-us/high-performance-computing/earth-2/" target="_blank" rel="noopener">NVIDIA Earth-2 platform</a>, which offers high-resolution climate visualizations for scientists, as well as breathtakingly detailed urban airflow information for architects and city planners.</p>
<p><iframe title="Interactive Visualization of High-Resolution, Global-Scale Climate Data in the Cloud" width="500" height="281" src="https://www.youtube.com/embed/8cQoYcbUG_M?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>A Tour of the NVIDIA DGX H100</b></h2>
<p>Presenting the engine behind the large language model breakthrough — the <a href="https://www.nvidia.com/en-us/data-center/dgx-h100/" target="_blank" rel="noopener">NVIDIA DGX H100</a>. Hear from Huang on why DGX is “the essential instrument of AI.”</p>
<p><iframe loading="lazy" title="Quick Tour of NVIDIA DGX H100" width="500" height="281" src="https://www.youtube.com/embed/a_tXcmEeGxo?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>Fine-Tuning Generative AI With NVIDIA AI Workbench</b></h2>
<p>Check out this demo — featuring a multitude of Toy Jensens — to learn how <a href="https://www.nvidia.com/en-us/deep-learning-ai/solutions/data-science/workbench/" target="_blank" rel="noopener">NVIDIA AI Workbench</a> streamlines selecting <a href="https://blogs.nvidia.com/blog/what-are-foundation-models/" target="_blank" rel="noopener">foundation models</a>, building project environments and fine-tuning models with domain-specific data.</p>
<p><iframe loading="lazy" title="NVIDIA AI Workbench | Fine Tuning Generative AI" width="500" height="281" src="https://www.youtube.com/embed/ntMRzPzSvM4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><em>Explore <a target="_blank" href="https://www.nvidia.com/gtc/sessions/generative-ai/?nvid=nv-int-txtad-141445">generative AI</a> sessions and experiences at <a target="_blank" href="https://www.nvidia.com/gtc/">NVIDIA GTC</a>, the global conference on AI and accelerated computing, running March 18-21 in San Jose, Calif., and online.</em></p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2022/07/digital-twin-earth.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2022/07/digital-twin-earth-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[Tune In to the Top 5 NVIDIA Videos of 2023]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>Fresh-Faced AI: NVIDIA Avatar Solutions Enhance Customer Service and Virtual Assistants</title>
		<link>https://blogs.nvidia.com/blog/avatar-solutions-enhance-development/</link>
		
		<dc:creator><![CDATA[Stephanie Rubenstein]]></dc:creator>
		<pubDate>Tue, 21 Mar 2023 16:16:08 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[3D]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[GTC 2023]]></category>
		<category><![CDATA[NVIDIA NeMo]]></category>
		<category><![CDATA[Omniverse]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=63107</guid>

					<description><![CDATA[Companies across industries are looking to use interactive avatars to enhance digital experiences. But creating them is a complex, time-consuming process requiring state-of-the-art AI models that can see, hear, understand and communicate with end users. To ease this process, NVIDIA is providing creators and developers with real-time AI solutions through Omniverse Avatar Cloud Engine (ACE),	<a class="read-more" href="https://blogs.nvidia.com/blog/avatar-solutions-enhance-development/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Companies across industries are looking to use interactive avatars to enhance digital experiences. But creating them is a complex, time-consuming process requiring state-of-the-art AI models that can see, hear, understand and communicate with end users.</p>
<p>To ease this process, NVIDIA is providing creators and developers with real-time AI solutions through Omniverse Avatar Cloud Engine (ACE), a suite of cloud-native microservices for end-to-end development of interactive avatars. In collaboration with early-access partners, NVIDIA is delivering improvements that will provide users with the tools they need to easily design and deploy various kinds of avatars, from interactive chatbots to intelligent digital humans.</p>
<p><a target="_blank" href="https://nvidianews.nvidia.com/news/at-t-supercharges-operations-with-nvidia-ai">AT&amp;T and Quantiphi</a> are among the first to experience how Omniverse ACE can help increase employee productivity and enhance customer service experiences.</p>
<p>Omniverse ACE users can now seamlessly integrate NVIDIA AI into their applications, including <a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/products/riva/">Riva</a> for speech AI, NeMo service for natural language understanding, and Omniverse <a target="_blank" href="https://www.nvidia.com/en-us/omniverse/apps/audio2face/">Audio2Face</a> or Live Portrait for AI-powered 2D and 3D character animation.</p>
<p>With the latest improvements to Omniverse ACE, teams can also deploy advanced avatars across web conferencing and customer service use cases by integrating domain-specific NVIDIA AI workflows like <a target="_blank" href="https://developer.nvidia.com/nvidia-omniverse-platform/ace/tokkio-showcase">Tokkio</a> and <a target="_blank" href="https://developer.nvidia.com/maxine">Maxine</a>.</p>
<h2><b>Early Partners and Customers Develop AI-Driven Digital Humans</b></h2>
<p>AT&amp;T is planning to use Omniverse ACE and the Tokkio AI avatar workflow to build, customize and deploy virtual assistants for customer service and its employee help desk. Working with Quantiphi, one of NVIDIA’s service delivery partners, AT&amp;T is developing interactive avatars that can provide 24/7 support in local languages across regions. This is helping the company reduce costs while providing a better experience for its employees worldwide.</p>
<p>In addition to customer service, AT&amp;T is planning to build and develop digital humans for various use cases across the company.</p>
<p>“Quantiphi and NVIDIA have been collaborating to make customer experience more immersive by combining the power of large language models, graphics and recommender systems,” said Siddharth Kotwal, global head of NVIDIA Practice at Quantiphi. “NVIDIA’s Tokkio framework has made it easier to build, deploy and personalize AI-powered digital assistants or avatars for our enterprise customers. The process of seamlessly integrating automatic speech recognition, conversational agents and information retrieval systems with real-time animation has been simplified.”</p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2023/03/Copy-of-att-quantiphi-still-1-1280x680-1.jpg"><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-63109" src="https://blogs.nvidia.com/wp-content/uploads/2023/03/Copy-of-att-quantiphi-still-1-1280x680-1-672x357.jpg" alt="" width="672" height="357" /></a></p>
<p>Leading professional-services company Deloitte is also working with NVIDIA to help enterprises deploy transformative applications. Deloitte’s latest hybrid-cloud offerings — which consist of NVIDIA AI and Omniverse services and platforms, including Omniverse ACE — will be added to the Deloitte Center for AI Computing.</p>
<h2><b>An Advanced, Streamlined Solution for Deploying Avatars</b></h2>
<p>Omniverse ACE provides all the necessary tools so users can streamline the development process for realistic, intelligent avatars. Teams can also customize pre-built AI avatar workflows to suit their needs with applications like NVIDIA Tokkio. Additionally, Omniverse ACE is bringing new improvements to existing microservices.</p>
<p>Learn more about <a target="_blank" href="https://developer.nvidia.com/nvidia-omniverse-platform/ace">NVIDIA Omniverse ACE</a> and register to join the early-access program, available now for developers.</p>
<p>Dive into the art of AI avatars at <a target="_blank" href="https://www.nvidia.com/gtc/">GTC</a>, a global conference for the era of AI and the metaverse. Join sessions with NVIDIA and industry experts, and watch the GTC keynote below:</p>
<p><iframe loading="lazy" title="GTC 2023 Keynote with NVIDIA CEO Jensen Huang" width="500" height="281" src="https://www.youtube.com/embed/DiGB5uAYKAg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2023/03/gtc23-ace-tj-campaign-blog-1280x680-1.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2023/03/gtc23-ace-tj-campaign-blog-1280x680-1-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[Fresh-Faced AI: NVIDIA Avatar Solutions Enhance Customer Service and Virtual Assistants]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>NVIDIA Omniverse Accelerates Game Content Creation With Generative AI Services and Game Engine Connectors</title>
		<link>https://blogs.nvidia.com/blog/omniverse-accelerates-game-dev/</link>
		
		<dc:creator><![CDATA[Ike Nnoli]]></dc:creator>
		<pubDate>Tue, 21 Mar 2023 16:13:00 +0000</pubDate>
				<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Graphics Virtualization]]></category>
		<category><![CDATA[Pro Graphics]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Game Development]]></category>
		<category><![CDATA[GDC]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[GTC 2023]]></category>
		<category><![CDATA[Omniverse]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=63082</guid>

					<description><![CDATA[Powerful AI technologies are making a massive impact in 3D content creation and game development. Whether creating realistic characters that show emotion or turning simple texts into imagery, AI tools are becoming fundamental to developer workflows — and this is just the start. At NVIDIA GTC and the Game Developers Conference (GDC), learn how the	<a class="read-more" href="https://blogs.nvidia.com/blog/omniverse-accelerates-game-dev/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Powerful AI technologies are making a massive impact in 3D content creation and game development. Whether creating realistic characters that show emotion or turning simple text into imagery, AI tools are becoming fundamental to developer workflows — and this is just the start.</p>
<p>At <a target="_blank" href="https://www.nvidia.com/gtc/">NVIDIA GTC</a> and the <a target="_blank" href="https://www.nvidia.com/en-us/events/gdc/">Game Developers Conference</a> (GDC), learn how the <a target="_blank" href="https://www.nvidia.com/en-us/omniverse/">NVIDIA Omniverse</a> platform for creating and operating <a href="https://blogs.nvidia.com/blog/what-is-the-metaverse/">metaverse</a> applications is expanding with new Connectors and <a target="_blank" href="https://developer.nvidia.com/blog/rapidly-generate-3d-assets-for-virtual-worlds-with-generative-ai/">generative AI services</a> for game developers.</p>
<p>Part of the excitement around <a href="https://www.nvidia.com/en-us/glossary/data-science/generative-ai/" target="_blank" rel="noopener">generative AI</a> is because of its ability to capture the creator’s intent. The technology learns the underlying patterns and structures of data, and uses that to generate new content, such as images, audio, code, text, 3D models and more.</p>
<p>Announced today, the <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-brings-generative-ai-to-worlds-enterprises-with-cloud-services-for-creating-large-language-and-visual-models">NVIDIA AI Foundations</a> cloud services enable users to build, refine and operate custom <a href="https://blogs.nvidia.com/blog/what-are-large-language-models-used-for/">large language models (LLMs)</a> and generative AI trained with their proprietary data for their domain-specific tasks.</p>
<p>And through NVIDIA Omniverse, developers can get their first taste of using generative AI technology to enhance game creation and accelerate development pipelines with the <a target="_blank" href="https://www.nvidia.com/en-us/omniverse/apps/audio2face/">Omniverse Audio2Face</a> app.</p>
<h2><b>Accelerating 3D Content With Generative AI</b></h2>
<p>Specialized generative AI tools can boost creator productivity, even for users who don’t have extensive technical skills. Anyone can use generative AI to bring their creative ideas to life, producing high-quality, highly iterative experiences — all in a fraction of the time and cost of traditional game development.</p>
<p>For example, <a target="_blank" href="https://developer.nvidia.com/omniverse-platform/ace">NVIDIA Omniverse Avatar Cloud Engine (ACE)</a> offers the fastest, most versatile solution for bringing interactive avatars to life at scale. Game developers could leverage ACE to seamlessly integrate NVIDIA AI into their applications, including <a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/products/riva/">NVIDIA Riva </a>for creating expressive character voices using speech and translation AI, or <a target="_blank" href="https://www.nvidia.com/en-us/omniverse/apps/audio2face/">Omniverse Audio2Face</a> and Live Portrait for AI-powered 2D and 3D character animation.</p>
<p><iframe loading="lazy" title="NVIDIA Omniverse Audio2Face Real-Time Facial Animation Demo | GTC 2023 Updates" width="500" height="281" src="https://www.youtube.com/embed/-9OJZ1zOsDY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>Today, game developers are already taking advantage of Audio2Face, where artists are more efficiently animating characters without a tedious manual process. The app’s latest release brings major quality, usability and performance updates, including headless mode and a REST API — enabling developers to run the app and process numerous audio files from multiple users in the data center.</p>
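The headless mode and REST API described above enable batch pipelines: a client submits one job per audio file instead of driving the app interactively. The endpoint path, port and payload fields below are illustrative assumptions, not the documented Audio2Face API; this is only a sketch of the pattern.

```python
import json

# Hypothetical endpoint; the real headless Audio2Face service defines its own routes.
A2F_URL = "http://localhost:8011/process"  # assumption, not a documented port/path

def build_batch_requests(audio_paths, character="default"):
    """Build one JSON job payload per audio file for a headless animation service."""
    jobs = []
    for path in audio_paths:
        jobs.append({
            "url": A2F_URL,
            "body": json.dumps({"audio_file": path, "character": character}),
        })
    return jobs

# One request per file, ready to be POSTed by any HTTP client in the data center.
jobs = build_batch_requests(["take_01.wav", "take_02.wav"])
print(len(jobs))
```

Each payload is independent, so many users' audio files can be queued and processed in parallel, which is the point of running the app headless in the data center.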
<p>Mandarin Chinese language support can now be previewed in Audio2Face, along with improved lip-sync quality, more robust multi-language support and a new <a href="https://blogs.nvidia.com/blog/what-is-a-pretrained-ai-model/">pretrained</a> female model. The world’s first fully real-time, ray-traced subsurface scattering shader is also demonstrated in the demo with Diana, a new digital human model.</p>
<p>GSC Game World, one of Europe’s leading game developers, is adopting Omniverse Audio2Face in its upcoming game, <i>S.T.A.L.K.E.R. 2: Heart of Chornobyl</i>. Join the <a target="_blank" href="https://schedule.gdconf.com/session/stalker-2-next-gen-game-development-with-nvidia-omniverse-audio2face-presented-by-nvidia/894518">NVIDIA and GSC session at GDC</a> to learn how developers are implementing generative AI technology in Omniverse.</p>
<figure id="attachment_63096" aria-describedby="caption-attachment-63096" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2023/03/GDC-Blog-Copy-1.jpg"><img loading="lazy" decoding="async" class="wp-image-63096 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2023/03/GDC-Blog-Copy-1-672x357.jpg" alt="" width="672" height="357" /></a><figcaption id="caption-attachment-63096" class="wp-caption-text">A scene from “S.T.A.L.K.E.R. 2: Heart of Chornobyl.”</figcaption></figure>
<p>Fallen Leaf, an indie game developer, is also using Omniverse Audio2Face for character facial animation in <i>Fort Solis</i>, a third-person sci-fi thriller game that takes place on Mars.</p>
<p>New generative AI services such as NVIDIA Picasso, announced at GTC, preview the future of building and deploying assets for game production pipelines. Omniverse is opening portals to enrich workflows with generative AI tools powered by NVIDIA and its partners, and the momentum around unifying the game asset pipeline is growing.</p>
<h2><b>Unifying Game Asset Pipelines With Universal Scene Description</b></h2>
<p>Based on the <a target="_blank" href="https://www.nvidia.com/en-us/omniverse/usd/">Universal Scene Description (USD)</a> framework, NVIDIA Omniverse is the connecting fabric that helps creators and developers build interoperability between their favorite tools — like Autodesk Maya, Autodesk 3ds Max and Adobe Substance 3D Painter — or make their own custom applications.</p>
<p>And with USD — an open, extensible framework and ecosystem for composing, simulating and collaborating within 3D worlds — developers can achieve non-destructive, collaborative workflows when creating scenes, as well as simplify asset aggregation so content creation teams can iterate faster.</p>
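That non-destructive, collaborative workflow comes from USD's layer composition: each artist works in a separate layer, and a stage composes the layers, with stronger layers overriding weaker ones without ever editing them. A minimal hand-written sketch of two `.usda` text layers (the file names and prims here are made up for illustration):

```python
# A shared "set" layer that one team owns, written in USD's plain-text .usda form.
set_layer = """#usda 1.0

def Xform "Set"
{
    def Sphere "Ball"
    {
        double radius = 2
    }
}
"""

# A stage layer that sublayers set.usda and overrides the radius
# non-destructively: set.usda itself is never modified.
stage_layer = """#usda 1.0
(
    subLayers = [@set.usda@]
)

over "Set"
{
    over "Ball"
    {
        double radius = 5
    }
}
"""

with open("set.usda", "w") as f:
    f.write(set_layer)
with open("stage.usda", "w") as f:
    f.write(stage_layer)
```

Because opinions live in separate layers, two artists can edit the same scene concurrently and the composed stage resolves their contributions by layer strength, which is what enables the faster iteration described above.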
<figure id="attachment_63093" aria-describedby="caption-attachment-63093" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2023/03/GDC-Blog-Copy-2.jpg"><img loading="lazy" decoding="async" class="wp-image-63093 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2023/03/GDC-Blog-Copy-2-672x357.jpg" alt="" width="672" height="357" /></a><figcaption id="caption-attachment-63093" class="wp-caption-text">Image courtesy of Tencent Games.</figcaption></figure>
<p><a target="_blank" href="https://www.tencent.com/en-us/">Tencent Games</a> is adopting USD workflows based on Omniverse to better streamline content creation pipelines. To create vast worlds in every level of a game, the artists at Tencent use design tools such as Autodesk Maya, SideFX Houdini and Unreal Engine to produce up to millions of trees, buildings and other properties to enrich their scenes. The technical artists often look to optimize their content creation pipelines to speed up this process, so they developed a proprietary Unreal Engine workflow powered by OmniObjects.</p>
<p>With USD, Tencent Games’ teams saw the opportunity to easily streamline and seamlessly connect their workflows. Building on Omniverse as the platform for developing USD workflows, the artists at Tencent no longer need to install plug-ins for each software they use. Using just one USD plug-in enables interoperability across all their favorite software tools. Learn more about Tencent Games by <a target="_blank" href="https://schedule.gdconf.com/session/evolving-the-game-development-pipeline-with-omniverse-presented-by-nvidia/894088">joining this session at GDC</a>.</p>
<p>New and updated Omniverse Connectors for game engines are also now available.</p>
<p>The open-beta <a target="_blank" href="https://developer.nvidia.com/game-engines/unity-engine">Omniverse Connector for Unity</a> workflows helps users of Omniverse and <a target="_blank" href="https://unity.com/">Unity</a> collaborate on projects. Developed by NVIDIA, the Connector delivers USD support alongside Unity workflows, enabling Unity users to take advantage of interoperable workflows. It offers <a target="_blank" href="https://docs.omniverse.nvidia.com/prod_nucleus/prod_nucleus/overview.html">Omniverse Nucleus</a> connection and browsing, USD geometry export, lights, cameras, <a target="_blank" href="https://www.nvidia.com/en-us/design-visualization/technologies/material-definition-language/">Material Definition Language</a> and preview for USD materials. Early features also include physics export, USD import and unidirectional live sync.</p>
<p>And with the Unreal Engine Connector’s latest release, Omniverse users can now use Unreal Engine’s USD import utilities to add skeletal mesh blend shape importing, and Python USD bindings to access stages on Omniverse Nucleus. The latest release also delivers improvements in import, export and live workflows, as well as updated software development kits.</p>
<p>Learn more about these latest technologies by joining NVIDIA at <a target="_blank" href="https://www.nvidia.com/en-us/events/gdc/">GDC</a>.</p>
<p>And catch up on all the groundbreaking announcements in generative AI and the metaverse by watching the <a target="_blank" href="https://www.youtube.com/watch?v=DiGB5uAYKAg">NVIDIA GTC keynote</a>.</p>
<p><i>Follow NVIDIA Omniverse on</i><a target="_blank" href="https://www.instagram.com/nvidiaomniverse/"> <i>Instagram</i></a><i>,</i><a target="_blank" href="https://medium.com/@nvidiaomniverse"> <i>Medium</i></a><i>,</i><a target="_blank" href="https://twitter.com/nvidiaomniverse"> <i>Twitter</i></a><i> and</i><a target="_blank" href="https://www.youtube.com/channel/UCSKUoczbGAcMld7HjpCR8OA"> <i>YouTube</i></a><i> for additional resources and inspiration. Check out the Omniverse</i><a target="_blank" href="https://forums.developer.nvidia.com/c/omniverse/300"> <i>forums</i></a><i>, and join our</i><a target="_blank" href="https://discord.com/invite/XWQNJDNuaC"> <i>Discord server</i></a> <i>and</i><a target="_blank" href="https://www.twitch.tv/nvidiaomniverse"> <i>Twitch</i></a> <i>channel</i><i> to chat with the community.</i></p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2023/03/GDC-Blog-Featured-Image.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2023/03/GDC-Blog-Featured-Image-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[NVIDIA Omniverse Accelerates Game Content Creation With Generative AI Services and Game Engine Connectors]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>Signed, Sealed, Delivered: NVIDIA AI Achieves World Record in Route Optimization</title>
		<link>https://blogs.nvidia.com/blog/cuopt-world-record-route/</link>
		
		<dc:creator><![CDATA[Brian Caulfield]]></dc:creator>
		<pubDate>Tue, 21 Mar 2023 15:16:12 +0000</pubDate>
				<category><![CDATA[Accelerated Analytics]]></category>
		<category><![CDATA[Data Center]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[NVIDIA cuOpt]]></category>
		<category><![CDATA[Smart Spaces]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=62876</guid>

					<description><![CDATA[Promising more timely deliveries for consumers around the globe, NVIDIA’s cuOpt real-time route optimization software has set records on a key route optimization benchmark. NVIDIA cuOpt set three new records on the widely followed Li &#38; Lim pickup and delivery benchmark. Last-mile delivery is the most expensive part of the logistics industry, representing over 40%	<a class="read-more" href="https://blogs.nvidia.com/blog/cuopt-world-record-route/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Promising more timely deliveries for consumers around the globe, <a target="_blank" href="https://developer.nvidia.com/cuopt-logistics-optimization">NVIDIA’s cuOpt</a> real-time route optimization software has set records on a key route optimization benchmark.</p>
<p>NVIDIA cuOpt set three new records on the widely followed Li &amp; Lim pickup and delivery benchmark.</p>
<p>Last-mile delivery is the most expensive part of the logistics industry, representing over 40% of overall supply chain cost and carbon footprint, according to Gartner. Nearly 150 billion parcels are shipped every year, <a target="_blank" href="https://www.pitneybowes.com/content/dam/pitneybowes/us/en/shipping-index/22-pbcs-04529-2021-global-parcel-shipping-index-ebook-web-002.pdf">according to Pitney Bowes</a>.</p>
<p>AT&amp;T is using cuOpt to optimize routes for 30,000 technicians. “With cuOpt, AT&amp;T can find a solution 100x faster and update their dispatch in real time,” said NVIDIA CEO Jensen Huang, during his keynote at NVIDIA’s GTC technology conference Tuesday.</p>
<p><a target="_blank" href="https://nvidianews.nvidia.com/news/at-t-supercharges-operations-with-nvidia-ai">AT&amp;T is also testing digital assistants</a> built with <a target="_blank" href="https://developer.nvidia.com/omniverse-platform/ace">NVIDIA Omniverse Avatar Cloud Engine</a> to enhance the customer service experience and improve its employee help desk. <a target="_blank" href="https://nvidianews.nvidia.com/news/at-t-supercharges-operations-with-nvidia-ai">Additionally, the company is accelerating</a> its data-processing workflow using <a target="_blank" href="https://www.nvidia.com/en-us/deep-learning-ai/solutions/data-science/apache-spark-3/">NVIDIA RAPIDS Accelerator for Apache Spark</a>, a suite of libraries that enable GPU acceleration of data-science pipelines.</p>
<p>“AT&amp;T has adopted a full suite of NVIDIA AI libraries,” Huang said.</p>
<p>Better solutions to the pickup and delivery problems result in lower costs for manufacturers moving goods and services across the globe, quicker disaster relief and hotter, fresher pizza, among other benefits.</p>
<p>Introduced in 2021, NVIDIA cuOpt offers enterprises the ability to adapt to real-time data to optimize delivery routes by analyzing billions of feasible moves per second.</p>
<p>cuOpt is now at the center of a thriving partner ecosystem of system integrators and service providers, logistics and transportation software vendors, optimization software specialists and location service providers.</p>
<h2>A Global Benchmark</h2>
<p><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/ai-workflows/route-optimization/">Route optimization</a> is one of the most critical industrial computing problems of our time.</p>
<p>While the problems involved in route optimization are simple to state — what’s the most efficient way to visit a set of locations? — the computation required to determine the most efficient routes stacks up fast as the number of delivery vehicles, customers and delivery destinations increases.</p>
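<p>As a rough sketch of that blow-up, the toy Python below (made-up coordinates, not benchmark data) brute-forces a three-stop route and then counts how quickly the search space grows:</p>

```python
import itertools
import math

# Toy illustration of why routing computation "stacks up fast": with n stops
# there are n! possible visit orders, so exhaustive search stops scaling
# almost immediately. Coordinates are invented for illustration.
stops = {"B": (3, 4), "C": (6, 0), "D": (3, -4)}
DEPOT = (0, 0)

def route_length(order):
    """Total Euclidean distance of a depot -> stops -> depot loop."""
    points = [DEPOT] + [stops[s] for s in order] + [DEPOT]
    return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

# Brute force is feasible only for tiny instances like this one.
best = min(itertools.permutations(stops), key=route_length)
print(best, round(route_length(best), 2))  # ('B', 'C', 'D') 20.0

# The search space for realistic fleet sizes is astronomically larger:
print(math.factorial(10))  # 3628800 visit orders for just 10 stops
```

Solvers like cuOpt therefore rely on heuristic search rather than enumeration, which is where massive parallelism pays off.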
<p>That’s made the benchmarks for pickup and delivery problems set forth in 2001 by Haibing Li at the New Jersey Institute of Technology and Andrew Lim at the Hong Kong University of Science and Technology — who introduced a collection of 300 datasets by which a route’s efficiency can be measured — a widely watched global standard.</p>
<p>Researchers have been proposing best route plans for these benchmarks for more than two decades, inventing algorithms that set and reset the world&#8217;s best-known solutions, with past winners focusing on making small tweaks to previous routes.</p>
<p>The routes cuOpt created, by contrast, look unlike those of previous winners. cuOpt found an entirely new approach to the problem, delivering three world-record solutions on the largest instances of the Li &amp; Lim benchmark suite — those with 1,000 pickup and delivery locations.</p>
<p>The benchmark’s primary objective is to minimize fleet size; total distance traveled is the tiebreaker. Even so, cuOpt was able to cut the distance traveled by as much as 0.8% on instances with 1,000 pickup and delivery locations.</p>
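<p>That ranking rule can be sketched as a lexicographic comparison: a candidate with fewer vehicles always wins, and distance only breaks ties. The numbers below are illustrative, not actual Li &amp; Lim results:</p>

```python
def benchmark_key(solution):
    """Sort key: minimize fleet size first, total distance traveled second."""
    return (solution["vehicles"], solution["distance"])

# Hypothetical candidate solutions for one benchmark instance.
candidates = [
    {"name": "incumbent",  "vehicles": 100, "distance": 55000.0},
    {"name": "shorter",    "vehicles": 101, "distance": 52000.0},  # loses: larger fleet
    {"name": "new record", "vehicles": 100, "distance": 54560.0},  # wins: same fleet, ~0.8% less distance
]

best = min(candidates, key=benchmark_key)
print(best["name"])  # new record
```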
<p>By relying on the parallel computing capabilities of <a target="_blank" href="https://www.nvidia.com/en-us/data-center/a100/">NVIDIA A100 Tensor Core GPUs</a>, cuOpt is able to search for routes more deeply and more broadly.</p>
<p>The result: cuOpt’s improvement was 7.2x larger than the one achieved by the previous record-setting effort on the benchmark, and 26.6x larger than the one achieved by the record-setting effort before that.</p>
<p>cuOpt for production environments is available with <a target="_blank" href="https://www.nvidia.com/en-us/data-center/products/ai-enterprise/">NVIDIA AI Enterprise</a>, the software layer of the NVIDIA AI platform. With enterprise support for over 50 production-ready frameworks, pretrained models and development tools included, NVIDIA AI Enterprise is designed to accelerate enterprises to the bleeding edge of AI, while also simplifying AI to make it accessible to every enterprise.</p>
<p>Enterprises can also work with NVIDIA service delivery partners including Deloitte and Quantiphi to integrate cuOpt solutions into their business.</p>
<h2>Learn More</h2>
<p>Explore what <a target="_blank" href="https://developer.nvidia.com/cuopt-logistics-optimization">cuOpt</a> can do for your business, stay up to date on the latest <a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/cuopt-news/">news for cuOpt</a> and <a target="_blank" href="https://register.nvidia.com/flow/nvidia/gtcspring2023/attendeeportal/page/sessioncatalog/session/1666640735962001t7KA">join us at GTC</a>.</p>
<p>Resources to learn more:</p>
<ul>
<li style="font-weight: 400;" aria-level="1"><a target="_blank" href="https://developer.nvidia.com/cuopt-logistics-optimization">cuOpt Product Page</a></li>
<li style="font-weight: 400;" aria-level="1"><a target="_blank" href="https://developer.nvidia.com/cuopt-logistics-optimization/cloud-service-early-access">cuOpt Cloud Service Early Access Program</a></li>
<li style="font-weight: 400;" aria-level="1"><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/ai-workflows/route-optimization/">Route Optimization AI Workflow</a> (powered by cuOpt)</li>
<li style="font-weight: 400;" aria-level="1"><a target="_blank" href="https://www.nvidia.com/en-us/ai-data-science/cuopt-news/">cuOpt News Sign Up Form</a></li>
<li style="font-weight: 400;" aria-level="1">cuOpt GTC sessions
<ul>
<li style="font-weight: 400;" aria-level="2"><a target="_blank" href="https://register.nvidia.com/flow/nvidia/gtcspring2023/attendeeportal/page/sessioncatalog/session/1666640735962001t7KA">Advances in Operations Optimization</a></li>
<li style="font-weight: 400;" aria-level="2"><a target="_blank" href="https://register.nvidia.com/flow/nvidia/gtcspring2023/attendeeportal/page/sessioncatalog/session/1666640735962001t7KA">Accelerated AI Logistics and Route Optimization 101</a></li>
</ul>
</li>
</ul>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2023/03/delivered.jpg"
			type="image/jpeg"
			width="624"
			height="332"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2023/03/delivered.jpg"
			width="624"
			height="332"
			/>
			<media:title type="html"><![CDATA[Signed, Sealed, Delivered: NVIDIA AI Achieves World Record in Route Optimization]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>Keynote Wrap-Up: NVIDIA CEO Unveils Next-Gen RTX GPUs, AI Workflows in the Cloud</title>
		<link>https://blogs.nvidia.com/blog/keynote-gtc-nvidia-ceo/</link>
		
		<dc:creator><![CDATA[Brian Caulfield]]></dc:creator>
		<pubDate>Tue, 20 Sep 2022 16:38:16 +0000</pubDate>
				<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Data Center]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Driving]]></category>
		<category><![CDATA[Gaming]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[GeForce]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[GTC 2022]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[NVIDIA DRIVE]]></category>
		<category><![CDATA[NVIDIA DRIVE Sim]]></category>
		<category><![CDATA[NVIDIA NeMo]]></category>
		<category><![CDATA[NVIDIA RTX]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Rendering]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=59729</guid>

					<description><![CDATA[New cloud services to support AI workflows and the launch of a new generation of GeForce RTX GPUs featured today in NVIDIA CEO Jensen Huang’s GTC keynote, which was packed with new systems, silicon, and software. “Computing is advancing at incredible speeds, the engine propelling this rocket is accelerated computing, and its fuel is AI,”	<a class="read-more" href="https://blogs.nvidia.com/blog/keynote-gtc-nvidia-ceo/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-launches-omniverse-cloud-services-for-building-and-operating-industrial-metaverse-applications">New cloud services</a> to support AI workflows and the launch of a <a target="_blank" href="https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/">new generation of GeForce RTX GPUs</a> featured today in NVIDIA CEO Jensen Huang’s GTC keynote, which was packed with new systems, silicon, and software.</p>
<p>“Computing is advancing at incredible speeds, the engine propelling this rocket is accelerated computing, and its fuel is AI,” Huang said during a virtual presentation as he kicked off <a target="_blank" href="https://www.nvidia.com/gtc/">NVIDIA GTC</a>.</p>
<p>Again and again, Huang connected new technologies to new products to new opportunities – from harnessing AI to delight gamers with never-before-seen graphics to building virtual proving grounds where the world’s biggest companies can refine their products.</p>
<p>Driving the deluge of new ideas, new products and new applications: a singular vision of accelerated computing unlocking advances in AI, which, in turn, will touch industries around the world.</p>
<p>Gamers and creators will get the <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidias-new-ada-lovelace-rtx-gpu-arrives-for-designers-and-creators">first GPUs based on the </a><a target="_blank" href="https://www.nvidia.com/en-us/geforce/ada-lovelace-architecture/">new NVIDIA Ada Lovelace architecture</a>.</p>
<p>Enterprises will get powerful new tools for high-performance computing applications with systems based on the <a href="https://blogs.nvidia.com/blog/grace-hopper-recommender-systems/">Grace CPU and Grace Hopper Superchip</a>. Those building the 3D internet will get <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-ovx-computing-systems-the-graphics-and-simulation-foundation-for-the-metaverse-powered-by-ada-lovelace-gpu">new OVX servers powered by Ada Lovelace L40 data center GPUs</a>. Researchers and computer scientists get new <a href="https://blogs.nvidia.com/blog/what-are-large-language-models-used-for/" target="_blank" rel="noopener">large language model</a> capabilities with <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-launches-large-language-model-cloud-services-to-advance-ai-and-digital-biology">NVIDIA LLMs NeMo Service</a>. And the auto industry gets <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-unveils-drive-thor-centralized-car-computer-unifying-cluster-infotainment-automated-driving-and-parking-in-a-single-cost-saving-system">Thor, a new brain with an astonishing 2,000 teraflops of performance</a>.</p>
<p>Huang highlighted how NVIDIA’s technologies are being put to work by a sweep of major partners and customers across a breadth of industries.</p>
<p>To speed adoption, he announced <a target="_blank" href="https://nvidianews.nvidia.com/news/deloitte-and-nvidia-to-bring-new-services-built-on-nvidia-ai-and-omniverse-platforms-to-the-worlds-enterprises">Deloitte, the world’s largest professional services firm, is bringing new services built on NVIDIA AI and NVIDIA Omniverse to the world’s enterprises</a>.</p>
<p>And he shared customer stories from telecoms giant Charter, as well as General Motors in the automotive industry, the German railway system’s <a target="_blank" href="https://nvidianews.nvidia.com/news/deloitte-and-nvidia-to-bring-new-services-built-on-nvidia-ai-and-omniverse-platforms-to-the-worlds-enterprises">Deutsche Bahn</a> in transportation, <a target="_blank" href="https://nvidianews.nvidia.com/news/broad-institute-and-nvidia-accelerate-terra-cloud-serving-25000-researchers-advancing-biomedical-discovery">The Broad Institute in medical research</a>, and <a href="https://blogs.nvidia.com/blog/lowes-retail-digital-twins-omniverse">Lowe’s in retail</a>.</p>
<p>NVIDIA GTC, which kicked off this week, has become one of the world’s most important AI gatherings, with 200+ speakers from companies such as <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=linda%20hapgood#/session/1657650239584001oSsl">Boeing</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=Deutsche%20Bank#/session/1657215118557001qvpX">Deutsche Bank</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=lowe%27s#/session/1657552837943001kNnq">Lowe’s</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=Polestar#/session/1654887457554001cFmh">Polestar</a>,<a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=Johnson%20%26%20Johnson#/session/1659984532489001quiI"> Johnson &amp; Johnson</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=kroger#/session/1656702733431001bdRM">Kroger</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=Mercedes-Benz#/session/1658253101872001pJ69">Mercedes-Benz</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=siemens#/">Siemens AG</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=t-mobile#/session/1657833171460001cpNa">T-Mobile</a> and <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=us%20bank#/session/1657550992826001Q3mx">US Bank</a>. 
More than 200,000 people have registered for the conference.</p>
<h2><b>A ‘Quantum Leap’: GeForce RTX 40 Series GPUs</b></h2>
<p>First out of the blocks at the keynote was the launch of next-generation GeForce RTX 40 Series GPUs powered by Ada, which Huang called a “quantum leap” that paves the way for creators of fully simulated worlds.</p>
<figure id="attachment_59825" aria-describedby="caption-attachment-59825" style="width: 1280px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-59825 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-jhh-rtx4090-1280x680-r2-1.jpg" alt="" width="1280" height="680" srcset="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-jhh-rtx4090-1280x680-r2-1.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-jhh-rtx4090-1280x680-r2-1-960x510.jpg 960w" sizes="(max-width: 1280px) 100vw, 1280px" /><figcaption id="caption-attachment-59825" class="wp-caption-text">NVIDIA CEO Jensen Huang launched the next-generation GeForce RTX 40 Series GPUs.</figcaption></figure>
<p>Huang gave his audience a taste of what that makes possible by offering up a look at Racer RTX, a fully interactive simulation that’s entirely ray traced, with all the action physically modeled.</p>
<p>Ada’s advancements include a new Streaming Multiprocessor, a new RT Core with twice the ray-triangle intersection throughput, and a new Tensor Core with the Hopper FP8 Transformer Engine and 1.4 petaflops of Tensor processor power.</p>
<p>Ada also introduces the latest version of <a target="_blank" href="https://www.nvidia.com/en-us/geforce/technologies/dlss/">NVIDIA DLSS technology</a>, DLSS 3, which uses AI to generate new frames by comparing new frames with prior frames to understand how a scene is changing. The result: boosting game performance by up to 4x over brute force rendering.</p>
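<p>As a loose illustration of synthesizing an in-between frame from two rendered ones (a naive per-pixel blend, not the motion-vector-driven approach DLSS 3 actually uses), consider this sketch:</p>

```python
def interpolate_frame(prev_frame, next_frame, t=0.5):
    """Naive per-pixel linear blend between two rendered frames.

    DLSS 3 instead compares frames using motion vectors and an optical
    flow field, so moving objects land where they are headed rather than
    being smeared between positions; this blend only shows the idea of
    generating a frame the GPU never rendered.
    """
    return [a + (b - a) * t for a, b in zip(prev_frame, next_frame)]

rendered_a = [10, 20, 30]   # toy 3-pixel "frames"
rendered_b = [30, 20, 10]
print(interpolate_frame(rendered_a, rendered_b))  # [20.0, 20.0, 20.0]
```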
<p>DLSS 3 has received support from many of the world’s leading game developers, with more than 35 games and applications announcing support. “DLSS 3 is one of our greatest neural rendering inventions,” Huang said.</p>
<p>Together, Huang said, these innovations help deliver 4x more processing throughput with the new GeForce RTX 4090 versus its forerunner, the RTX 3090 Ti. “The new heavyweight champ” starts at $1,599 and will be available Oct. 12.</p>
<p>Additionally, the new GeForce RTX 4080 is launching in November with two configurations.</p>
<p>The GeForce RTX 4080 16GB, priced at $1,199, has 9,728 CUDA cores and 16GB of high-speed Micron GDDR6X memory. With DLSS 3, it’s twice as fast in today’s games as the GeForce RTX 3080 Ti,  and more powerful than the GeForce RTX 3090 Ti at lower power.</p>
<p>The GeForce RTX 4080 12GB has 7,680 CUDA cores and 12GB of Micron GDDR6X memory, and with DLSS 3 is faster than the RTX 3090 Ti, the previous-generation flagship GPU. It’s priced at $899.</p>
<p>Huang also announced that <a target="_blank" href="https://www.nvidia.com/en-us/geforce/news/portal-with-rtx-ray-tracing/">NVIDIA Lightspeed Studios used Omniverse to reimagine <i>Portal</i></a>, one of the most celebrated games in history. With <a target="_blank" href="https://www.nvidia.com/en-us/geforce/news/rtx-remix-announcement/">NVIDIA RTX Remix</a>, an AI-assisted toolset, users can mod their favorite games, enabling them to up-res textures and assets, and give materials physically accurate properties.</p>
<figure id="attachment_59816" aria-describedby="caption-attachment-59816" style="width: 1280px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-59816 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2300584-portal-1280x680-2.jpg" alt="" width="1280" height="680" srcset="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2300584-portal-1280x680-2.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2300584-portal-1280x680-2-960x510.jpg 960w" sizes="(max-width: 1280px) 100vw, 1280px" /><figcaption id="caption-attachment-59816" class="wp-caption-text">NVIDIA Lightspeed Studios used Omniverse to reimagine Portal, one of the most celebrated games in history.</figcaption></figure>
<h2><b>Powering AI Advances, H100 GPU in Full Production</b></h2>
<p>Once more tying systems and software to broad technology trends, Huang explained that large language models, or LLMs, and recommender systems are the two most important AI models today.</p>
<p>Recommenders “run the digital economy,” powering everything from e-commerce to entertainment to advertising, he said. “They’re the engines behind social media, digital advertising, e-commerce and search.”</p>
<p>And large language models based on the Transformer deep learning model first introduced in 2017 are now among the most vibrant areas for research in AI, and able to learn to understand human language without supervision or labeled datasets.</p>
<p>“A single pre-trained model can perform multiple tasks, like question answering, document summarization, text generation, translation and even software programming,” Huang said.</p>
<p>Delivering the computing muscle needed to power these enormous models, Huang said the NVIDIA H100 Tensor Core GPU, with Hopper’s next-generation Transformer Engine, is in full production, with systems shipping in the coming weeks.</p>
<p>“Hopper is in full production and coming soon to power the world’s AI factories,” Huang said.</p>
<p>Partners building systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro. And Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first to deploy H100-based instances in the cloud starting next year.</p>
<p>And Grace Hopper, which combines NVIDIA’s Arm-based Grace data center CPU with Hopper GPUs, with its 7x increase in fast-memory capacity, will deliver a “giant leap” for recommender systems, Huang said. Systems incorporating Grace Hopper will be available in the first half of 2023.</p>
<h2><b>Weaving Together the Metaverse, L40 Data Center GPUs in Full Production</b></h2>
<p>The next evolution of the internet, called the metaverse, will extend it with 3D, Huang explained. Omniverse is NVIDIA’s platform for building and running metaverse applications.</p>
<p>Here, too, Huang explained how connecting and simulating these worlds will require powerful, flexible new computers. And NVIDIA OVX servers are built for scaling out metaverse applications.</p>
<p>NVIDIA’s 2nd-generation OVX systems will be powered by Ada Lovelace L40 data center GPUs, which are now in full production, Huang announced.</p>
<h2><b>Thor for Autonomous Vehicles, Robotics, Medical Instruments and More</b></h2>
<p>In today’s vehicles, active safety, parking, driver monitoring, camera mirrors, cluster and infotainment are driven by different computers. In the future, they’ll be delivered by software that improves over time, running on a centralized computer, Huang said.</p>
<p>To power this, Huang introduced DRIVE Thor, which combines the transformer engine of Hopper, the GPU of Ada, and the amazing CPU of Grace.</p>
<p>The new Thor superchip delivers 2,000 teraflops of performance, replacing Atlan on the DRIVE roadmap, and providing a seamless transition from DRIVE Orin, which has 254 TOPS of performance and is currently in production vehicles. Thor will be the processor for robotics, medical instruments, industrial automation and edge AI systems, Huang said.</p>
<h2><b>3.5 Million Developers, 3,000 Accelerated Applications</b></h2>
<p>Bringing NVIDIA’s systems and silicon, and the benefits of accelerated computing, to industries around the world, is a software ecosystem with more than 3.5 million developers creating some 3,000 accelerated apps using NVIDIA’s 550 software development kits, or SDKs, and AI models, Huang announced.</p>
<p>And it’s growing fast. Over the past 12 months, NVIDIA has updated more than 100 SDKs and introduced 25 new ones.</p>
<p>“New SDKs increase the capability and performance of systems our customers already own, while opening new markets for accelerated computing,” Huang said.</p>
<h2><b>New Services for AI, Virtual Worlds</b></h2>
<p>Large language models “are the most important AI models today,” Huang said. Based on the transformer architecture, these giant models can learn to understand meanings and languages without supervision or labeled datasets, unlocking remarkable new capabilities.</p>
<p>To make it easier for researchers to apply this “incredible” technology to their work, Huang announced the Nemo LLM Service, an NVIDIA-managed cloud service to adapt <a href="https://blogs.nvidia.com/blog/what-is-a-pretrained-ai-model/" target="_blank" rel="noopener">pretrained</a> LLMs to perform specific tasks.</p>
<p>To accelerate the work of drug and bioscience researchers, Huang also announced BioNeMo LLM, a service to create LLMs that understand chemicals, proteins, DNA and RNA sequences.</p>
<p>Huang announced that NVIDIA is working with The Broad Institute, the world’s largest producer of human genomic information, to make NVIDIA Clara libraries, such as NVIDIA Parabricks, the Genome Analysis Toolkit, and BioNeMo, available on Broad’s Terra Cloud Platform.</p>
<figure id="attachment_59804" aria-describedby="caption-attachment-59804" style="width: 1280px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-full wp-image-59804" src="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2422307-broad-1280x680-1.jpg" alt="" width="1280" height="680" srcset="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2422307-broad-1280x680-1.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2422307-broad-1280x680-1-960x510.jpg 960w" sizes="(max-width: 1280px) 100vw, 1280px" /><figcaption id="caption-attachment-59804" class="wp-caption-text">NVIDIA is working with The Broad Institute, the world’s largest producer of human genomic information, to make NVIDIA Clara libraries available on Broad’s Terra Cloud Platform.</figcaption></figure>
<p><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-launches-omniverse-cloud-services-for-building-and-operating-industrial-metaverse-applications">Huang also detailed NVIDIA Omniverse Cloud</a>, an infrastructure-as-a-service that connects Omniverse applications running in the cloud, on premises or on a device.</p>
<p>New Omniverse containers – Replicator for synthetic data generation, Farm for scaling render farms, and Isaac Sim for building and training AI robots – are now available for cloud deployment, Huang announced.</p>
<p>Omniverse is seeing wide adoption, and Huang shared several customer stories and demos:</p>
<ul>
<li style="font-weight: 300;" aria-level="1">Lowe’s, which has nearly 2,000 retail outlets, is using Omniverse to design, build and operate digital twins of their stores;</li>
<li style="font-weight: 300;" aria-level="1">Charter, a $50 billion telecoms provider, and interactive data analytics provider HeavyAI are using Omniverse to create digital twins of Charter’s 4G and 5G networks;</li>
<li style="font-weight: 300;" aria-level="1">GM is creating a digital twin of its Michigan Design Studio in Omniverse where designers, engineers and marketers can collaborate.</li>
</ul>
<figure id="attachment_59810" aria-describedby="caption-attachment-59810" style="width: 1280px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-59810 size-full" src="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2430952-reinventing-retail-lowe-1280x680-1.jpg" alt="" width="1280" height="680" srcset="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2430952-reinventing-retail-lowe-1280x680-1.jpg 1280w, https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-2430952-reinventing-retail-lowe-1280x680-1-960x510.jpg 960w" sizes="(max-width: 1280px) 100vw, 1280px" /><figcaption id="caption-attachment-59810" class="wp-caption-text">Home improvement retailer Lowe’s is using Omniverse to design, build and operate digital twins of their stores.</figcaption></figure>
<h2><b>New Jetson Orin Nano for Robotics</b></h2>
<p>Shifting from virtual worlds to machines that will move through their world, robotic computers “are the newest types of computers,” Huang said, describing NVIDIA’s second-generation processor for robotics, Orin, as a home run.</p>
<p>To bring Orin to more markets, <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-jetson-orin-nano-sets-new-baseline-for-entry-level-edge-ai-and-robotics-with-80x-performance-leap">he announced the Jetson Orin Nano</a>, a tiny robotics computer that is 80x faster than the previous super-popular Jetson Nano.</p>
<p>Jetson Orin Nano runs the NVIDIA Isaac robotics stack and features the ROS 2 GPU-accelerated framework. NVIDIA Isaac Sim, a robotics simulation platform, is available in the cloud.</p>
<p>And for robotics developers using AWS RoboMaker, <a href="https://blogs.nvidia.com/blog/nvidia-isaac-sim-robotics-simulation">Huang announced that containers for the NVIDIA Isaac platform for robotics development are in the AWS marketplace</a>.</p>
<h2><b>New Tools for Video, Image Services</b></h2>
<p>Most of the world’s internet traffic is video, and user-generated video streams will be increasingly augmented by AI special effects and computer graphics, Huang explained.</p>
<p>“Avatars will do computer vision, speech AI, language understanding and computer graphics in real time and at cloud scale,” Huang said.</p>
<p>To make new innovations at the intersection of real-time graphics, AI and communications possible, <a href="https://blogs.nvidia.com/blog/computer-vision-cloud/">Huang announced NVIDIA has been building acceleration libraries like CV-CUDA</a>, a cloud runtime engine called UCF Unified Computing Framework, <a href="https://blogs.nvidia.com/blog/omniverse-ace-interactive-avatars/">Omniverse ACE Avatar Cloud Engine</a>, and a sample application called Tokkio for customer service avatars.</p>
<h2><b>Deloitte to Bring AI, Omniverse Services to Enterprises</b></h2>
<p>And to speed the adoption of all these technologies to the world’s enterprises, Deloitte, the world’s largest professional services firm, is bringing new services built on NVIDIA AI and NVIDIA Omniverse to the world’s enterprises, Huang announced.</p>
<p>He said that Deloitte’s professionals will help the world’s enterprises use NVIDIA application frameworks to build modern multi-cloud applications for customer service, cybersecurity, industrial automation, warehouse and retail automation and more.</p>
<h2><b>Just Getting Started</b></h2>
<p>Huang ended his keynote by recapping a talk that moved from outlining new technologies to product announcements and back — uniting scores of different parts into a singular vision.</p>
<p>“Today, we announced new chips, new advances to our platforms, and, for the very first time, new cloud services,” Huang said as he wrapped up. “These platforms propel new breakthroughs in AI, new applications of AI, and the next wave of AI for science and industry.”</p>
<p><iframe loading="lazy" title="GTC Sept 2022 Keynote with NVIDIA CEO Jensen Huang" width="500" height="281" src="https://www.youtube.com/embed/PWcNlRI00jo?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-jhh-1280x680-1.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2022/09/gtc22-fall-web-keynote-blog-image-jhh-1280x680-1-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[Keynote Wrap-Up: NVIDIA CEO Unveils Next-Gen RTX GPUs, AI Workflows in the Cloud]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>NVIDIA Introduces Open-Source Project to Accelerate Computer Vision Cloud Applications</title>
		<link>https://blogs.nvidia.com/blog/computer-vision-cloud/</link>
		
		<dc:creator><![CDATA[Michael Boone]]></dc:creator>
		<pubDate>Tue, 20 Sep 2022 16:06:47 +0000</pubDate>
				<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Data Center]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[GTC 2022]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=59548</guid>

					<description><![CDATA[Promising to help process images faster and more efficiently at a vast scale, NVIDIA introduced CV-CUDA, an open-source library for building accelerated end-to-end computer vision and image processing pipelines. The majority of internet traffic is video. Increasingly, this video will be augmented by AI special effects and computer graphics. To add to this complexity, fast-growing	<a class="read-more" href="https://blogs.nvidia.com/blog/computer-vision-cloud/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Promising to help process images faster and more efficiently at a vast scale, NVIDIA introduced <a target="_blank" href="https://developer.nvidia.com/cv-cuda">CV-CUDA</a>, an open-source library for building accelerated end-to-end computer vision and image processing pipelines.</p>
<p>The majority of internet traffic is video. Increasingly, this video will be augmented by AI special effects and computer graphics.</p>
<p>To add to this complexity, fast-growing social media and video-sharing services are experiencing growing cloud computing costs and bottlenecks in their AI-based imaging processing and computer vision pipelines.</p>
<p>CV-CUDA accelerates AI special effects such as relighting, reposing, blurring backgrounds and super resolution.</p>
<p>NVIDIA GPUs already accelerate the inference portion of AI computer vision pipelines. But pre- and post-processing using traditional computer vision tools gobble up time and computing power.</p>
<p>CV-CUDA gives developers more than 50 high-performance computer vision algorithms, a development framework that makes it easy to implement custom kernels and zero-copy interfaces to remove bottlenecks in the AI pipeline.</p>
<p>The result is higher throughput and lower cloud-computing costs. CV-CUDA can process 10x as many streams on a single GPU.</p>
<p>All this helps developers move much faster when tackling video content creation, 3D worlds, image-based recommender systems, image recognition and video conferencing.</p>
<p>Video content creation platforms must process, enhance and moderate millions of video streams daily and ensure mobile-based users have the best experience running their apps on any phone.</p>
<ul>
<li style="font-weight: 400;" aria-level="1">For those building 3D worlds or metaverse applications, CV-CUDA is anticipated to accelerate the vision tasks that help build or extend 3D worlds and their components.</li>
<li style="font-weight: 400;" aria-level="1">In image understanding and recognition, CV-CUDA can significantly speed up the pipelines running at hyperscale, allowing mobile users to enjoy sophisticated and responsive image recognition applications.</li>
<li style="font-weight: 400;" aria-level="1">And in video conferencing, CV-CUDA can support sophisticated augmented reality-based features. These features could involve complex AI pipelines requiring numerous pre- and post-processing steps.</li>
</ul>
<p>CV-CUDA accelerates pre- and post-processing pipelines through hand-optimized CUDA kernels and natively integrates into C/C++, Python and common deep learning frameworks, such as PyTorch.</p>
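<p>As a rough illustration of the pipeline shape described above (not CV-CUDA&#8217;s actual API; every function below is a placeholder), the stages can be sketched in plain Python. In a real deployment, CV-CUDA keeps the pre- and post-processing stages on the GPU alongside inference instead of running them on the CPU as shown here:</p>

```python
# Illustrative sketch only: placeholder functions standing in for the
# stages of a vision pipeline (decode -> preprocess -> infer -> postprocess).
# None of these names are CV-CUDA APIs.

def decode(raw_frame):
    # Stand-in for video decode: copy pixel rows into a working buffer.
    return [list(row) for row in raw_frame]

def preprocess(frame, scale=1 / 255.0):
    # Normalization to [0, 1], a typical pre-processing step.
    return [[p * scale for p in row] for row in frame]

def infer(frame):
    # Stand-in for the model: mean intensity as a fake "score".
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def postprocess(score, threshold=0.5):
    # Post-processing, e.g. thresholding a detection score.
    return "bright" if score >= threshold else "dark"

def run_pipeline(raw_frame):
    return postprocess(infer(preprocess(decode(raw_frame))))

print(run_pipeline([[0, 128, 255], [255, 255, 128]]))  # bright
```

<p>The point of the sketch is that every stage other than <code>infer</code> is pure data movement and arithmetic; those are the stages whose cost CV-CUDA is built to take off the CPU.</p>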
<p>CV-CUDA will be one of the core technologies that can accelerate AI workflows in <a target="_blank" href="https://www.nvidia.com/en-us/omniverse/">NVIDIA Omniverse</a>, a virtual world simulation and collaboration platform for 3D workflows.</p>
<p>Developers can get early access to code in December, with a beta release set for March.</p>
<p><strong>For more, visit the <a target="_blank" href="https://developer.nvidia.com/cv-cuda/early-access?ncid=prsy-631860-vt42#cid=gtcf22_prsy_en-us">early access interest page</a>.</strong></p>
<p><iframe loading="lazy" title="GTC Sept 2022 Keynote with NVIDIA CEO Jensen Huang" width="500" height="281" src="https://www.youtube.com/embed/PWcNlRI00jo?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h5>Featured image credit: Factory42/BBC Studios</h5>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2022/09/cv-cuda-gtc22-social-2481500-2048x1024-1.png"
			type="image/png"
			width="2048"
			height="1024"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2022/09/cv-cuda-gtc22-social-2481500-2048x1024-1-842x450.png"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[NVIDIA Introduces Open-Source Project to Accelerate Computer Vision Cloud Applications]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>How to Start a Career in AI</title>
		<link>https://blogs.nvidia.com/blog/how-to-start-a-career-in-ai/</link>
		
		<dc:creator><![CDATA[Brian Caulfield]]></dc:creator>
		<pubDate>Mon, 08 Aug 2022 15:00:22 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[How To]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning Institute]]></category>
		<category><![CDATA[Developer Program]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[Inception]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=58816</guid>

					<description><![CDATA[How do I start a career as a deep learning engineer? What are some of the key tools and frameworks used in AI? How do I learn more about ethics in AI? Everyone has questions, but the most common questions in AI always return to this: how do I get involved? Cutting through the hype	<a class="read-more" href="https://blogs.nvidia.com/blog/how-to-start-a-career-in-ai/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>How do I start a career as a deep learning engineer? What are some of the key tools and frameworks used in AI? How do I learn more about ethics in AI?</p>
<p>Everyone has questions, but the most common questions in AI always return to this: how do I get involved?</p>
<p>Cutting through the hype, a group of AI professionals gathered at NVIDIA’s GTC conference in the spring shared fundamental principles for building a career in AI, and advice on where to start.</p>
<p>Each panelist, in a conversation with NVIDIA’s Louis Stewart, head of strategic initiatives for the developer ecosystem, came to the industry from very different places.</p>
<p>Watch the <a href="https://www.nvidia.com/en-us/on-demand/session/gtcspring22-se2572/?playlistId=playList-40c8b78d-2f22-467e-8122-9962dcb0a3d0" target="_blank" rel="noopener">session on demand</a>.</p>
<p>But the speakers — Katie Kallot, NVIDIA’s former head of global developer relations and emerging areas; David Ajoku, founder of startup aware.ai; Sheila Beladinejad, CEO of Canada Tech; and Teemu Roos, professor at the University of Helsinki  — returned again and again to four basic principles.</p>
<h2><b>1) Start With Networking and Mentorship</b></h2>
<p>The best way to start, Ajoku explained, is to find people who are where you want to be in five years.</p>
<p>And don’t just look for them online — on Twitter and LinkedIn. Look for opportunities to connect with others in your community and at professional events who are going where you want to be.</p>
<p>“You want to find people you admire, find people who walk the path you want to be on over the next five years,” Ajoku said. “It doesn’t just come to you; you have to go get it.”</p>
<p>At the same time, be generous about sharing what you know with others. “You want to find people who will teach, and in teaching, you will learn,” he added.</p>
<p>But the best place to start is knowing that reaching out is okay.</p>
<p>“When I started my career in computer science, I didn’t even know I should be seeking a mentor,” Beladinejad said, echoing remarks from the other panelists.</p>
<p>“I learned not to be shy, to ask for support and seek help whenever you get stuck on something — always have the confidence to approach your professors and classmates,” she added.</p>
<h2><b>2) Get Experience</b></h2>
<p>Kallot explained that the best way to learn is by doing.</p>
<p>She got a degree in political science and learned about technology — including how to code — while working in the industry.</p>
<p>She started out as a sales and marketing analyst, then leaped to a product manager role.</p>
<p>“I had to learn everything about AI in three months, and at the same time I had to learn to use the product, I had to learn to code,” she said.</p>
<p>The best experience, explained Roos, is to surround yourself with people on the same learning journey, whether they’re learning online or in person.</p>
<p>“Don’t do it alone. If you can, grab your friends, grab your colleagues, maybe start a study group and create a curriculum,” he said. “Meet once a week, twice a week — it’s much more fun that way.”</p>
<h2><b>3) Develop Soft Skills</b></h2>
<p>You’ll also need the communications skills to explain what you’re learning, and doing, in AI as you progress.</p>
<p>“Practice talking about technical topics to non-technical audiences,” Stewart said.</p>
<p>Ajoku recommended learning and practicing public speaking.</p>
<p>Ajoku took an acting class at Carnegie Mellon University. Similarly, Roos took an improv comedy class.</p>
<p>Others on the panel learned to perform publicly through dance and sports.</p>
<p>“The more you’re cross-trained, the more comfortable you’re going to be and the better you’re going to be able to express yourself in any environment,” Stewart said.</p>
<h2><b>4) Define Your Why </b></h2>
<p>The most important element, however, comes from within, the panelists said.</p>
<p>They urged listeners to find a reason, something that drives them to stay motivated on their journey.</p>
<p>For some, it’s environmental issues. Others are driven by a desire to make technology more accessible, or to help make the industry more inclusive, panelists said.</p>
<p>“It’s helpful for anyone if you have a topic that you’re passionate about,” Beladinejad said. “That would help keep you going, keep your motivation up.”</p>
<p>Whatever you do, “do it with passion,” Stewart said. “Do it with purpose.”</p>
<h2><b>Burning Questions</b></h2>
<p>Throughout the conversation, thousands of virtual attendees submitted more than 350 questions about how to get started in their AI careers.</p>
<p>Among them:</p>
<p><b><i>What’s the best way to learn about deep learning? </i></b></p>
<p>The NVIDIA <a href="https://www.nvidia.com/en-us/training/" target="_blank" rel="noopener">Deep Learning Institute</a> offers a huge variety of hands-on courses.</p>
<p>Even more resources for new and experienced developers alike are available through the <a href="https://developer.nvidia.com/" target="_blank" rel="noopener">NVIDIA Developer program</a>, which includes <a href="https://developer.nvidia.com/higher-education-and-research" target="_blank" rel="noopener">resources for those pursuing higher education and research</a>.</p>
<p>Massive open online courses — or MOOCs — have made learning about technical subjects more accessible than ever. One panelist suggested looking for classes taught by Stanford Professor Andrew Ng on Coursera.</p>
<p>“There are many MOOC courses out there, YouTube videos and books — I highly recommend finding a study buddy as well,” another wrote.</p>
<p>“Join technical and professional networks … get some experience through volunteering, participating in a Kaggle competition, etc.”</p>
<p><b><i>What are some of the most prevalent tools and frameworks used in machine learning and AI in industry? Which ones are crucial to landing a first job or internship in the field?</i></b></p>
<p>The best way to figure out which technologies you want to start with, one panelist suggested, is to think about what you want to do.</p>
<p>Another suggested, however, that learning Python isn’t a bad place to begin.</p>
<p>“A lot of today’s AI tools are based on Python,” they wrote. “You can’t go wrong by mastering Python.”</p>
<p>“The technology is evolving rapidly, so many of today&#8217;s AI developers are constantly learning new things. Having software fundamentals like data structures and common languages like Python and C++ will help set you up to ‘learn on the job,’” another added.</p>
<p><b><i>What’s the best way to start getting experience in the field? Do personal projects count as experience? </i></b></p>
<p>Student clubs, online developer communities, volunteering and personal projects are all a great way to gain hands-on experience, panelists wrote.</p>
<p>And definitely include personal projects on your resume, another added.</p>
<p><b><i>Is there an age limit for getting involved in AI? </i></b></p>
<p>Age isn’t at all a barrier, whether you’re just starting out or transitioning from another field, panelists wrote.</p>
<p>Build a portfolio for yourself so you can better demonstrate your skills and abilities — that’s what should count.</p>
<p>Employers should be able to easily recognize your potential and skills.</p>
<p><b><i>I want to build a tech startup with some form of AI as the engine driving the solution to solve an as-yet-to-be-determined problem. What pointers do you have for entrepreneurs? </i></b></p>
<p>Entrepreneurs should apply to be a part of <a href="https://www.nvidia.com/en-us/startups/" target="_blank" rel="noopener">NVIDIA Inception</a>.</p>
<p>The program provides free benefits, such as technical support, go-to-market support, preferred pricing on hardware and access to its <a href="https://www.nvidia.com/en-us/startups/venture-capital/" target="_blank" rel="noopener">VC alliance</a> for funding.</p>
<p><b><i>Which programming language is best for AI?</i></b></p>
<p>Python is widely used in deep learning, machine learning and data science. The programming language is at the center of a thriving ecosystem of deep learning frameworks and developer tools. It’s predominantly used for training complex models and for real-time inference for web-based services.</p>
<p>C/C++ is a popular programming language for self-driving cars and is often used for deploying models for real-time inference.</p>
<p>Those getting started, though, will want to make sure they&#8217;re familiar with a broad array of tools, not just Python.</p>
<p>The NVIDIA Deep Learning Institute’s beginner self-paced courses can be one of the best ways to get oriented.</p>
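<p>A first project exercising those fundamentals can be tiny. The sketch below (illustrative only, standard-library Python, no frameworks) fits a slope to data with plain gradient descent, the kind of from-scratch exercise that makes the frameworks easier to learn later:</p>

```python
# Fit y = w * x to data with gradient descent on mean squared error,
# using nothing but built-in Python.

def fit_slope(xs, ys, lr=0.01, steps=500):
    w = 0.0
    for _ in range(steps):
        # d/dw of mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated with slope 2
print(round(fit_slope(xs, ys), 3))  # 2.0
```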
<h2><b>Learn More From GTC Sessions</b></h2>
<p>At NVIDIA GTC, a global conference on AI and the metaverse, professionals spoke about how they got started in their careers.</p>
<p>Watch the on-demand GTC sessions <a href="https://www.nvidia.com/en-us/on-demand/session/gtcfall22-se41226/?playlistId=playList-25f249c5-148d-4c04-a34f-13935b39b86a" target="_blank" rel="noopener"><i>How to Be a Deep Learning Engineer</i></a> and <i><a href="https://www.nvidia.com/en-us/on-demand/session/gtcfall22-se41225/?playlistId=playList-25f249c5-148d-4c04-a34f-13935b39b86a" target="_blank" rel="noopener">5 Paths to a Career in AI</a>.</i></p>
<p><i>Learn the AI essentials from NVIDIA fast: check out the </i><a href="https://www.nvidia.com/en-us/learn/" target="_blank" rel="noopener"><i>“getting started” resources</i></a><i> to explore the fundamentals of today’s hottest technologies on our </i><a href="https://www.nvidia.com/en-us/learn/" target="_blank" rel="noopener"><i>learning series page</i></a>.</p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2022/08/working-in-ai.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2022/08/working-in-ai-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[How to Start a Career in AI]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>Keynote Wrap-Up: Turning Data Centers into ‘AI Factories,’ NVIDIA CEO Intros Hopper Architecture, H100 GPU, New Supercomputers, Software</title>
		<link>https://blogs.nvidia.com/blog/ai-factories-hopper-h100-nvidia-ceo-jensen-huang/</link>
		
		<dc:creator><![CDATA[Brian Caulfield]]></dc:creator>
		<pubDate>Tue, 22 Mar 2022 16:45:40 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Data Center]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Driving]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Networking]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Supercomputing]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[GTC 2022]]></category>
		<category><![CDATA[NVIDIA A100]]></category>
		<category><![CDATA[NVIDIA DGX]]></category>
		<category><![CDATA[NVIDIA Hopper Architecture]]></category>
		<category><![CDATA[NVIDIA NeMo]]></category>
		<category><![CDATA[NVIDIA Triton]]></category>
		<category><![CDATA[NVLink]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Rendering]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=55990</guid>

					<description><![CDATA[Promising to transform trillion-dollar industries and address the “grand challenges” of our time, NVIDIA founder and CEO Jensen Huang Tuesday shared a vision of an era where intelligence is created on an industrial scale and woven into real and virtual worlds. Kicking off NVIDIA’s GTC conference, Huang introduced new silicon — including the new Hopper	<a class="read-more" href="https://blogs.nvidia.com/blog/ai-factories-hopper-h100-nvidia-ceo-jensen-huang/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Promising to transform trillion-dollar industries and address the “grand challenges” of our time, NVIDIA founder and CEO Jensen Huang Tuesday shared a vision of an era where intelligence is created on an industrial scale and woven into real and virtual worlds.</p>
<p>Kicking off <a target="_blank" href="https://www.nvidia.com/gtc/">NVIDIA’s GTC conference</a>, Huang introduced new silicon — including the <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-hopper-architecture-the-next-generation-of-accelerated-computing">new Hopper GPU architecture and new H100 GPU</a>, new AI and accelerated computing software and <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-dgx-h100-systems-worlds-most-advanced-enterprise-ai-infrastructure">powerful new data-center-scale systems</a>.</p>
<p>”Companies are processing, refining their data, making AI software, becoming intelligence manufacturers,” Huang said, speaking from a virtual environment in the <a target="_blank" href="https://www.nvidia.com/en-us/omniverse/">NVIDIA Omniverse</a> real-time 3D collaboration and simulation platform as he described how AI is “racing in every direction.”</p>
<p>And all of it will be brought together by Omniverse to speed collaboration between people and AIs, better model and understand the real world, and serve as a proving ground for new kinds of robots, “the next wave of AI.”</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/tEjH3g3-QOs" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Huang shared his vision with a gathering that has become one of the world’s most important AI conferences, bringing together leading developers, scientists and researchers.</p>
<p>The conference features more than 1,600 speakers, including ones from companies such as <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&amp;search=%22American%20Express%22#/session/1635797342151001A9kR">American Express</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&amp;search=%22DoorDash%22#/session/1641921821969001XuoS">DoorDash</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&amp;search=LinkedIn#/session/1642993259995001x1TE">LinkedIn</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&amp;search=LinkedIn#/session/1642993259995001x1TE">Pinterest</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&amp;search=LinkedIn#/">Salesforce</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&amp;search=servicenow#/session/1642993259995001x1TE">ServiceNow</a>, <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&amp;search=snap#/">Snap</a> and <a target="_blank" href="https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&amp;search=visa#/session/1639504110903001TUTW">Visa</a>, as well as 200,000 registered attendees.</p>
<p>Huang’s presentation began with a spectacular flythrough of NVIDIA’s new campus, rendered in Omniverse, including buzzing labs working on advanced robotics projects.</p>
<p>He shared how the company’s work with the broader ecosystem is saving lives by advancing healthcare and drug discovery, and even helping save our planet.</p>
<p>“Scientists predict that a supercomputer a billion times larger than today’s is needed to effectively simulate regional climate change,” Huang said.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-56212" src="https://blogs.nvidia.com/wp-content/uploads/2022/03/ExtremeWeather_4-672x378.png" alt="" width="672" height="378" /></p>
<p>“NVIDIA is going to tackle this grand challenge with our Earth-2, the world’s first AI digital twin supercomputer, and invent new AI and computing technologies to give us a billion-X before it’s too late,” he said.</p>
<h2>New Silicon — NVIDIA H100: A “New Engine of the World’s AI Infrastructure”</h2>
<p>To power these ambitious efforts, Huang introduced the NVIDIA H100, built on the <a href="https://blogs.nvidia.com/blog/nvidia-hopper-accelerates-dynamic-programming-using-dpx-instructions/">Hopper architecture</a>, as the “new engine of the world’s AI infrastructures.”</p>
<p>AI applications like speech, conversation, customer service and recommenders are driving fundamental changes in data center design, he said.</p>
<p>“AI data centers process mountains of continuous data to train and refine AI models,” Huang said. “Raw data comes in, is refined, and intelligence goes out — companies are manufacturing intelligence and operating giant AI factories.”</p>
<p>The factory operation is 24/7 and intense, Huang said, and minor improvements in quality drive significant increases in customer engagement and company profits.</p>
<p>H100 will help these factories move faster. The “massive” 80 billion transistor chip uses TSMC’s 4N process.</p>
<p>“Hopper H100 is the biggest generational leap ever — 9x at-scale training performance over A100 and 30x large-language-model inference throughput,” Huang said.</p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2022/03/H100-die-press-gtc22-spring-1600x900-copy-4.png"><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-56230" src="https://blogs.nvidia.com/wp-content/uploads/2022/03/H100-die-press-gtc22-spring-1600x900-copy-4-672x378.png" alt="" width="672" height="378" /></a></p>
<p>Hopper is packed with technical breakthroughs, including a <a href="https://blogs.nvidia.com/blog/h100-transformer-engine/">new Transformer Engine</a> to speed up these networks 6x without losing accuracy.</p>
<p>“Transformer model training can be reduced from weeks to days,” Huang said.</p>
<p>H100 is in production, with availability starting in Q3, Huang announced.</p>
<p>Huang also announced the <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-introduces-grace-cpu-superchip">Grace CPU Superchip</a>, NVIDIA’s first discrete data center CPU for high-performance computing.</p>
<p>It comprises two CPU chips connected over a 900 gigabytes per second NVLink chip-to-chip interconnect to make a 144-core CPU with 1 terabyte per second of memory bandwidth, Huang explained.</p>
<p>“Grace is the ideal CPU for the world’s AI infrastructures,” Huang said.</p>
<p>Huang also announced <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-dgx-h100-systems-worlds-most-advanced-enterprise-ai-infrastructure">new Hopper GPU-based AI supercomputers — DGX H100, H100 DGX POD and DGX SuperPOD</a>.</p>
<p>NVIDIA will be first to build a DGX SuperPOD with the groundbreaking new AI architecture to power the work of NVIDIA researchers advancing climate science, digital biology and the future of AI.</p>
<p>Its “Eos” supercomputer is expected to be the world’s fastest AI system after it begins operations later this year, featuring a total of 576 DGX H100 systems with 4,608 H100 GPUs.</p>
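<p>Those figures are internally consistent: dividing the GPU count by the system count recovers the eight-GPU configuration of a DGX H100 system.</p>

```python
# Sanity check on the Eos figures quoted above: 4,608 GPUs across
# 576 DGX H100 systems works out to eight GPUs per system, matching
# the eight-H100 configuration of a DGX H100.
systems = 576
total_gpus = 4608
assert total_gpus % systems == 0
gpus_per_system = total_gpus // systems
print(gpus_per_system)  # 8
```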
<p>To connect it all, NVIDIA’s new NVLink high-speed interconnect technology will be coming to all future NVIDIA chips — CPUs, GPUs, DPUs and SOCs, Huang said.</p>
<p>He also announced NVIDIA will <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-opens-nvlink-for-custom-silicon-integration">make NVLink available to customers and partners</a> to build companion chips.</p>
<p>“NVLink opens a new world of opportunities for customers to build semi-custom chips and systems that leverage NVIDIA’s platforms and ecosystems,” Huang said.</p>
<h2><strong>New Software — AI Has “Fundamentally Changed” Software</strong></h2>
<p>Thanks to the speedups unleashed by accelerated computing, the progress of AI is “stunning,” Huang declared.</p>
<p>“AI has fundamentally changed what software can make and how you make software,” Huang said.</p>
<p>Transformers, Huang explained, have made self-supervised learning possible and removed the need for human-labeled data. As a result, Transformers are being unleashed in a growing array of fields.</p>
<p>“Transformers made self-supervised learning possible, and AI jumped to warp speed,” Huang said.</p>
<p>Google BERT for language understanding, NVIDIA MegaMolBART for drug discovery, and DeepMind AlphaFold2 are all breakthroughs traced to Transformers, Huang said.</p>
<p>Huang walked through new deep learning models for natural language understanding, physics, creative design, character animation and even — with NVCell — chip layout.</p>
<p>“AI is racing in every direction — new architectures, new learning strategies, larger and more robust models, new science, new applications, new industries — all at the same time,” Huang said.</p>
<p>NVIDIA is “all hands on deck” to speed new breakthroughs in AI and speed the adoption of AI and machine learning to every industry, Huang said.</p>
<p><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-ai-delivers-major-advances-in-speech-recommender-system-and-hyperscale-inference">The NVIDIA AI platform is getting major updates</a>, Huang said, including Triton Inference Server, the NeMo Megatron 0.9 framework for training <a href="https://blogs.nvidia.com/blog/what-are-large-language-models-used-for/" target="_blank" rel="noopener">large language models</a>, and the <a href="https://blogs.nvidia.com/blog/maxine-reinvents-communication-ai/">Maxine</a> framework for audio and video quality enhancement.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/3GPNsPMqY8o" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>The platform includes <a target="_blank" href="https://www.nvidia.com/en-us/data-center/products/ai-enterprise-suite/">NVIDIA AI Enterprise 2.0</a>, an end-to-end, cloud-native suite of AI and data analytics tools and frameworks, optimized and certified by NVIDIA and now supported across every major data center and cloud platform.</p>
<p>“We <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-introduces-60+-updates-to-cuda-x-libraries-opening-new-science-and-industries-to-accelerated-computing">updated 60 SDKs</a> at this GTC,” Huang said. “For our 3 million developers, scientists and AI researchers, and tens of thousands of startups and enterprises, the same NVIDIA systems you run just got faster.”</p>
<p>NVIDIA AI software and accelerated computing SDKs are now relied on by some of the world’s largest companies.</p>
<ul>
<li style="font-weight: 400;" aria-level="1"><a href="https://blogs.nvidia.com/blog/microsoft-translator-triton-inference/">Microsoft Translator accelerates global communications</a> with real-time translation capabilities powered by NVIDIA Triton.</li>
<li style="font-weight: 400;" aria-level="1"><a href="https://blogs.nvidia.com/blog/att-data-science-rapids/">AT&amp;T accelerates</a> their data science teams with NVIDIA RAPIDS software that makes it easier to process trillions of records.</li>
</ul>
<p>“NVIDIA SDKs serve healthcare, energy, transportation, retail, finance, media and entertainment — a combined $100 trillion of industries,” Huang said.</p>
<h2>‘The Next Evolution’: Omniverse for Virtual Worlds</h2>
<p>Half a century ago, the Apollo 13 lunar mission ran into trouble. To save the crew, Huang said, NASA engineers created a model of the crew capsule back on Earth to “work the problem.”</p>
<p>“Extended to vast scales, a digital twin is a virtual world that’s connected to the physical world,” Huang said. “And in the context of the internet, it is the next evolution.”</p>
<p>NVIDIA Omniverse software for building digital twins, and <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-launches-data-center-scale-omniverse-computing-system-for-industrial-digital-twins">new data-center-scale NVIDIA OVX systems</a>, will be integral for “action-oriented AI.”</p>
<p>“Omniverse is central to our robotics platforms,” Huang said, <a href="https://blogs.nvidia.com/blog/omniverse-ecosystem-expands/">announcing new releases and updates for Omniverse</a>. “And like NASA and Amazon, we and our customers in robotics and industrial automation realize the importance of digital twins and Omniverse.”</p>
<p>OVX will run Omniverse digital twins for large-scale simulations with multiple autonomous systems operating in the same space-time, Huang explained.</p>
<p>The backbone of OVX is its networking fabric, Huang said, announcing the <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-spectrum-high-performance-data-center-networking-infrastructure-platform">NVIDIA Spectrum-4 high-performance data networking infrastructure platform</a>.</p>
<p>The world’s first 400Gbps end-to-end networking platform, NVIDIA Spectrum-4 consists of the Spectrum-4 switch family, <a target="_blank" href="https://www.nvidia.com/content/dam/en-zz/Solutions/networking/ethernet-adapters/connectx-7-datasheet-Final.pdf?ncid=so-pr-955473#cid=nbu03_so-pr_en-us">NVIDIA ConnectX-7 SmartNIC</a>, <a target="_blank" href="https://www.nvidia.com/en-us/networking/products/data-processing-unit/?ncid=so-pr-328673#cid=nbu03_so-pr_en-us">NVIDIA BlueField-3 DPU</a> and NVIDIA DOCA data center infrastructure software.</p>
<p>And to make Omniverse accessible to even more users, Huang announced <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-omniverse-cloud-to-connect-tens-of-millions-of-designers-and-creators">Omniverse Cloud</a>. Now, with just a few clicks, collaborators can connect through Omniverse on the cloud.<br />
<img loading="lazy" decoding="async" class="aligncenter size-large wp-image-56218" src="https://blogs.nvidia.com/wp-content/uploads/2022/03/OV-Cloud_2-672x378.png" alt="" width="672" height="378" /></p>
<p>Huang showed how this works with a demo of four designers, one an AI, collaborating to build a virtual world.</p>
<p>He also showed how Amazon uses Omniverse Enterprise “to design and optimize their incredible fulfillment center operations.”</p>
<p>“Modern fulfillment centers are evolving into technical marvels — facilities operated by humans and robots working together,” Huang said.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/-VQLqs6s9y0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<h2>The ‘Next Wave of AI’: Robots and Autonomous Vehicles</h2>
<p>New silicon, new software and new simulation capabilities will unleash “the next wave of AI,” Huang said: robots able to “devise, plan and act.”</p>
<p>NVIDIA Avatar, DRIVE, Metropolis, Isaac and Holoscan are robotics platforms built end to end and full stack around “four pillars”: ground-truth data generation, AI model training, the robotics stack and Omniverse digital twins, Huang explained.</p>
<p>The <a target="_blank" href="https://www.nvidia.com/en-us/self-driving-cars/">NVIDIA DRIVE autonomous vehicle system</a> is essentially an “AI chauffeur,” Huang said.</p>
<p>And Hyperion 8 — NVIDIA’s hardware architecture for self-driving cars on which NVIDIA DRIVE is built — can achieve full self-driving with a 360-degree camera, radar, lidar and ultrasonic sensor suite.</p>
<p>Hyperion 8 will ship in Mercedes-Benz cars starting in 2024, followed by Jaguar Land Rover in 2025, Huang said.</p>
<p>Huang announced that <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-enters-production-with-drive-orin-announces-byd-and-lucid-motors-as-new-ev-customers-unveils-next-gen-drive-hyperion-av-platform">NVIDIA Orin, a centralized AV and AI computer that acts as the engine of new-generation EVs, robotaxis, shuttles, and trucks, started shipping this month</a>.</p>
<p>And <a href="https://blogs.nvidia.com/blog/drive-hyperion-9-atlan/">Huang announced Hyperion 9</a>, featuring the coming DRIVE Atlan SoC for double the performance of the current DRIVE Orin-based architecture, which will ship starting in 2026.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-56233" src="https://blogs.nvidia.com/wp-content/uploads/2022/03/hyperion-9-press-gtc22-spring-1600x900-1-672x378.jpg" alt="" width="672" height="378" /></p>
<p>BYD, the second-largest EV maker globally, will adopt the DRIVE Orin computer for cars starting production in the first half of 2023, Huang announced.</p>
<p>And <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-enters-production-with-drive-orin-announces-byd-and-lucid-motors-as-new-ev-customers-unveils-next-gen-drive-hyperion-av-platform">Lucid Motors revealed that its DreamDrive Pro advanced driver-assistance system is built on NVIDIA DRIVE</a>.</p>
<p>Overall, <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-enters-production-with-drive-orin-announces-byd-and-lucid-motors-as-new-ev-customers-unveils-next-gen-drive-hyperion-av-platform">NVIDIA&#8217;s automotive pipeline has grown to more than $11 billion over the next six years</a>.</p>
<p><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-launches-ai-computing-platform-for-medical-devices-and-computational-sensing-systems">Clara Holoscan</a> puts much of the real-time computing muscle used in DRIVE to work supporting medical instruments and real-time sensors, such as RF ultrasound, 4K surgical video, high-throughput cameras and lasers.</p>
<p>Huang showed a video of Holoscan accelerating images from a light-sheet microscope — which creates a “movie” of cells moving and dividing.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/rXG27G3bWzY" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>It typically takes an entire day to process the 3TB of data these instruments produce in an hour.</p>
<p>At the Advanced Bioimaging Center at UC Berkeley, however, researchers using Holoscan are able to process this data in real time, enabling them to auto-focus the microscope while experiments are running.</p>
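<p>A quick back-of-the-envelope calculation shows what "real time" demands here. The figures below follow directly from the article's numbers (3TB per hour, versus a full day of batch processing); the variable names are illustrative, not from any NVIDIA API.</p>

```python
# Sustained throughput implied by the light-sheet microscope example:
# the instrument produces ~3 TB of data per hour, so real-time processing
# must keep up with roughly that data rate.
data_per_hour_tb = 3
bytes_per_hour = data_per_hour_tb * 1e12

required_gb_per_s = bytes_per_hour / 3600 / 1e9
print(f"Real-time throughput needed: {required_gb_per_s:.2f} GB/s")  # ~0.83 GB/s

# Processing the same hour of data over an entire day implies a far lower rate,
# which is why conventional pipelines fall a day behind the instrument.
batch_gb_per_s = bytes_per_hour / (24 * 3600) / 1e9
print(f"Day-long batch rate: {batch_gb_per_s:.3f} GB/s")  # ~0.035 GB/s
```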
<p>Holoscan development platforms are available to early-access customers today, will become generally available in May, and will reach medical-grade readiness in the first quarter of 2023.</p>
<p>NVIDIA is also working with thousands of customers and developers who are building robots for manufacturing, retail, healthcare, agriculture, construction, airports and entire cities, Huang said.</p>
<p>NVIDIA’s robotics platforms consist of Metropolis and Isaac — Metropolis is a stationary robot tracking moving things, while Isaac is a platform for things that move, Huang explained.</p>
<p>To help robots navigate indoor spaces — like factories and warehouses — <a href="https://blogs.nvidia.com/blog/nvidia-isaac-nova-orin-amrs/">NVIDIA announced Isaac Nova Orin</a>, built on <a target="_blank" href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/">Jetson AGX Orin</a>, a state-of-the-art compute and sensor reference platform to accelerate autonomous mobile robot development and deployment.</p>
<p>In a video, Huang showed how PepsiCo uses Metropolis and an Omniverse digital twin together.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/MXJIEB6CVtE" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<h2>Four Layers, Five Dynamics</h2>
<p>Huang ended by tying all the technologies, product announcements and demos back into a vision of how NVIDIA will drive forward the next generation of computing.</p>
<p>NVIDIA announced new products across its four-layer stack: hardware; system software and libraries; software platforms (NVIDIA HPC, NVIDIA AI and NVIDIA Omniverse); and AI and robotics application frameworks, Huang explained.</p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2022/03/Overview_1.png"><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-56209" src="https://blogs.nvidia.com/wp-content/uploads/2022/03/Overview_1-672x378.png" alt="" width="672" height="378" /></a></p>
<p>Huang also ticked through the five dynamics shaping the industry: million-X computing speedups, transformers turbocharging AI, data centers becoming AI factories, exponentially increasing demand for robotics systems, and digital twins for the next era of AI.</p>
<p>“Accelerating across the full stack and at data center scale, we will strive for yet another million-X in the next decade,” Huang said, concluding his talk. “I can’t wait to see what the next million-X brings.”</p>
<p>Noting that Omniverse generated “every rendering and simulation you saw today,” Huang then introduced a stunning video put together by NVIDIA’s creative team that takes viewers “on one more trip into Omniverse” for a surprising musical jazz number set in the heart of NVIDIA’s campus, with a cameo from Huang’s digital counterpart, Toy Jensen.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/DFKdU6AIseI" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2022/03/Hopper_1.png"
			type="image/png"
			width="1920"
			height="1080"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2022/03/Hopper_1-842x450.png"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[Keynote Wrap-Up: Turning Data Centers into ‘AI Factories,’ NVIDIA CEO Intros Hopper Architecture, H100 GPU, New Supercomputers, Software]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>GTC Wrap-Up: NVIDIA CEO Outlines Vision for Accelerated Computing, Data Center Architecture, AI, Robotics, Omniverse Avatars and Digital Twins in Keynote</title>
		<link>https://blogs.nvidia.com/blog/nvidia-ceo-accelerated-computing-ai-omniverse-avatars-robots-gtc/</link>
		
		<dc:creator><![CDATA[Brian Caulfield]]></dc:creator>
		<pubDate>Tue, 09 Nov 2021 09:34:35 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Data Center]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Driving]]></category>
		<category><![CDATA[Networking]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Supercomputing]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[Cybersecurity]]></category>
		<category><![CDATA[Digital Twin]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[GTC 2021]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[NVIDIA DGX]]></category>
		<category><![CDATA[NVIDIA DRIVE]]></category>
		<category><![CDATA[NVIDIA EGX]]></category>
		<category><![CDATA[NVIDIA Modulus]]></category>
		<category><![CDATA[NVIDIA NeMo]]></category>
		<category><![CDATA[NVIDIA Quantum-2]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Simulation and Design]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=53746</guid>

					<description><![CDATA[Bringing simulation of real and virtual worlds to everything from self-driving vehicles to avatars to robotics to modeling the planet’s climate, NVIDIA founder and CEO Jensen Huang Tuesday introduced technologies to transform multitrillion-dollar industries. Huang delivered a keynote at the company’s virtual GTC gathering where he unveiled NVIDIA Omniverse Avatar and NVIDIA Omniverse Replicator, among	<a class="read-more" href="https://blogs.nvidia.com/blog/nvidia-ceo-accelerated-computing-ai-omniverse-avatars-robots-gtc/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Bringing simulation of real and virtual worlds to everything from self-driving vehicles to avatars to robotics to modeling the planet’s climate, <a target="_blank" href="https://nvidianews.nvidia.com/bios/jensen-huang">NVIDIA founder and CEO Jensen Huang</a> Tuesday introduced technologies to transform multitrillion-dollar industries.</p>
<p>Huang delivered a keynote at the company’s virtual <a target="_blank" href="https://www.nvidia.com/gtc/">GTC</a> gathering where he unveiled <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-platform-for-creating-ai-avatars">NVIDIA Omniverse Avatar</a> and <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-omniverse-replicator-synthetic-data-generation-engine-for-training-ais">NVIDIA Omniverse Replicator</a>, among a host of announcements, demos and far-reaching initiatives.</p>
<p>Huang showed how <a target="_blank" href="https://www.nvidia.com/en-us/omniverse/">NVIDIA Omniverse</a> — the company’s virtual world simulation and collaboration platform for 3D workflows — brings NVIDIA’s technologies together.</p>
<p>And he showed a demonstration of Project Tokkio for customer support and Project Maxine for video conferencing with Omniverse Avatar.</p>
<figure id="attachment_53965" aria-describedby="caption-attachment-53965" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2021/11/toyjhh-1600x900-1.jpg"><img loading="lazy" decoding="async" class="wp-image-53965 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2021/11/toyjhh-1600x900-1-672x378.jpg" alt="" width="672" height="378" /></a><figcaption id="caption-attachment-53965" class="wp-caption-text">NVIDIA CEO Jensen Huang showed how Project Maxine for Omniverse Avatar connects computer vision, Riva speech AI, and avatar animation and graphics into a real-time conversational AI robot — the Toy Jensen Omniverse Avatar.</figcaption></figure>
<p>“A constant theme you’ll see — how Omniverse is used to simulate <a href="https://blogs.nvidia.com/blog/what-is-a-digital-twin/" target="_blank" rel="noopener">digital twins</a> of warehouses, plants and factories, of physical and biological systems, the 5G edge, robots, self-driving cars, and even avatars,” Huang said.</p>
<p>Huang ended by announcing NVIDIA will build a digital twin, called E-2, or Earth Two, to simulate and predict climate change.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/RLRx3sjZiqA" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<h2>‘Full Stack, Data Center Scale, and Open Platform’</h2>
<p><a href="https://blogs.nvidia.com/blog/what-is-accelerated-computing/">Accelerated computing</a> launched modern AI, and the waves it started are coming to science and the world’s industries, Huang said.</p>
<p>That starts with three chips, the <a href="https://blogs.nvidia.com/blog/whats-the-difference-between-a-cpu-and-a-gpu/">GPU, CPU</a> and <a href="https://blogs.nvidia.com/blog/whats-a-dpu-data-processing-unit/">DPU</a>, and systems — <a target="_blank" href="https://www.nvidia.com/en-us/data-center/dgx-systems/">DGX</a>, <a target="_blank" href="https://www.nvidia.com/en-sg/data-center/hgx-1/">HGX</a>, <a target="_blank" href="https://www.nvidia.com/en-us/data-center/products/egx/">EGX</a>, <a target="_blank" href="https://www.nvidia.com/en-us/geforce/20-series/">RTX</a> and <a target="_blank" href="https://www.nvidia.com/en-us/deep-learning-ai/products/agx-systems/">AGX</a> — spanning from cloud to the edge, he said.</p>
<p>NVIDIA has created 150 acceleration libraries for 3 million developers, from graphics and AI to sciences and robotics.</p>
<p>And <a href="https://blogs.nvidia.com/blog/new-accelerated-computing-libraries/">NVIDIA is introducing 65 new and updated SDKs</a> at GTC.</p>
<p>“NVIDIA accelerated computing is a full-stack, data-center-scale and open platform,” Huang said.</p>
<p>Huang introduced <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-quantum-2-takes-supercomputing-to-new-heights-into-the-cloud">NVIDIA Quantum-2</a>, “the most advanced networking platform ever built,” and with the BlueField-3 DPU, welcomes cloud-native supercomputing.</p>
<p>Quantum-2 offers the extreme performance, broad accessibility and strong security needed by cloud computing providers and supercomputing centers, he said.</p>
<p>Cybersecurity is a top issue for companies and nations, and Huang announced a three-pillar <a href="https://blogs.nvidia.com/blog/what-is-zero-trust/">zero-trust</a> framework to tackle the challenge.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/-eN_FnFrLM4" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" data-mce-fragment="1"></iframe></p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/networking/products/data-processing-unit/">BlueField DPUs</a> isolate applications from infrastructure. NVIDIA DOCA 1.2 — the latest BlueField SDK — enables next-generation distributed firewalls. <a target="_blank" href="https://developer.nvidia.com/morpheus-cybersecurity">NVIDIA Morpheus</a>, assuming the interloper is already inside, uses the “superpowers of accelerated computing and deep learning to detect intruder activities,” Huang said.</p>
<h2>Omniverse Avatar and Omniverse Replicator</h2>
<p>With Omniverse, “we now have the technology to create new 3D worlds or model our physical world,” Huang said.</p>
<p>To help developers create interactive characters with Omniverse that can see, speak, converse on a wide range of subjects and understand naturally spoken intent, NVIDIA announced Omniverse Avatar.</p>
<p>Huang showed how Project Maxine for Omniverse Avatar connects computer vision, Riva speech AI, and avatar animation and graphics into a real-time conversational AI robot — the Toy Jensen Omniverse Avatar.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/U9Zh57dGsH4" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>He also showed a demo of Project Tokkio, a customer-service avatar in a restaurant kiosk, able to see, converse with and understand two customers.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/ox2Cc88I-Os" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe><br />
Huang additionally showed Project Maxine’s ability to add state-of-the-art video and audio features to virtual collaboration and content creation applications.</p>
<p>A demo showed a woman speaking English on a video call in a noisy cafe; she can be heard clearly, with the background noise removed. As she speaks, her words are transcribed and translated in real time into French, German and Spanish. And, thanks to Omniverse, they’re spoken by an avatar that can engage in conversation using her own voice and intonation.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/jIaUhWXjzwo" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>To help developers create the huge amounts of data needed to train AI, <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-quantum-2-next-generation-infrastructure-brings-together-the-worlds-of-cloud-computing-and-supercomputing-data-centers">NVIDIA announced Omniverse Replicator</a>, a synthetic-data-generation engine for training deep neural networks.</p>
<p>NVIDIA has developed two replicators: one for general robotics, <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-quantum-2-next-generation-infrastructure-brings-together-the-worlds-of-cloud-computing-and-supercomputing-data-centers">Omniverse Replicator for Isaac Sim, and another for autonomous vehicles, Omniverse Replicator for DRIVE Sim</a>.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/F0huXhFfgw0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Since its launch late last year, Omniverse has been downloaded 70,000 times by designers at 500 companies.</p>
<p>Omniverse Enterprise is now available starting at $9,000 a year.</p>
<h2>AI Models and Systems</h2>
<p>Huang introduced <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-brings-large-language-ai-models-to-enterprises-worldwide">NeMo Megatron to train large language models</a>. Such models “will be the biggest mainstream HPC application ever,” he said.</p>
<p>Graphs — a key data structure in modern data science — can now be projected into deep neural network frameworks with Deep Graph Library, or DGL, a new Python package.</p>
<p><a href="https://blogs.nvidia.com/blog/modulus-framework/">NVIDIA Modulus</a>, introduced Tuesday, builds and trains physics-informed machine learning models that can learn and obey the laws of physics.</p>
<p><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-major-updates-to-triton-inference-server-as-25-000-companies-worldwide-deploy-nvidia-ai-inference">An upgrade to Triton</a>, an inference server for all workloads, now inferences forest models and does multi-GPU multi-node inference for <a href="https://blogs.nvidia.com/blog/what-are-large-language-models-used-for/" target="_blank" rel="noopener">large language models</a>.</p>
<p>And Huang introduced three new libraries.</p>
<ul>
<li style="font-weight: 400;" aria-level="1">cuOpt – for the logistics industry.</li>
<li style="font-weight: 400;" aria-level="1">cuQuantum – to accelerate quantum computing research.</li>
<li style="font-weight: 400;" aria-level="1">cuNumeric – to accelerate NumPy for scientists, data scientists and machine learning and AI researchers in the Python community.</li>
</ul>
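<p>The cuNumeric pitch is that existing NumPy code runs unmodified: you swap the import and the same array API executes on GPUs. A minimal sketch of that usage model follows; the NumPy fallback is purely illustrative for machines without cuNumeric installed, and the computation itself is just an example, not from the announcement.</p>

```python
# cuNumeric is positioned as a drop-in replacement for NumPy: accelerate
# existing array code by swapping the import. The try/except fallback here
# is for illustration on systems without cuNumeric installed.
try:
    import cunumeric as np  # GPU-accelerated NumPy-compatible arrays
except ImportError:
    import numpy as np      # the identical code still runs on the CPU

a = np.arange(1_000_000, dtype=np.float64)
result = np.sqrt(a).sum()   # same API call either way
print(f"sum of sqrt over 0..999999 = {result:.1f}")
```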
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/z5-gKQFqE_4" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>To help deliver services based on NVIDIA&#8217;s AI technologies to the edge, <a href="https://blogs.nvidia.com/blog/launchpad-global-expansion/">Huang announced NVIDIA Launchpad</a>.</p>
<p>NVIDIA is partnering with data center powerhouse Equinix to pre-install and integrate NVIDIA AI into data centers worldwide.</p>
<h2>Robotics</h2>
<p><a target="_blank" href="https://www.nvidia.com/en-us/deep-learning-ai/industries/robotics/">NVIDIA’s Isaac</a> ecosystem now has over 700 companies and partners, a number that has grown fivefold in the last four years.</p>
<p>Huang announced that the NVIDIA Isaac robotics platform can now be easily integrated into the <a target="_blank" href="https://www.ros.org/">Robot Operating System</a>, or ROS, a widely used set of software libraries and tools for robot applications.</p>
<p><a target="_blank" href="https://developer.nvidia.com/isaac-sim">Isaac Sim</a>, built on Omniverse, is the most realistic robotics simulator ever created, Huang explained.</p>
<p>“The goal is for the robot to not know whether it is inside a simulation or the real world,” Huang said.</p>
<p>To aid this process, Isaac Sim Replicator can generate synthetic data to train robots.</p>
<p>Replicator simulates the sensors, generates data that is automatically labeled, and with a domain randomization engine, creates rich and diverse training data sets, Huang explained.</p>
<h2>Autonomous Vehicles</h2>
<p>Everything that moves will be autonomous—fully or mostly autonomous, Huang said. “By 2024, the vast majority of new EVs will have substantial AV capability,” he added.</p>
<p><a target="_blank" href="https://developer.nvidia.com/drive">NVIDIA DRIVE</a> is NVIDIA’s full-stack and open platform for autonomous vehicles, and Hyperion 8 is NVIDIA’s latest complete hardware and software architecture.</p>
<p>Its sensor suite includes 12 cameras, nine radars, 12 ultrasonics and one front-facing lidar, all processed by two <a href="https://blogs.nvidia.com/blog/drive-orin-central-computer/">NVIDIA Orin</a> SoCs.</p>
<p>Huang detailed several new technologies built into Hyperion, including Omniverse Replicator for DRIVE Sim, a synthetic data generator for autonomous vehicles built on Omniverse.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/qVyN_chiLeo" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe><br />
NVIDIA is now running Hyperion 8 sensors with 4D perception, deep learning-based multisensor fusion, feature tracking and a new planning engine.</p>
<p>The inside of the car will be revolutionized, too. The technology of NVIDIA Maxine will reimagine how we interact with our cars.</p>
<p>“With Maxine, your car will become a concierge,” Huang said.</p>
<h2>Earth Two</h2>
<p>Huang wrapped up by announcing NVIDIA will build Earth Two, or E-2, to simulate and predict climate change.</p>
<p>“All the technologies we’ve invented up to this moment are needed to make Earth Two possible,” Huang said.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/jhDiaUL_RaM" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2021/11/JHH_Screen04_ch4-edit.jpg"
			type="image/jpeg"
			width="1920"
			height="1080"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2021/11/JHH_Screen04_ch4-edit-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[GTC Wrap-Up: NVIDIA CEO Outlines Vision for Accelerated Computing, Data Center Architecture, AI, Robotics, Omniverse Avatars and Digital Twins in Keynote]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>GTC Offers Educational Sessions to Meet Exponential Demand for AI and Robotics Skills</title>
		<link>https://blogs.nvidia.com/blog/gtc-robotics-education/</link>
		
		<dc:creator><![CDATA[Lynette Farinas]]></dc:creator>
		<pubDate>Fri, 29 Oct 2021 21:24:46 +0000</pubDate>
				<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Deep Learning Institute]]></category>
		<category><![CDATA[Developer Program]]></category>
		<category><![CDATA[Education]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[Inception]]></category>
		<category><![CDATA[Jetson]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=53529</guid>

					<description><![CDATA[The demand for specialization in AI and robotics is growing more than ever. Roles in these two fields are multiplying across many industries, with titles such as AI specialist and robotics engineer, respectively, placing as the first- and second-fastest emerging jobs in the U.S., according to the most recent Emerging Jobs Report from LinkedIn. With	<a class="read-more" href="https://blogs.nvidia.com/blog/gtc-robotics-education/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>The demand for specialization in AI and robotics is growing more than ever.</p>
<p>Roles in these two fields are multiplying across many industries, with titles such as AI specialist and robotics engineer, respectively, placing as the first- and second-fastest emerging jobs in the U.S., according to the most recent <a href="https://business.linkedin.com/content/dam/me/business/en-us/talent-solutions/emerging-jobs-report/Emerging_Jobs_Report_U.S._FINAL.pdf" target="_blank" rel="nofollow noopener">Emerging Jobs Report</a> from LinkedIn.</p>
<p>With demand high, upskilling in these two fields is vital.</p>
<p>NVIDIA offers several opportunities and initiatives for students and developers to get hands-on experience in AI and robotics.</p>
<p><a href="https://www.nvidia.com/gtc/" target="_blank" rel="noopener">NVIDIA GTC</a>, running Nov. 8-11, will offer a number of sessions aimed at equipping educators with the appropriate tools to integrate into their curricula, and many more to inspire makers and enthusiasts just getting started or already innovating with AI and robotics.</p>
<p>Educators around the world — from <a href="https://blogs.nvidia.com/blog/emerging-chapters/" target="_blank" rel="noopener">Kenya</a> and <a href="https://developer.nvidia.com/blog/ai4kids-taiwan-inspires-ai-students-by-introducing-nvidia-jetson-nano/" target="_blank" rel="noopener">Taiwan</a>, to <a href="https://blogs.nvidia.com/blog/ai-pathways-boys-girls-clubs/" target="_blank" rel="noopener">Pittsburgh</a> and <a href="https://blogs.nvidia.com/blog/community-college-students-worlds-greatest-field-trip-gtc/?linkId=100000075878217" target="_blank" rel="noopener">Houston</a> — are bringing AI and robotics into the hands of students in their local communities. And they’ve been doing it with NVIDIA resources, such as the <a href="https://developer.nvidia.com/emerging-chapters" target="_blank" rel="noopener">NVIDIA Emerging Chapters program</a>, the <a href="https://www.nvidia.com/en-us/training/" target="_blank" rel="noopener">NVIDIA Deep Learning Institute</a>, <a href="https://mynvidia.force.com/NvidiaGrantProgram/s/Jetson-Nano-2GB-Application" target="_blank" rel="noopener">NVIDIA Jetson Nano 2GB Developer Kit Grant Program</a> and learning opportunities offered through GTC.</p>
<h2><b>Higher Ed at GTC</b></h2>
<p>GTC will feature several institutions in higher education that are offering application-based learning within the AI and robotics space.</p>
<p>Professors at the University of Oxford and the University of Maryland, Baltimore County (pictured above) have collaborated with NVIDIA to create an <a target="_blank" rel="nofollow noopener" href="https://events.rainfocus.com/widget/nvidia/nvidiagtc/sessioncatalog?search=A31535">Edge AI and Robotics Teaching Kit</a> with interactive lectures and hands-on labs.</p>
<p>The course uses the <a href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/" target="_blank" rel="noopener">NVIDIA Jetson Edge AI platform</a> to teach fundamentals of computer vision-based deep neural networks, autonomous navigation, reinforcement learning, security and ethics, and <a href="https://blogs.nvidia.com/blog/what-is-conversational-ai/" target="_blank" rel="noopener">conversational AI</a>. The teaching kit is freely available and open source for educators to integrate into their own courses and curricula.</p>
<p>Moreover, the University of Manchester and Coursera have partnered with NVIDIA to design a <a href="https://www.manchester-robotics.com/" target="_blank" rel="nofollow noopener">series of courses</a> with content on both basic and advanced robotics principles, including camera-based navigation, map-based localization and computer vision.</p>
<p>The final project of the course will involve learning about intelligent navigation and exploration with the PuzzleBot Jetson Edition, a configurable robotics platform equipped with <a href="https://developer.nvidia.com/embedded/jetson-nano-developer-kit" target="_blank" rel="noopener">NVIDIA Jetson Nano</a> for robot vision and intelligent robot navigation.</p>
<p><iframe loading="lazy" title="Camera-based Navigation using NVIDIA Jetson Nano" width="500" height="281" src="https://www.youtube.com/embed/tmZfk0L7FN0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<h2><b>AI &amp; Robotics 101</b></h2>
<p>GTC also offers an abundance of courses that provide basic education on AI and robotics, all of which are meant to spur interactive applications.</p>
<p>Beginners who’d like to delve into the basics of AI should add these sessions to their calendars: “Begin Your AI Journey With NVIDIA Jetson Nano” (<a href="https://events.rainfocus.com/widget/nvidia/nvidiagtc/sessioncatalog?search=A31723" target="_blank" rel="nofollow noopener">A31723</a>) and “Getting Started With the Edge AI and Robotics Teaching Kit” (<a href="https://events.rainfocus.com/widget/nvidia/nvidiagtc/sessioncatalog?search=A31535" target="_blank" rel="nofollow noopener">A31535</a>).</p>
<p>For those interested in learning more about robotics and autonomous machines, GTC includes sessions on how to develop features and increase capabilities of robots with NVIDIA Isaac ROS (<a href="https://events.rainfocus.com/widget/nvidia/nvidiagtc/sessioncatalog?search=A31574" target="_blank" rel="nofollow noopener">A31574</a>), updates on the NVIDIA Isaac Gym environment for high-performance reinforcement learning (<a href="https://events.rainfocus.com/widget/nvidia/nvidiagtc/sessioncatalog?search=A31118" target="_blank" rel="nofollow noopener">A31118</a>), and how to leverage Isaac Sim 2021.2 for simulation manipulation, navigation and synthetic data generation (<a href="https://events.rainfocus.com/widget/nvidia/nvidiagtc/sessioncatalog?search=A31573" target="_blank" rel="nofollow noopener">A31573</a>).</p>
<p>To explore how to apply some of this knowledge to real-world tools and use cases, check out these sessions: “Your First Steps to Design an Intelligent Assistant for Hands-Free Applications” (<a href="https://events.rainfocus.com/widget/nvidia/nvidiagtc/sessioncatalog?search=A31186" target="_blank" rel="nofollow noopener">A31186</a>) and “Train. Adapt. Optimize. Supercharge Your AI Development Workflow and Application Development With NVIDIA TAO Toolkit” (<a href="https://events.rainfocus.com/widget/nvidia/nvidiagtc/sessioncatalog?search=A31193" target="_blank" rel="nofollow noopener">A31193</a>).</p>
<h2><b>Register for GTC</b></h2>
<p>Register for free to learn more about NVIDIA Jetson during <a href="https://reg.rainfocus.com/flow/nvidia/nvidiagtc/digitalreg/login" target="_blank" rel="nofollow noopener">GTC</a>. And watch NVIDIA founder and CEO <a href="https://www.nvidia.com/gtc/keynote/" target="_blank" rel="noopener">Jensen Huang’s GTC keynote address</a> streaming on Nov. 9 and in replay.</p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2021/10/gtc-robotics-education.png.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2021/10/gtc-robotics-education.png-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[GTC Offers Educational Sessions to Meet Exponential Demand for AI and Robotics Skills]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>Wonders of the World: NVIDIA Emerging Chapters Program Spurs AI Innovation Across Developing Countries</title>
		<link>https://blogs.nvidia.com/blog/emerging-chapters/</link>
		
		<dc:creator><![CDATA[Kate Kallot]]></dc:creator>
		<pubDate>Thu, 07 Oct 2021 15:00:24 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning Institute]]></category>
		<category><![CDATA[Developer Program]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[GTC 2021]]></category>
		<category><![CDATA[Inception]]></category>
		<category><![CDATA[Social Impact]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=53169</guid>

					<description><![CDATA[Two artists, if given the same paint set, would create distinct works — each showcasing their own points of view. This is especially likely if the artists were to hail from opposite ends of the world. Similarly, developers across the globe use the same tools to create different AI-based applications, each solving challenges relevant to	<a class="read-more" href="https://blogs.nvidia.com/blog/emerging-chapters/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Two artists, if given the same paint set, would create distinct works — each showcasing their own points of view. This is especially likely if the artists were to hail from opposite ends of the world.</p>
<p>Similarly, developers across the globe use the same tools to create different AI-based applications, each solving challenges relevant to their local community.</p>
<p>Give an <a href="https://developer.nvidia.com/embedded/jetson-nano-developer-kit" target="_blank" rel="noopener">NVIDIA Jetson Nano Developer Kit</a> to an AI enthusiast in San Francisco, for example, and they will create <a href="https://blogs.nvidia.com/blog/yoga-app-among-10-winners-of-hackster-io-jetson-competition/" target="_blank" rel="noopener">an AI-based app that analyzes yoga poses</a>. With that same tool, developers in the <a href="https://docs.google.com/document/d/1riaIciXCMV4uLkw1vKvscSAuqWupGd019IlF781UiMk/edit?usp=sharing" target="_blank" rel="nofollow noopener">tinyML</a> Kenya community are building AI systems for wildlife monitoring and conservation. Innovation is happening in each place, so long as the developers have access to technologies and enablement tools.</p>
<p><a href="https://developer.nvidia.com/emerging-chapters" target="_blank" rel="noopener">NVIDIA Emerging Chapters</a> is a new program that enables local communities in emerging areas to build and scale their AI, data science and graphics projects. It provides technological tools, educational resources and co-marketing opportunities for these developers.</p>
<p>Members of Emerging Chapters have access to training and development opportunities through the <a href="https://www.nvidia.com/en-us/training/" target="_blank" rel="noopener">NVIDIA Deep Learning Institute</a> (DLI). This includes free passes to take select self- or instructor-led courses on AI and data science. Upon course completion, developers can receive an NVIDIA DLI Certificate to highlight their new skills and help advance their careers.</p>
<p>Since the program’s launch this year, 30+ African developer community groups — including seven founded by women — have joined Emerging Chapters, fostering a growing network of AI experts. The program is now expanding to Latin America, the Middle East, South Asia and other emerging markets that are hungry for more AI and related technology.</p>
<h2><b>Reflecting the World’s Diversity</b></h2>
<p>With Emerging Chapters, NVIDIA hopes to help mend what’s called the technology fracture — the gap between developers in the global North and those in emerging markets.</p>
<p>“This program is not about charity, it’s about innovation and business,” said Amulya Vishwanath, strategic program lead on NVIDIA’s emerging areas team. “It’s crucial to get communities in emerging areas access to AI technology, which they can then use to make their own, ensuring the developer community is more reflective of the world’s true diversity.”</p>
<p>By spurring innovation in collaboration with local communities, the program can cultivate AI solutions that are most relevant to grassroots developers and their local ecosystems, while also democratizing the global AI movement.</p>
<p>“With Emerging Chapters, NVIDIA is taking active steps toward positively contributing to the growing trend of technology in emerging markets,” said Michael Young, co-founder of the <a href="https://www.pythonghana.org/" target="_blank" rel="nofollow noopener">Python Ghana</a> community. “NVIDIA helped us bring growing AI engineers from across the country together for live, online training sessions with experts.”</p>
<h2><b>Fueling Innovation in Africa</b></h2>
<p>Africa is an emerging market in which the AI revolution is underway. African developers are using AI and NVIDIA technology, for example, <a href="https://blogs.nvidia.com/blog/ai-revolution-in-africa/" target="_blank" rel="noopener">to maximize crop yields and honor Olympic athletes</a>.</p>
<p>Early members of the Emerging Chapters program include <a href="https://www.deeplearning.ai/breaking-into-ai-juggling-work-projects-and-personal-life-with-kennedy-wangari/" target="_blank" rel="nofollow noopener">DeepLearning.AI Kenya</a>, an education technology company that empowers individuals in the AI workforce; <a href="https://developer.nvidia.com/blog/edge-computing-in-ethiopia-a-quest-for-an-ai-solution/" target="_blank" rel="noopener">NERD Ethiopia</a>, a youth center that provides a hacker space and educational resources for AI research; and <a target="_blank" href="https://www.meetup.com/tiny-ml-enabling-ultra-low-power-ml-at-the-edge-kenya/">tinyML Kenya</a>, a community of machine learning researchers and practitioners.</p>
<p>&#8220;TinyML Kenya joining NVIDIA Emerging Chapters was such a timely, game-changing move for our community,” said Clinton Oduor, the foundation’s lead organizer. “The program has allowed our members to meet industry leaders, learn new skills, earn certifications and solve real-world problems.&#8221;</p>
<p><a href="https://zindi.africa/about" target="_blank" rel="nofollow noopener">Zindi</a>, Africa’s first data science competition platform, is also a member. The organization has an ambassador program made up of data science community leaders from 20+ African countries.</p>
<p>“It&#8217;s great to be able to support people doing such excellent work with the help of the Emerging Chapters program,” said Celina Lee, Zindi’s co-founder and CEO.</p>
<h2><b>Allowing AI Access for All</b></h2>
<p>To bolster individuals who seek to work with AI, HPC, graphics and more, NVIDIA offers a range of opportunities including the <a href="https://developer.nvidia.com/join-nvidia-developer-program" target="_blank" rel="noopener">NVIDIA Developer Program</a> — which has more than 2.5 million members — and NVIDIA <a href="https://www.nvidia.com/en-us/deep-learning-ai/startups/" target="_blank" rel="noopener">Inception</a>, which offers go-to-market support, expertise and technology for AI and data science startups.</p>
<p>In addition, the company’s <a target="_blank" href="https://www.nvidia.com/gtc/">GTC</a> conference, running Nov. 8-11, will feature <a href="https://www.nvidia.com/gtc/sessions/emerging-markets/" target="_blank" rel="noopener">a track focused on emerging markets</a>. The conference is virtual, free and has sessions 24/7.</p>
<p>Over <a href="https://blogs.nvidia.com/blog/developers-emerging-markets-gtc/" target="_blank" rel="noopener">20,000 developers</a> from emerging markets tuned in to learn about AI innovation at our last GTC — and the appetite for developer resources is only growing.</p>
<p>For more on global AI innovation, <a href="https://reg.rainfocus.com/flow/nvidia/nvidiagtc/digitalreg/login?nvid=nv-int-cwmfg-606153" target="_blank" rel="nofollow noopener">register for GTC</a>.</p>
<p>Read more <a href="https://developer.nvidia.com/blog/tag/emerging-chapters/" target="_blank" rel="noopener">stories about NVIDIA Emerging Chapters</a> and <a target="_blank" href="https://developer.nvidia.com/emerging-chapters" rel="noopener">join the program</a>.</p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2021/10/zindi-emerging-chapters-1280x680-1.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2021/10/zindi-emerging-chapters-1280x680-1-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[Wonders of the World: NVIDIA Emerging Chapters Program Spurs AI Innovation Across Developing Countries]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>NVIDIA Chief Scientist Highlights New AI Research in GTC Keynote</title>
		<link>https://blogs.nvidia.com/blog/gtc-china-keynote-dally-ai/</link>
		
		<dc:creator><![CDATA[Rick Merritt]]></dc:creator>
		<pubDate>Tue, 15 Dec 2020 02:00:13 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[Inference]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[NVIDIA Research]]></category>
		<category><![CDATA[NVLink]]></category>
		<category><![CDATA[Ray Tracing]]></category>
		<category><![CDATA[Riva]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=48178</guid>

					<description><![CDATA[NVIDIA researchers are defining ways to make faster AI chips in systems with greater bandwidth that are easier to program, said Bill Dally, NVIDIA’s chief scientist, in a keynote released today for a virtual GTC China event. He described three projects as examples of how the 200-person research team he leads is working to stoke	<a class="read-more" href="https://blogs.nvidia.com/blog/gtc-china-keynote-dally-ai/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>NVIDIA researchers are defining ways to make faster AI chips in systems with greater bandwidth that are easier to program, said Bill Dally, NVIDIA’s chief scientist, in <a target="_blank" href="https://www.nvidia.com/en-us/gtc/keynote/">a keynote</a> released today for a virtual GTC China event.</p>
<p>He described three projects as examples of how the 200-person research team he leads is working to stoke Huang’s Law — the prediction named for NVIDIA CEO Jensen Huang that GPUs will double AI performance every year.</p>
<p>“If we really want to improve computer performance, Huang’s Law is the metric that matters, and I expect it to continue for the foreseeable future,” said Dally, who helped direct research at NVIDIA in AI, ray tracing and fast interconnects.</p>
<figure id="attachment_48180" aria-describedby="caption-attachment-48180" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/12/Huangs-Law-slide-11-jpg.jpg"><img loading="lazy" decoding="async" class="size-large wp-image-48180" src="https://blogs.nvidia.com/wp-content/uploads/2020/12/Huangs-Law-slide-11-jpg-672x386.jpg" alt="Huang's Law slide 11 jpg" width="672" height="386" /></a><figcaption id="caption-attachment-48180" class="wp-caption-text">NVIDIA has more than doubled performance of GPUs on AI inference every year.</figcaption></figure>
<h2><b>An Ultra-Efficient Accelerator</b></h2>
<p>Toward that end, NVIDIA researchers created a tool called MAGNet that generated an AI inference accelerator that hit 100 tera-operations per watt in a simulation. That’s more than an order of magnitude greater efficiency than today’s commercial chips.</p>
<p>MAGNet uses new techniques to orchestrate the flow of information through a device in ways that minimize the data movement that burns most of the energy in today’s chips. The research prototype is implemented as a modular set of tiles so it can scale flexibly.</p>
<p>A separate effort seeks to replace today’s electrical links inside systems with faster optical ones.</p>
<h2><b>Firing on All Photons</b></h2>
<p>“We can see our way to doubling the speed of our <a target="_blank" href="https://www.nvidia.com/en-us/data-center/nvlink/">NVLink</a> [that connects GPUs] and maybe doubling it again, but eventually electrical signaling runs out of gas,” said Dally, who holds more than 120 patents and chaired the computer science department at Stanford before joining NVIDIA in 2009.</p>
<p>The team is collaborating with researchers at Columbia University on ways to harness techniques telecom providers use in their core networks to merge dozens of signals onto a single optical fiber.</p>
<p>Called dense wavelength division multiplexing, it holds the potential to pack multiple terabits per second into links that fit into a single millimeter of space on the side of a chip, more than 10x the density of today’s interconnects.</p>
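The arithmetic behind that kind of claim is simple to sketch: with wavelength multiplexing, aggregate bandwidth is the number of optical carriers times the per-carrier data rate. The channel count and per-channel rate below are illustrative assumptions, not figures quoted in the keynote.

```python
# Back-of-the-envelope DWDM aggregate bandwidth.
# Both numbers are illustrative assumptions, not figures from the talk.
num_wavelengths = 32          # independent optical carriers merged onto one fiber
rate_per_channel_gbps = 100   # per-wavelength data rate, in Gbit/s

aggregate_tbps = num_wavelengths * rate_per_channel_gbps / 1000
print(aggregate_tbps)  # aggregate throughput in Tbit/s on a single fiber
```

Scaling either the carrier count or the per-channel modulation rate multiplies the total, which is how a single fiber can reach multiple terabits per second.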
<p>Besides faster throughput, the optical links enable denser systems. For example, Dally showed a mockup (below) of a future NVIDIA DGX system with more than 160 GPUs.</p>
<figure id="attachment_48183" aria-describedby="caption-attachment-48183" style="width: 400px" class="wp-caption alignleft"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/12/GPU-tray-with-optical-links-slide-73.jpg"><img loading="lazy" decoding="async" class="size-medium wp-image-48183" src="https://blogs.nvidia.com/wp-content/uploads/2020/12/GPU-tray-with-optical-links-slide-73-400x275.jpg" alt="GPU tray with optical links slide 73" width="400" height="275" /></a><figcaption id="caption-attachment-48183" class="wp-caption-text">Optical links help pack dozens of GPUs in a system.</figcaption></figure>
<p>In software, NVIDIA’s researchers have prototyped a new programming system called Legate. It lets developers take a program written for a single GPU and run it on a system of any size — even a giant supercomputer like <a href="https://blogs.nvidia.com/blog/making-selene-pandemic-ai/">Selene</a> that packs thousands of GPUs.</p>
<p>Legate couples a new form of programming shorthand with accelerated software libraries and an advanced runtime environment called Legion. It’s already being put to the test at U.S. national labs.</p>
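The pitch of Legate is that the program itself stays single-node: you write ordinary array code, and the runtime partitions the work across however many GPUs are present. A minimal sketch of that style of program, assuming a NumPy-compatible Legate module (the `legate.numpy` import shown in the comment is the assumed usage; plain NumPy is used here so the sketch runs anywhere):

```python
# Sketch of the Legate programming model: write ordinary single-node NumPy
# code and let the runtime scale it out. With Legate installed you would,
# under that assumption, swap the import for `import legate.numpy as np`
# and launch the unchanged script through the Legate driver.
import numpy as np

def jacobi_step(grid):
    """One Jacobi relaxation sweep over the interior of a 2D grid."""
    interior = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                       grid[1:-1, :-2] + grid[1:-1, 2:])
    result = grid.copy()
    result[1:-1, 1:-1] = interior
    return result

grid = np.zeros((64, 64))
grid[0, :] = 100.0  # fixed hot boundary along the top edge
for _ in range(50):
    grid = jacobi_step(grid)
print(grid[1, 32])  # heat has diffused inward from the hot edge
```

Because every operation is expressed on whole arrays, a distributed runtime can shard the grid and the stencil without any change to the program logic.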
<h2><b>Rendering a Vivid Future</b></h2>
<p>The three research projects make up just one part of Dally’s keynote, which describes NVIDIA’s domain-specific platforms for a variety of industries such as healthcare, self-driving cars and robotics. He also delves into data science, AI and graphics.</p>
<p>“In a few generations our products will produce amazing images in real time using path tracing with <a target="_blank" href="https://www.pbr-book.org/">physically based rendering</a>, and we’ll be able to generate whole scenes with AI,” said Dally.</p>
<p>He showed the first public demonstration that combines NVIDIA’s conversational AI framework called <a target="_blank" href="https://developer.nvidia.com/riva">Riva</a> with <a target="_blank" href="https://www.nvidia.com/en-us/research/ai-playground/">GauGAN</a>, a tool that uses generative adversarial networks to create beautiful landscapes from simple sketches. The demo lets users instantly generate photorealistic landscapes using simple voice commands.</p>
<p>In an interview between recording sessions for the keynote, Dally expressed particular pride for the team’s pioneering work in several areas.</p>
<p>“All our current ray tracing started in NVIDIA Research with prototypes that got our product teams excited. And in 2011, I assigned [NVIDIA researcher] Bryan Catanzaro to work with [Stanford professor] Andrew Ng on a project that became <a target="_blank" href="https://developer.nvidia.com/cuDNN">cuDNN</a>, software that kicked off much of our work in deep learning,” he said.</p>
<h2><b>A First Foothold in Networking</b></h2>
<p>Dally also spearheaded a collaboration that led to the first prototypes of <a target="_blank" href="https://www.nvidia.com/en-us/data-center/nvlink/">NVLink and NVSwitch</a>, interconnects that link GPUs running inside some of the world’s largest supercomputers today.</p>
<p>“The product teams grabbed the work out of our hands before we were ready to let go of it, and now we’re considered one of the most advanced networking companies,” he said.</p>
<p>With his passion for technology, Dally said he often feels like a kid in a candy store. He may hop from helping a group with an AI accelerator one day to helping another team sort through a complex problem in robotics the next.</p>
<p>“I have one of the most fun jobs in the company if not in the world because I get to help shape the future,” he said.</p>
<p>The keynote is just one of more than 220 sessions at <a target="_blank" href="https://www.nvidia.cn/gtc/">GTC China</a>. All the sessions are free and most are conducted in Mandarin.</p>
<h2><b>Panel, Startup Showcase at GTC China</b></h2>
<p>Following the keynote, <a target="_blank" href="https://www.nvidia.com/en-us/gtc/keynote/">a panel of senior NVIDIA executives</a> will discuss how the company’s technologies in AI, data science, healthcare and other fields are being adopted in China.</p>
<p>The event also includes a showcase of a dozen top startups in China, hosted by <a target="_blank" href="https://www.nvidia.com/en-us/deep-learning-ai/startups/">NVIDIA Inception</a>, an acceleration program for AI and data science startups.</p>
<p>Companies participating in GTC China include Alibaba, AWS, Baidu, ByteDance, China Telecom, Dell Technologies, Didi, New H3C Information Technologies, Inspur Electronic Information, Kuaishou, Lenovo, Microsoft, Ping An, Tencent, Tsinghua University and Xiaomi.</p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2020/12/GTC-China-2020-BillDallyKeynotePromotion-Blog-1280x680-1-2.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2020/12/GTC-China-2020-BillDallyKeynotePromotion-Blog-1280x680-1-2-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[NVIDIA Chief Scientist Highlights New AI Research in GTC Keynote]]></media:title>
			<media:description type="html">NVIDIA Chief Scientist Bill Dally delivers GTC China keynote</media:description>
			</media:content>
			</item>
		<item>
		<title>Locomation and Blackshark.ai Innovate in Real and Virtual Dimensions at GTC</title>
		<link>https://blogs.nvidia.com/blog/locomation-blackshark-innovate-gtc/</link>
		
		<dc:creator><![CDATA[Katie Washabaugh]]></dc:creator>
		<pubDate>Tue, 06 Oct 2020 15:32:40 +0000</pubDate>
				<category><![CDATA[Driving]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[NVIDIA DRIVE]]></category>
		<category><![CDATA[Transportation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=47376</guid>

					<description><![CDATA[The NVIDIA DRIVE ecosystem is going multidimensional. During the NVIDIA GPU Technology Conference this week, autonomous trucking startup Locomation and simulation company Blackshark.ai announced technological developments powered by NVIDIA DRIVE. Locomation, a Pittsburgh-based provider of autonomous trucking technology, said it would integrate NVIDIA DRIVE AGX Orin in the upcoming rollout of its platooning system on	<a class="read-more" href="https://blogs.nvidia.com/blog/locomation-blackshark-innovate-gtc/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>The NVIDIA DRIVE ecosystem is going multidimensional.</p>
<p>During the <a target="_blank" href="https://www.nvidia.com/en-us/gtc/">NVIDIA GPU Technology Conference</a> this week, autonomous trucking startup Locomation and simulation company Blackshark.ai announced technological developments powered by NVIDIA DRIVE.</p>
<p><a target="_blank" href="https://locomation.ai/">Locomation</a>, a Pittsburgh-based provider of autonomous trucking technology, said it would integrate <a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-introduces-drive-agx-orin-advanced-software-defined-platform-for-autonomous-machines">NVIDIA DRIVE AGX Orin</a> in the upcoming rollout of its platooning system on public roads in 2022.</p>
<p>Innovating in the virtual world, <a target="_blank" href="https://www.blackshark.ai">Blackshark.ai</a> detailed its toolset to create buildings and landscape assets for simulation environments on <a target="_blank" href="https://www.nvidia.com/en-us/self-driving-cars/drive-constellation/">NVIDIA DRIVE Sim</a>.</p>
<p>Together, these announcements mark milestones in the path toward safer, more efficient autonomous transportation.</p>
<h2><b>Shooting for the Platoon</b></h2>
<p>Locomation recently announced its first commercial system, Autonomous Relay Convoy, which allows one driver to pilot a lead truck while a fully autonomous follower truck operates in tandem.</p>
<p>The ARC system will be deployed with Wilson Logistics, which will operate more than 1,000 Locomation-equipped trucks, powered by NVIDIA DRIVE AGX Orin, starting in 2022.</p>
<p>NVIDIA DRIVE AGX Orin is a highly advanced software-defined platform for autonomous vehicles.  The system features the new Orin system-on-a-chip, which delivers more than 200 trillion operations per second — nearly 7x the performance of NVIDIA’s previous-generation Xavier SoC.</p>
<p>In August, <a target="_blank" href="https://www.businesswire.com/news/home/20200812005021/en/Locomation-Wilson-Logistics-Perform-Regular-Autonomous-Freight-Deliveries-in-Groundbreaking-Pilot-Program">Locomation and Wilson Logistics successfully completed</a> the first-ever on-road pilot program transporting commercial freight using ARC. Two Locomation trucks, hauling Wilson Logistics trailers and freight, were deployed on a 420-mile route along I-84 between Portland, Ore., and Nampa, Idaho. This stretch of interstate has some of the most challenging conditions for truck driving, with sharp curves, inclines and wind gusts.</p>
<p>“We’re moving rapidly toward autonomous trucking commercialization, and NVIDIA DRIVE presents a solution for providing a robust, safety-forward platform for our team to work with,” said Çetin Meriçli, CEO and cofounder of Locomation.</p>
<h2><b>Constructing a New Dimension</b></h2>
<p>While Locomation is deploying autonomous vehicles in the real world, Blackshark.ai is making it easier to create building and landscape assets used to enhance the virtual world on a global scale.</p>
<p>The startup has developed a digital twin platform that uses AI and cloud computing to automatically transform satellite data, aerial images or map and sensor data into building, landscape and infrastructure assets that contribute to a semantic photorealistic 3D environment.</p>
<p>During the <a target="_blank" href="https://www.nvidia.com/en-us/gtc/keynote/">opening GTC keynote</a>, NVIDIA founder and CEO Jensen Huang showcased the technology on NVIDIA DRIVE Sim. DRIVE Sim uses high-fidelity simulation to create a safe, scalable and cost-effective way to bring self-driving vehicles to our roads.</p>
<p>It taps into the computing horsepower of NVIDIA RTX GPUs to deliver a powerful, scalable, cloud-based computing platform capable of generating billions of qualified miles for autonomous vehicle testing.</p>
<p><iframe loading="lazy" title="NVIDIA DRIVE Sim Software Built on NVIDIA Omniverse" width="500" height="281" src="https://www.youtube.com/embed/DuK4ppn0g3A?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>In the demo video, Blackshark’s AI automatically generated the trees and buildings used to reconstruct the city of San Jose in simulation for an immersive, authentic environment.</p>
<p>These latest announcements from Locomation and Blackshark.ai demonstrate the breadth of the DRIVE ecosystem, spanning the real and virtual worlds to push autonomous innovation further.</p>
<p><i>Watch NVIDIA CEO Jensen Huang </i><a target="_blank" href="https://www.youtube.com/watch?v=pzbhU4ttSvM&amp;feature=youtu.be&amp;t=86"><i>recap all the news</i></a><i> from GTC. It’s not too late to get access to hundreds of live and on-demand talks — </i><a target="_blank" href="https://www.nvidia.com/en-us/gtc/"><i>register now</i></a><i> through Oct. 9 using promo code CMB4KN to get 20 percent off.</i></p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2020/10/Locomation.jpg"
			type="image/jpeg"
			width="960"
			height="510"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2020/10/Locomation-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[Locomation and Blackshark.ai Innovate in Real and Virtual Dimensions at GTC]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>AI in the Hand of the Artist</title>
		<link>https://blogs.nvidia.com/blog/ai-art-gallery/</link>
		
		<dc:creator><![CDATA[Brian Caulfield]]></dc:creator>
		<pubDate>Tue, 22 Sep 2020 16:00:08 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[GTC 2020]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=46938</guid>

					<description><![CDATA[Humans are wielding AI to create art, and a virtual exhibit that’s part of NVIDIA’s GPU Technology Conference showcases the stunning results. The AI Art Gallery at NVIDIA GTC features pieces by a broad collection of artists, developers and researchers from around the world who are using AI to push the limits of artistic expression.	<a class="read-more" href="https://blogs.nvidia.com/blog/ai-art-gallery/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Humans are wielding AI to create art, and <a target="_blank" href="https://www.nvidia.com/en-us/gtc/ai-art-gallery/">a virtual exhibit</a> that’s part of NVIDIA’s <a target="_blank" href="https://www.nvidia.com/en-us/gtc/">GPU Technology Conference</a> showcases the stunning results.</p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/gtc/ai-art-gallery/">The AI Art Gallery at NVIDIA GTC</a> features pieces by a broad collection of artists, developers and researchers from around the world who are using AI to push the limits of artistic expression.</p>
<p>When AI is introduced into the artistic process, the artist feeds the machine data and code, explains Heather Schoell, senior art director at NVIDIA, who curated the <a target="_blank" href="https://www.nvidia.com/en-us/gtc/ai-art-gallery/">online exhibit</a>.</p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2020/09/ai-gallery-a.jpg"><img loading="lazy" decoding="async" class="size-full wp-image-46942 aligncenter" src="https://blogs.nvidia.com/wp-content/uploads/2020/09/ai-gallery-a.jpg" alt="" width="558" height="296" /></a></p>
<p>Once the output reveals itself, it’s up to the artist to determine if it stands up to their artistic style and desired message, or if the input needs to be adjusted, according to Schoell.</p>
<p>“The output reflects both the artist’s hand and the medium, in this case data, used for creation,” Schoell says.</p>
<p>The exhibit complements what has become the world’s premier AI conference.</p>
<p>GTC, running Oct. 5-9, will bring together researchers from industry and academia, startups and Fortune 500 companies.</p>
<p>So it’s only natural that artists would be among those putting modern AI to work.</p>
<p>“Through this collection we aim to share how the artist can partner with AI as both an artistic medium and creative collaborator,” Schoell explains.</p>
<p>The artists featured in the <a target="_blank" href="https://www.nvidia.com/en-us/gtc/ai-art-gallery/">AI Art Gallery</a> include:</p>
<ul>
<li><b>Daniel Ambrosi &#8211; </b>“Dreamscapes” fuses computational photography and AI to create a deeply textural environment.</li>
<li><b>Refik Anadol</b> &#8211; “<i>Machine Hallucinations</i>,” by the Turkish-born, Los Angeles-based conceptual artist known for his immersive architectural digital installations, such as a project at New York’s Chelsea Market that used projectors to splash AI-generated images based on New York cityscapes, creating what Anadol called a “machine hallucination.”</li>
<li><b>Sofia Crespo and Dark Fractures</b> &#8211; Work from the Argentina-born artist and Berlin-based studio led by Feileacan McCormick uses GANs and NLP models to generate 3D insects in a virtual, digital space.</li>
<li><b>Scott Eaton &#8211; </b>An artist, educator and creative technologist residing in London combines a deep understanding of human anatomy, traditional art techniques and modern digital tools in his uncanny, figurative artworks.</li>
<li><b>Oxia Palus</b> &#8211; The NVIDIA Inception startup will uncover a new masterpiece that resurrects a hidden sketch beneath a Leonardo da Vinci work and reconstructs the painting style of one of the most famous artists of all time.</li>
<li><b>Anna Ridler &#8211;</b> Three displays showing images of tulips that change based on Bitcoin’s price, created by a U.K. artist and researcher known for her work exploring the intersection of machine learning, nature and history.</li>
<li><b>Helena Sarin</b> &#8211; Using her own drawings, sketches and photographs as datasets, Sarin trains her models to generate new visuals that serve as the basis of her compositions &#8212; in this case with a type of neural network known as a generative adversarial network, or GAN. The Moscow-born artist has embedded 12 of these creations in a book of puns on the acronym GAN.</li>
<li><b>Pindar Van Arman</b> &#8211; Driven by a collection of algorithms programmed to work with — and against — one another, the U.S.-based artist and roboticist’s creation uses a paintbrush, paint and canvas to create portraits that fuse the look and feel of a photo and a handmade sketch.</li>
</ul>
<p>For a closer look, registered GTC attendees can go on a live, personal tour of two of our featured artists’ studios.</p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2020/09/ai-gallery-b.jpg"><img loading="lazy" decoding="async" class="size-full wp-image-46948 aligncenter" src="https://blogs.nvidia.com/wp-content/uploads/2020/09/ai-gallery-b.jpg" alt="" width="558" height="296" /></a></p>
<p>On Thursday, Oct. 8, you can virtually tour Van Arman’s Fort Worth, Texas, studio between 11 a.m. and 12 p.m. Pacific time. And at 2 p.m. Pacific, you can tour Refik Anadol’s Los Angeles studio.</p>
<p>In addition, <a target="_blank" href="https://www.nvidia.com/en-us/gtc/session-catalog/?search.language=1594320459782001LCjF&amp;search=A21879%20a21881&amp;tab.catalogtabfields=1600209910618001TWM3">a pair of panel discussions</a> with AI Art Gallery artists on Thursday, Oct. 8, will explore what led them to connect AI and fine art.</p>
<p>And starting Oct. 5, you can <a target="_blank" href="https://www.nvidia.com/en-us/gtc/session-catalog/?tab.catalogtabfields=1600209910618002Tlxt&amp;search=A22066">tune in to an on-demand GTC session featuring Oxia Palus co-founder George Cann</a>, a PhD candidate in space and climate physics at University College London.</p>
<p><b>Join us at the </b><a target="_blank" href="https://www.nvidia.com/en-us/gtc/ai-art-gallery/"><b>AI Art Gallery</b></a><b>.</b></p>
<p><a target="_blank" href="https://www.nvidia.com/gtc"><b>Register for GTC</b></a><b>. </b></p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2020/09/NVIDIA-GTC20-AI-Art-Gallery-Combo-Press.jpg"
			type="image/jpeg"
			width="1280"
			height="680"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2020/09/NVIDIA-GTC20-AI-Art-Gallery-Combo-Press-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[AI in the Hand of the Artist]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>Le Mans of Silicon Valley: Running Around the Clock, GTC to Feature Top Autonomous Vehicle Experts</title>
		<link>https://blogs.nvidia.com/blog/gtc-feature-top-autonomous-vehicle-experts/</link>
		
		<dc:creator><![CDATA[Katie Washabaugh]]></dc:creator>
		<pubDate>Thu, 17 Sep 2020 19:26:03 +0000</pubDate>
				<category><![CDATA[Driving]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[Transportation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=46884</guid>

					<description><![CDATA[Strap in, because this fall you can access the greatest minds in autonomous driving from around the world at any hour, from anywhere. GTC returns Oct. 5, running 24 hours a day through Oct. 9 to deliver the best in AI sessions, training and more from the foremost experts. NVIDIA founder and CEO Jensen Huang	<a class="read-more" href="https://blogs.nvidia.com/blog/gtc-feature-top-autonomous-vehicle-experts/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Strap in, because this fall you can access the greatest minds in autonomous driving from around the world at any hour, from anywhere.</p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/gtc/">GTC returns Oct. 5</a>, running 24 hours a day through Oct. 9 to deliver the best in AI sessions, training and more from the foremost experts. NVIDIA founder and CEO Jensen Huang will kick off the conference with a keynote video highlighting the latest innovations in GPU computing.</p>
<p>With digital networking events and live sessions, GTC provides the unique opportunity to interact with researchers, engineers, developers and technologists in the autonomous vehicle space, as well as healthcare, robotics, graphics and numerous other industries.</p>
<p>Here’s a sneak peek at the automotive offerings at this year’s event.</p>
<h2><b>End-to-End Experts</b></h2>
<p>Global automakers, suppliers, startups and researchers will present their most recent work in <a target="_blank" href="https://www.nvidia.com/en-us/gtc/session-catalog/">live and recorded sessions</a>. These talks will cover every step of the autonomous vehicle development process, from training, to testing and validation, to deployment at scale.</p>
<ul>
<li><b>Alexander Amini, a doctoral candidate at MIT, </b>details a new end-to-end learning strategy for training self-driving neural networks, from perception to control.</li>
<li><b>Vijitha Chekuri, business strategy director at Microsoft, </b>discusses how hyperscale cloud and cutting-edge compute can be used for comprehensive autonomous vehicle simulation, training and validation.</li>
<li><b>Chuck Price, chief product officer at TuSimple, </b>outlines the future <a href="https://blogs.nvidia.com/blog/tusimple-navistar-build-autonomous-trucks-nvidia-drive/">deployment of autonomous trucks</a>, including capabilities, market expectations and the unique challenges in developing a self-driving commercial vehicle.</li>
<li><b>Neda Cvijetic, senior manager of autonomous vehicles at NVIDIA,</b> applies an engineering focus to widely acknowledged <a target="_blank" href="https://www.youtube.com/watch?v=ftsUg5VlzIE">autonomous vehicle challenges</a>, and explains how NVIDIA is tackling them.</li>
<li><b>Karl Greb, safety engineering director at NVIDIA, </b>describes the key elements of the processes, methodologies and technologies behind the safe and robust <a target="_blank" href="https://developer.nvidia.com/drive">NVIDIA DRIVE platform</a>.</li>
</ul>
<h2><b>Round-the-Clock Training</b></h2>
<p>In addition to the virtual keynote and sessions, GTC will offer hands-on deep learning training across a variety of time zones.</p>
<p>Hosted by the <a target="_blank" href="https://www.nvidia.com/en-us/deep-learning-ai/education/">NVIDIA Deep Learning Institute</a>, these courses will provide a foundation for developing AI applications, including autonomous driving. <a href="https://blogs.nvidia.com/blog/gtc-dli-courses/">Topics include</a> the fundamentals of deep learning, natural language processing and deep learning for multiple GPUs.</p>
<p><a target="_blank" href="https://www.nvidia.com/en-us/gtc/">Register today</a> for GTC Early Bird pricing, available for just $49 through Friday, Sept. 25. DLI training — which includes competency certification for many sessions — can be added to a Digital Conference Pass for just $99 per session.</p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2020/09/GTCF_blog.jpg"
			type="image/jpeg"
			width="960"
			height="510"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2020/09/GTCF_blog-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[Le Mans of Silicon Valley: Running Around the Clock, GTC to Feature Top Autonomous Vehicle Experts]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>While the World Works from Home, NVIDIA’s AV Fleet Drives in the Data Center</title>
		<link>https://blogs.nvidia.com/blog/nvidia-fleet-drives-in-the-data-center/</link>
		
		<dc:creator><![CDATA[Zvi Greenstein]]></dc:creator>
		<pubDate>Tue, 19 May 2020 16:00:15 +0000</pubDate>
				<category><![CDATA[Driving]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[NVIDIA DRIVE]]></category>
		<category><![CDATA[NVIDIA DRIVE Sim]]></category>
		<category><![CDATA[Omniverse]]></category>
		<category><![CDATA[Synthetic Data Generation]]></category>
		<category><![CDATA[Transportation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=45638</guid>

					<description><![CDATA[As much of the world continues to conduct business from home, NVIDIA’s autonomous test vehicles are hard at work in the cloud. During the GTC 2020 keynote, NVIDIA CEO Jensen Huang demonstrated how NVIDIA DRIVE technology is being developed and tested in simulation. While physical testing is temporarily paused, the cloud-based NVIDIA DRIVE Constellation platform	<a class="read-more" href="https://blogs.nvidia.com/blog/nvidia-fleet-drives-in-the-data-center/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>As much of the world continues to conduct business from home, NVIDIA’s autonomous test vehicles are hard at work in the cloud.</p>
<p>During the <a target="_blank" href="https://www.nvidia.com/en-us/gtc/keynote/">GTC 2020 keynote</a>, NVIDIA CEO Jensen Huang demonstrated how NVIDIA DRIVE technology is being developed and tested in simulation. While physical testing is temporarily paused, the cloud-based <a target="_blank" href="https://www.nvidia.com/en-us/self-driving-cars/drive-constellation/">NVIDIA DRIVE Constellation</a> platform makes it possible to dispatch virtual vehicles in virtual environments to continue making great progress in self-driving technology.</p>
<p><iframe loading="lazy" title="NVIDIA DRIVE Sim — Autonomous Urban and Highway Drive Around Silicon Valley" width="500" height="281" src="https://www.youtube.com/embed/Ck7eXSkD72M?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>In the video demonstration, a virtual NVIDIA BB8 test vehicle drives near NVIDIA headquarters in Silicon Valley, traveling through highways and urban streets &#8212; all in simulation. The 17-mile loop shows the NVIDIA DRIVE AV software navigating the roadways, pedestrians and traffic in a highly accurate replica environment.</p>
<h2><b>Data Center Proving Ground</b></h2>
<p>NVIDIA DRIVE Constellation is a cloud-based simulation platform, designed from the ground up to support the development and validation of autonomous vehicles. The data center-based platform consists of two side-by-side servers.</p>
<p>The first server uses NVIDIA GPUs running DRIVE Sim software and generates the sensor output from the virtual car driving in a virtual world. The second server contains the actual vehicle computer, which processes the simulated sensor data by running the exact same <a target="_blank" href="https://www.nvidia.com/en-us/self-driving-cars/drive-platform/software/">DRIVE AV and DRIVE IX</a> software that’s deployed in the real car.</p>
<p>The driving decisions from the second server are fed back into the first, enabling real-time, bit-accurate, hardware-in-the-loop development and testing.</p>
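<p>The feedback loop between the two servers can be sketched in a few lines (all class and function names here are hypothetical stand-ins for illustration; the real platform exchanges full sensor frames between physical servers in real time):</p>

```python
# Simplified sketch of a two-server, hardware-in-the-loop cycle.
# Class and function names are hypothetical, not the DRIVE Constellation API.

class SimServer:
    """Server 1: renders the virtual world and produces simulated sensor data."""
    def __init__(self):
        self.ego_state = {"x": 0.0, "speed": 0.0}

    def step(self, controls, dt=0.05):
        # Advance the virtual vehicle using the last driving decisions.
        self.ego_state["speed"] += controls["throttle"] * dt
        self.ego_state["x"] += self.ego_state["speed"] * dt
        # In the real platform this would be camera/lidar/radar frames.
        return {"sensor_frame": dict(self.ego_state)}

class VehicleComputer:
    """Server 2: the in-car computer running the AV software stack."""
    def drive(self, sensor_data):
        # Placeholder decision logic standing in for the full AV stack.
        speed = sensor_data["sensor_frame"]["speed"]
        return {"throttle": 1.0 if speed < 10.0 else 0.0}

sim, car = SimServer(), VehicleComputer()
controls = {"throttle": 0.0}
for _ in range(100):                 # 100 steps at 20 Hz = 5 simulated seconds
    sensors = sim.step(controls)     # server 1 -> server 2
    controls = car.drive(sensors)    # server 2 -> server 1 (closed loop)
```

<p>The point of the closed loop is that the vehicle computer never knows it is being fed simulated data, which is what makes the testing bit-accurate.</p>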
<figure id="attachment_45680" aria-describedby="caption-attachment-45680" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/05/image6.png"><img loading="lazy" decoding="async" class="size-large wp-image-45680" src="https://blogs.nvidia.com/wp-content/uploads/2020/05/image6-672x378.png" alt="" width="672" height="378" /></a><figcaption id="caption-attachment-45680" class="wp-caption-text">DRIVE Constellation is composed of two side-by-side servers enabling bit-accurate, hardware-in-the-loop testing.</figcaption></figure>
<p>The system is designed to be deployed in a data center as a scalable virtual fleet. This provides development engineers with a vehicle on demand, and gives them the ability to conduct testing at scale. It also makes it possible to consistently test rare and dangerous scenarios that are difficult or impossible to encounter in the real world.</p>
<h2><b>Development and Testing from End to End</b></h2>
<p>Building an autonomous vehicle requires testing at every level — starting at subsystems and continuing all the way to full vehicle integration tests. DRIVE Constellation enables this type of end-to-end development and testing for autonomous vehicles in simulation, similar to developing a physical car.</p>
<p>End-to-end tests ensure timing and performance accuracy as well as accurate modeling of the complex interdependency of different systems in autonomous vehicle software.</p>
<figure id="attachment_45681" aria-describedby="caption-attachment-45681" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/05/image2.png"><img loading="lazy" decoding="async" class="size-large wp-image-45681" src="https://blogs.nvidia.com/wp-content/uploads/2020/05/image2-672x281.png" alt="" width="672" height="281" /></a><figcaption id="caption-attachment-45681" class="wp-caption-text">DRIVE Sim creates a digital twin of the real world to provide a realistic driving environment.</figcaption></figure>
<p>Achieving this level of fidelity at scale is a major undertaking. The environment, traffic behavior, sensor inputs and vehicle dynamics must appear, act and feed into the car computer just as they would in the real world.</p>
<p>This requires multiple GPUs to generate <a href="https://blogs.nvidia.com/blog/what-is-synthetic-data/">synthetic data</a> in sync with precise timing. The vehicle software and hardware signals and interfaces must be replicated in simulation — and everything has to run in real time.</p>
<h2><b>Simulating Silicon Valley</b></h2>
<p>Comprehensive simulation starts with the environment. To accurately recreate the Silicon Valley driving loop, <a target="_blank" href="https://www.3d-mapping.de/en/">3D Mapping</a>, a member of the NVIDIA DRIVE ecosystem, scanned the roadways to within 5 centimeters of accuracy. The raw scanned data was then processed into a dataset format known as OpenDRIVE.</p>
<p>From there, NVIDIA developed a content creation pipeline to generate a highly accurate 3D environment using the <a target="_blank" href="https://developer.nvidia.com/nvidia-omniverse">NVIDIA Omniverse</a> collaboration platform. The environment includes accurate road networks and road markings. <a target="_blank" href="https://www.nvidia.com/en-us/design-visualization/technologies/material-definition-language/">Material properties</a> are also applied so that the environment interacts with light rays, radio waves and lidar beams just as the physical world interacts with real sensors.</p>
<figure id="attachment_45682" aria-describedby="caption-attachment-45682" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/05/image1.png"><img loading="lazy" decoding="async" class="size-large wp-image-45682" src="https://blogs.nvidia.com/wp-content/uploads/2020/05/image1-672x281.png" alt="" width="672" height="281" /></a><figcaption id="caption-attachment-45682" class="wp-caption-text">DRIVE Sim allows end-to-end testing, including in-car visualization.</figcaption></figure>
<figure id="attachment_45683" aria-describedby="caption-attachment-45683" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/05/image3.png"><img loading="lazy" decoding="async" class="size-large wp-image-45683" src="https://blogs.nvidia.com/wp-content/uploads/2020/05/image3-672x281.png" alt="" width="672" height="281" /></a><figcaption id="caption-attachment-45683" class="wp-caption-text">DRIVE Sim provides a wide range of light and weather conditions for testing AV software.</figcaption></figure>
<h2><b>Recreating Sensor Data</b></h2>
<p>With an accurate environment in place, high-fidelity development and testing next requires accurately generated sensor data. The sensor models include those typically found on an autonomous test vehicle, such as camera, lidar, radar and inertial measurement unit. DRIVE Sim provides a flexible sensor pipeline and APIs that allow configuring sensors to match real-world vehicle architectures.</p>
<p>For camera data, the image pipeline starts by rendering an HDR image that is warped according to the lens properties of the camera used on the vehicle. Exposure control, black and white level balancing, and color grading are applied to the image to match the sensor profile. Finally, the pixel data is converted to its native output format using a sensor-specific encoder.</p>
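<p>The stages just described can be sketched as a simple chain (a toy illustration with stand-in numbers and function bodies, not the actual DRIVE Sim camera models):</p>

```python
# Hypothetical sketch of the camera pipeline described above:
# HDR render -> lens warp -> exposure/levels/color -> sensor-native encode.

def render_hdr(scene):
    # Stand-in for the ray-traced HDR render (linear light values).
    return [[0.01, 0.5, 4.0], [0.02, 1.0, 8.0]]

def apply_lens_warp(image, distortion=0.0):
    # A real model would remap pixels per the camera's lens distortion
    # profile; identity here for brevity.
    return image

def tone_map(image, exposure=0.25, black=0.0, white=1.0):
    # Exposure control plus black/white level balancing.
    def level(v):
        v = v * exposure
        return min(max((v - black) / (white - black), 0.0), 1.0)
    return [[level(v) for v in row] for row in image]

def encode_native(image, bits=12):
    # Sensor-specific encoder: quantize to the sensor's raw bit depth.
    scale = (1 << bits) - 1
    return [[round(v * scale) for v in row] for row in image]

frame = encode_native(tone_map(apply_lens_warp(render_hdr(scene=None))))
```

<p>Each stage is independently configurable, which is what lets the simulated output be matched to a specific real camera's profile.</p>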
<p>In addition to camera models, DRIVE Sim provides physically based lidar and radar sensors using <a href="https://blogs.nvidia.com/blog/whats-difference-between-ray-tracing-rasterization/">ray tracing</a>. NVIDIA RTX GPUs enable DRIVE Sim to run highly computationally intensive radar and lidar models in real time.</p>
<figure id="attachment_45691" aria-describedby="caption-attachment-45691" style="width: 1080px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/05/auto-web-drive-sim-radar-gif.gif"><img loading="lazy" decoding="async" class="size-full wp-image-45691" src="https://blogs.nvidia.com/wp-content/uploads/2020/05/auto-web-drive-sim-radar-gif.gif" alt="" width="1080" height="608" /></a><figcaption id="caption-attachment-45691" class="wp-caption-text">DRIVE Constellation provides powerful RTX GPUs that allow real-time rendering of sensors using ray tracing. The scene above shows the combined returns of eight radars being rendered in real time for the AV stack.</figcaption></figure>
<h2><b>Modeling Vehicle Behavior</b></h2>
<p>Finally, vehicle models are critical for accurate simulation. As control signals — steering, acceleration and braking — are sent to the in-vehicle computer, the car must respond just as it would in the physical world.</p>
<p>To do so, the simulation platform must recreate motion properly, including details such as interaction with the road surface. Vehicle models in DRIVE Sim are handled using a plugin system with the included PhysX models or third-party vehicle dynamics models from NVIDIA DRIVE ecosystem partners such as <a href="https://blogs.nvidia.com/blog/drive-constellation-carsim-vehicle-model/">Mechanical Simulation</a> or <a target="_blank" href="https://press.ipg-automotive.com/press-release/article/ipg-automotive-collaborating-with-nvidia-on-drive-constellation-cloud-based-av-simulation/">IPG</a>.</p>
<p>Vehicle dynamics also play a key role in accurate sensor data generation. As the vehicle operates, the position and pose of the vehicle changes significantly, affecting a sensor’s viewpoint. For example, the forward-facing cameras will pitch downward when a car is braking. Modeling vehicle dynamics correctly is important to generating sensor data properly.</p>
<p>By accurately simulating each of these components — environment, sensors, vehicle dynamics — on a single, end-to-end platform, NVIDIA DRIVE Constellation and DRIVE Sim are critical pieces to a comprehensive development and testing pipeline. They enable NVIDIA and its partners to work toward safer and more efficient autonomous vehicles as physical fleets remain in the garage.</p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2020/05/Endeavor_hires.jpg"
			type="image/jpeg"
			width="960"
			height="510"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2020/05/Endeavor_hires-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[While the World Works from Home, NVIDIA’s AV Fleet Drives in the Data Center]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>New Auto Industry Entrants Develop Innovative Vehicles on Scalable AI Platform</title>
		<link>https://blogs.nvidia.com/blog/startup-vehicles-drive-agx/</link>
		
		<dc:creator><![CDATA[Katie Washabaugh]]></dc:creator>
		<pubDate>Thu, 14 May 2020 12:58:04 +0000</pubDate>
				<category><![CDATA[Driving]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[NVIDIA DRIVE]]></category>
		<category><![CDATA[Transportation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=45552</guid>

					<description><![CDATA[NVIDIA DRIVE AGX is giving automotive industry startups an AI-powered boost. During GTC Digital, electric and autonomous vehicle startups Pony.ai, Canoo and Faraday Future announced they are developing vehicles using the NVIDIA DRIVE AGX compute platform. The high-performance, energy-efficient platform enables automated and autonomous driving across all levels for robust, software-defined vehicle development. These companies	<a class="read-more" href="https://blogs.nvidia.com/blog/startup-vehicles-drive-agx/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p><a target="_blank" href="https://www.nvidia.com/en-us/self-driving-cars/drive-platform/hardware/">NVIDIA DRIVE AGX</a> is giving automotive industry startups an AI-powered boost.</p>
<p>During GTC Digital, electric and autonomous vehicle startups Pony.ai, Canoo and Faraday Future announced they are developing vehicles using the NVIDIA DRIVE AGX compute platform. The high-performance, energy-efficient platform enables automated and autonomous driving across all levels for robust, software-defined vehicle development.</p>
<p>These companies are joining a wide, <a target="_blank" href="https://www.nvidia.com/en-us/self-driving-cars/partners/">international ecosystem</a> of automakers, tier 1 suppliers, truck makers, sensor suppliers, robotaxi companies and software startups developing on NVIDIA DRIVE.</p>
<p>By selecting an open and scalable platform, the DRIVE ecosystem is developing autonomous vehicles that are always improving, with over-the-air update capabilities, for a safer, more efficient transportation future.</p>
<h2><b>Redefining Mobility and Delivery</b></h2>
<p>By augmenting the human driver with AI, autonomous driving technology promises to significantly improve everyday mobility and logistics.</p>
<p><b>Pony.ai</b>, an autonomous driving technology company, is developing its upcoming robotaxi fleet on NVIDIA DRIVE AGX Pegasus. The company has been operating autonomous ride-hailing test vehicles since 2018, in California and China.</p>
<p>In April, Pony.ai began providing <a target="_blank" href="https://www.reuters.com/article/us-health-coronavirus-pony-ai/toyota-backed-pony-ai-to-offer-autonomous-delivery-service-in-california-idUSKBN21Y3GK">autonomous delivery services</a> in Irvine, Calif., to help those in the area sheltering in place due to COVID-19.</p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2020/05/Screen-Shot-2020-05-13-at-10.43.02-AM.png"><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-45553" src="https://blogs.nvidia.com/wp-content/uploads/2020/05/Screen-Shot-2020-05-13-at-10.43.02-AM-672x427.png" alt="" width="672" height="427" /></a></p>
<p>The startup said it will leverage the DRIVE AGX Pegasus self-driving platform to meet the massive computing demands required to bring robotaxis to market. The AI compute platform achieves 320 trillion operations per second (TOPS) of deep learning and integrates two NVIDIA Xavier processors and two NVIDIA Turing Tensor Core GPUs.</p>
<p>With the ability to process a range of redundant and diverse deep neural networks simultaneously, Pony.ai can focus on developing safe and sustainable mobility technology for passenger travel and delivery.</p>
<h2><b>A New Vision for Personal Vehicles</b></h2>
<p>Electric vehicle startup <b>Canoo</b> has unveiled a sleek EV resembling a futuristic take on the iconic Volkswagen Microbus. The vehicles, purpose-built for a shared mobility system, will go into production in late 2021.</p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2020/05/Screen-Shot-2020-05-12-at-6.33.08-PM.png"><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-45554" src="https://blogs.nvidia.com/wp-content/uploads/2020/05/Screen-Shot-2020-05-12-at-6.33.08-PM-672x377.png" alt="" width="672" height="377" /></a></p>
<p>Canoo’s vehicles will feature AI-assisted driving features powered by NVIDIA DRIVE AGX Xavier. The compute platform delivers 30 TOPS of performance for object detection and sensor fusion, running state-of-the-art algorithms to provide cross-traffic alerts, blind spot detection and pedestrian detection, as well as convenience features such as adaptive cruise control and lane-centering control.</p>
<p>The software-defined DRIVE AGX Xavier also allows for more advanced features, like auto lane change, traffic light recognition and evasive steering, to be introduced when they become available.</p>
<p>Luxury EV maker <b>Faraday Future</b> also announced this week it will develop its upcoming FF91 vehicle using DRIVE AGX Xavier. With high-performance, energy-efficient compute at its core, the FF91 incorporates more than 36 sensors for advanced autonomous driving capabilities. The flagship EV is expected to begin deliveries by the end of this year.</p>
<p><a href="https://blogs.nvidia.com/wp-content/uploads/2020/05/Screen-Shot-2020-05-12-at-6.34.29-PM.png"><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-45555" src="https://blogs.nvidia.com/wp-content/uploads/2020/05/Screen-Shot-2020-05-12-at-6.34.29-PM-672x376.png" alt="" width="672" height="376" /></a></p>
<p>By developing on the scalable DRIVE AGX platform, these startups, as well as the entire DRIVE ecosystem, can continue to build more advanced features and continuously deliver truly intelligent transportation.</p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2020/05/Ecosystem.jpg"
			type="image/jpeg"
			width="960"
			height="510"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2020/05/Ecosystem-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[New Auto Industry Entrants Develop Innovative Vehicles on Scalable AI Platform]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
		<item>
		<title>Step Inside Our AI Garage: NVIDIA Experts Present Insights into Self-Driving Software and Infrastructure</title>
		<link>https://blogs.nvidia.com/blog/gtc-digital-self-driving-ai-infrastructure/</link>
		
		<dc:creator><![CDATA[Katie Washabaugh]]></dc:creator>
		<pubDate>Tue, 07 Apr 2020 20:56:59 +0000</pubDate>
				<category><![CDATA[Driving]]></category>
		<category><![CDATA[GTC]]></category>
		<category><![CDATA[NVIDIA DRIVE]]></category>
		<category><![CDATA[Transportation]]></category>
		<guid isPermaLink="false">https://blogs.nvidia.com/?p=45266</guid>

					<description><![CDATA[Intelligent vehicles require intelligent development. That’s why NVIDIA has built a complete AI-powered portfolio — from data centers to in-vehicle computers — that enables software-defined autonomous vehicles. And this month during GTC Digital, we’re providing an inside look at how this development process works, plus how we’re approaching safer, more efficient transportation. Autonomous vehicles must	<a class="read-more" href="https://blogs.nvidia.com/blog/gtc-digital-self-driving-ai-infrastructure/">
		Read Article		<span data-icon="y"></span>
	</a>
	]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div><p>Intelligent vehicles require intelligent development.</p>
<p>That’s why NVIDIA has built a complete AI-powered portfolio — from data centers to in-vehicle computers — that enables software-defined autonomous vehicles. And this month during <a target="_blank" href="https://www.nvidia.com/en-us/gtc/">GTC Digital</a>, we’re providing an inside look at how this development process works, plus how we’re approaching safer, more efficient transportation.</p>
<p>Autonomous vehicles must be able to operate in thousands of conditions around the world to be truly driverless. The key to reaching this level of capability is mountains of data.</p>
<p>To put that in perspective, a fleet of just 50 vehicles driving six hours a day generates about 1.6 petabytes of sensor data a day. If all that data were stored on standard 1GB flash drives, they’d cover more than 100 football fields. This data must then be curated and labeled to train the deep neural networks (DNNs) that will run in the car, performing a variety of dedicated functions, such as object detection and localization.</p>
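<p>Those figures are easy to sanity-check (the 50-vehicle, six-hour and 1.6-petabyte numbers come from the text above; the per-vehicle data rate is derived here for illustration):</p>

```python
# Sanity-check the fleet sensor-data figures quoted above.
vehicles = 50
hours_per_day = 6
petabyte = 1e15          # bytes
daily_data = 1.6 * petabyte

# Implied sustained data rate per vehicle while driving:
per_vehicle_rate = daily_data / (vehicles * hours_per_day * 3600)  # bytes/s
print(round(per_vehicle_rate / 1e9, 2), "GB/s per vehicle")  # ~1.48 GB/s

# Number of 1 GB flash drives needed for one day of fleet data:
drives_needed = daily_data / 1e9
print(f"{drives_needed:,.0f} drives per day")  # ~1.6 million
```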
<p>The infrastructure to train and test this software must include high-performance supercomputers to handle these enormous data needs. To run efficiently, the system must be able to intelligently curate and organize this data. Finally, it must be traceable — making it easy to find and fix bugs in the process — and repeatable, going over the same scenario over and over again to ensure a DNN’s proficiency.</p>
<p>As part of the GTC Digital series, we present this complete development and training infrastructure as well as some of the DNNs it has produced, driving progress toward deploying the car of the future.</p>
<h2><b>Born and Raised in the Data Center</b></h2>
<p>While today’s vehicles are put together on the factory floor assembly line, autonomous vehicles are born in the data center. In a GTC <a target="_blank" href="https://developer.nvidia.com/gtc/2020/video/s22355">digital session</a>, Clément Farabet, vice president of AI Infrastructure at NVIDIA, details this high-performance, end-to-end platform for autonomous vehicle development.</p>
<figure id="attachment_45267" aria-describedby="caption-attachment-45267" style="width: 150px" class="wp-caption alignright"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/04/Clement_4802.jpg"><img loading="lazy" decoding="async" class="wp-image-45267 size-thumbnail" src="https://blogs.nvidia.com/wp-content/uploads/2020/04/Clement_4802-150x150.jpg" alt="Clément Farabet, NVIDIA VP of AI Infrastructure" width="150" height="150" /></a><figcaption id="caption-attachment-45267" class="wp-caption-text">Clément Farabet, NVIDIA VP of AI Infrastructure</figcaption></figure>
<p>The NVIDIA internal AI infrastructure includes NVIDIA DGX servers that store and process the petabytes of driving data. For comprehensive training, developers must work with five to 10 billion frames to develop and then evaluate a DNN’s performance.</p>
<p>High-performance data center GPUs help speed up the time it takes to process this data. In addition, Farabet’s team optimizes development times using advanced learning methods such as <a href="https://blogs.nvidia.com/blog/what-is-active-learning/">active learning</a>.</p>
<p>Rather than rely solely on humans to curate and label driving data for DNN training, active learning makes it possible for the DNN to choose the data it needs to learn from. A dedicated neural network goes through a pool of frames, flagging those in which it demonstrates uncertainty. The flagged frames are then labeled manually and used to train the DNN, ensuring that it’s learning from the exact data that’s new or confusing.</p>
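<p>That loop can be sketched in a few lines (toy uncertainty score and a hypothetical <code>human_label</code> helper; the real pipeline scores recorded camera frames with a dedicated DNN):</p>

```python
import random

# Minimal active-learning sketch of the curation loop described above.
# The model and uncertainty score are stand-ins, not NVIDIA's pipeline.

def uncertainty(model, frame):
    # Stand-in score in [0, 1]; a real system might use softmax entropy
    # or disagreement between ensemble members. Seeded for determinism.
    random.seed(frame)
    return random.random()

def human_label(frame):
    # Hypothetical stand-in for a manual labeling step.
    return f"label-for-{frame}"

def active_learning_round(model, unlabeled_frames, budget=10, threshold=0.8):
    # 1. The network flags frames it is uncertain about ...
    flagged = [f for f in unlabeled_frames if uncertainty(model, f) > threshold]
    # 2. ... keeps only the most uncertain ones, up to a labeling budget ...
    flagged.sort(key=lambda f: -uncertainty(model, f))
    to_label = flagged[:budget]
    # 3. ... and those are labeled manually and added to the training set.
    labels = [human_label(f) for f in to_label]
    return to_label, labels

selected, labels = active_learning_round(model=None,
                                         unlabeled_frames=range(1000))
```

<p>The payoff is that labeling effort is spent only where the network is weakest, rather than uniformly across millions of mostly redundant frames.</p>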
<figure id="attachment_45268" aria-describedby="caption-attachment-45268" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/04/Constellation_rack.png"><img loading="lazy" decoding="async" class="wp-image-45268 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2020/04/Constellation_rack-672x278.png" alt="High-performance data center GPUs enable developers to train, test and validate self-driving DNNs at scale." width="672" height="278" /></a><figcaption id="caption-attachment-45268" class="wp-caption-text">High-performance data center GPUs enable developers to train, test and validate self-driving DNNs at scale.</figcaption></figure>
<p>Once trained, these DNNs can then be tested and validated on the <a href="https://blogs.nvidia.com/blog/drive-constellation-now-available/">NVIDIA DRIVE Constellation</a> simulation platform. The cloud-based solution enables millions of miles to be driven in virtual environments across a broad range of scenarios — from routine driving to rare or even dangerous situations — with greater efficiency, cost-effectiveness and safety than what is possible in the real world.</p>
<p>DRIVE Constellation’s high-fidelity simulation ensures these DNNs can be tested over and over, in every possible scenario and every possible condition before operating on public roads.</p>
<p>When combined with data center training, simulation allows developers to constantly improve upon their software in an automated, traceable and repeatable development process.</p>
<ul>
<li>Watch on-demand: <a target="_blank" href="https://developer.nvidia.com/gtc/2020/video/s22355">NVIDIA&#8217;s AI Infrastructure for Self-Driving Cars</a></li>
</ul>
<h2><b>DNNs at the Edge</b></h2>
<p>Once trained and validated, these DNNs can then operate in the car.</p>
<p>During GTC Digital, Neda Cvijetic, NVIDIA senior manager of autonomous vehicles and host of the <a target="_blank" href="https://www.nvidia.com/en-us/self-driving-cars/drive-labs/">DRIVE Labs video series</a>, gave an <a target="_blank" href="https://developer.nvidia.com/gtc/2020/video/s22159">inside look</a> at a sampling of self-driving DNNs we’ve developed.</p>
<figure id="attachment_45269" aria-describedby="caption-attachment-45269" style="width: 150px" class="wp-caption alignright"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/04/auto-feb-newsletter-email-thumbnail-neda-480x480.jpg"><img loading="lazy" decoding="async" class="wp-image-45269 size-thumbnail" src="https://blogs.nvidia.com/wp-content/uploads/2020/04/auto-feb-newsletter-email-thumbnail-neda-480x480-150x150.jpg" alt="Neda Cvijetic, NVIDIA senior manager of autonomous vehicles" width="150" height="150" /></a><figcaption id="caption-attachment-45269" class="wp-caption-text">Neda Cvijetic, NVIDIA senior manager of autonomous vehicles</figcaption></figure>
<p>Autonomous vehicles run an array of DNNs covering perception, mapping and localization to operate safely. To humans, these tasks seem straightforward; however, they’re all complex processes that require intelligent approaches to be performed successfully.</p>
<p>For example, to classify road objects, pedestrians and drivable space, one DNN uses a process known as <a href="https://blogs.nvidia.com/blog/drive-labs-panoptic-segmentation/">panoptic segmentation</a>, which labels every pixel in a scene while also distinguishing individual object instances.</p>
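<p>To make the idea concrete, here is a minimal, illustrative sketch of how a panoptic output can be assembled from a semantic map ("stuff" such as road and sky) and a set of instance masks ("things" such as cars). This is not NVIDIA's implementation; the class ids and the common <code>class_id * 1000 + instance_id</code> encoding are assumptions for illustration.</p>

```python
import numpy as np

# Illustrative-only panoptic fusion: combine a per-pixel semantic map
# ("stuff" classes) with instance masks ("thing" classes) so every
# pixel ends up with both a class and, for things, an instance id.

STUFF = {0: "road", 1: "sky"}          # background classes (hypothetical ids)
THINGS = {2: "car", 3: "pedestrian"}   # countable object classes (hypothetical ids)

def panoptic_merge(semantic, instance_masks):
    """semantic: (H, W) int array of class ids.
    instance_masks: list of (class_id, (H, W) bool mask) pairs."""
    # Encode each pixel as class_id * 1000 + instance_id, a common scheme.
    panoptic = semantic.astype(np.int64) * 1000
    for inst_id, (cls, mask) in enumerate(instance_masks, start=1):
        panoptic[mask] = cls * 1000 + inst_id  # things overwrite stuff
    return panoptic

semantic = np.zeros((4, 4), dtype=int)   # everything starts as road (class 0)
semantic[0] = 1                          # top row is sky
car_mask = np.zeros((4, 4), dtype=bool)
car_mask[2:, 1:3] = True                 # one car occupies a 2x2 patch
panoptic = panoptic_merge(semantic, [(2, car_mask)])
```

<p>Every pixel now carries a single id that decodes back to both class and instance, which is what makes the representation useful for distinguishing drivable space from individual road users.</p>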
<p>To help it perceive parking spaces in a variety of environments, developers taught the <a href="https://blogs.nvidia.com/blog/drive-labs-ai-parking/">ParkNet DNN</a> to identify a spot as a four-sided polygon rather than a rectangle, so it could discern slanted spaces as well as their entry points.</p>
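<p>The four-sided-polygon idea can be sketched in a few lines. The structure below is hypothetical (ParkNet's actual output format is not described here): a space is four ordered corners with the first edge treated as the entry, which lets it represent slanted spaces an axis-aligned rectangle cannot.</p>

```python
from dataclasses import dataclass

# Hedged sketch: a parking space as an arbitrary quadrilateral with an
# explicit entry edge. Field names and corner ordering are illustrative,
# not ParkNet's actual outputs.

@dataclass
class ParkingSpace:
    corners: list  # four (x, y) points; entry edge = corners[0] -> corners[1]

    def area(self):
        # Shoelace formula handles any simple quadrilateral,
        # including slanted (non-rectangular) spaces.
        s = 0.0
        for i in range(4):
            x1, y1 = self.corners[i]
            x2, y2 = self.corners[(i + 1) % 4]
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0

    def entry_midpoint(self):
        # Where a planner would aim the vehicle to begin entering the spot.
        (x1, y1), (x2, y2) = self.corners[0], self.corners[1]
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

# A slanted space: the far edge is offset, so no rectangle fits it.
space = ParkingSpace(corners=[(0, 0), (2, 0), (3, 4), (1, 4)])
```

<p>Because the corners are unconstrained, the same structure covers perpendicular, parallel and angled spaces, and the entry edge tells the planner where the vehicle can actually drive in.</p>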
<p>And our <a href="https://blogs.nvidia.com/blog/drive-labs-multi-view-lidarnet-self-driving-cars/">LidarNet DNN</a> addresses challenges in processing lidar data for localization by fusing multiple perspectives for accurate and complete perception information.</p>
<figure id="attachment_45270" aria-describedby="caption-attachment-45270" style="width: 672px" class="wp-caption aligncenter"><a href="https://blogs.nvidia.com/wp-content/uploads/2020/04/Screen-Shot-2020-03-10-at-12.37.56-PM.png"><img loading="lazy" decoding="async" class="wp-image-45270 size-large" src="https://blogs.nvidia.com/wp-content/uploads/2020/04/Screen-Shot-2020-03-10-at-12.37.56-PM-672x171.png" alt="The LidarNet DNN uses multiple perspectives for highly accurate localization" width="672" height="171" /></a><figcaption id="caption-attachment-45270" class="wp-caption-text">The LidarNet DNN uses multiple perspectives for highly accurate localization.</figcaption></figure>
<p>By combining these and other DNNs and running them on high-performance in-vehicle compute, such as the <a target="_blank" href="https://www.nvidia.com/en-us/self-driving-cars/drive-platform/hardware/">NVIDIA DRIVE AGX</a> platform, an autonomous vehicle can perform comprehensive perception, planning and control without a human driver.</p>
<ul>
<li>Watch on-demand: <a target="_blank" href="https://developer.nvidia.com/gtc/2020/video/s22159">NVIDIA DRIVE Labs: An Inside Look at Autonomous Vehicle Software</a></li>
</ul>
<p>The GTC Digital site hosts these and other free sessions, with new content from NVIDIA experts and the DRIVE ecosystem added every Thursday until April 23. Stay up to date and register <a target="_blank" href="https://www.nvidia.com/en-us/gtc/">here</a>.</p>
]]></content:encoded>
					
		
		
		
			<media:content
			url="https://blogs.nvidia.com/wp-content/uploads/2020/04/auto-2024x2048.jpg"
			type="image/jpeg"
			width="2048"
			height="1024"
			>
			<media:thumbnail
			url="https://blogs.nvidia.com/wp-content/uploads/2020/04/auto-2024x2048-842x450.jpg"
			width="842"
			height="450"
			/>
			<media:title type="html"><![CDATA[Step Inside Our AI Garage: NVIDIA Experts Present Insights into Self-Driving Software and Infrastructure]]></media:title>
			<media:description type="html"></media:description>
			</media:content>
			</item>
	</channel>
</rss>
