<?xml version="1.0" encoding="UTF-8"?><feed
  xmlns="http://www.w3.org/2005/Atom"
  xmlns:thr="http://purl.org/syndication/thread/1.0"
  xml:lang=""
  xml:base="https://devblogs.nvidia.com/wp-atom.php"
  
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	 >
	<title type="text">NVIDIA Developer BlogNVIDIA Developer Blog</title>
	<subtitle type="text">Technical content: For developers, by developers</subtitle>

	<updated>2020-06-19T00:01:51Z</updated>

	<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com" />
	<id>https://devblogs.nvidia.com/feed/</id>
	<link rel="self" type="application/atom+xml" href="https://devblogs.nvidia.com/feed" />

	
		<entry>
		<author>
			<name>Vinh Nguyen</name>
					</author>
		<title type="html"><![CDATA[Optimizing the Deep Learning Recommendation Model on NVIDIA GPUs]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/optimizing-dlrm-on-nvidia-gpus/" />
		<id>https://devblogs.nvidia.com/?p=18109</id>
		<updated>2020-06-18T23:37:23Z</updated>
		<published>2020-06-18T23:36:41Z</published>
		<category scheme="https://devblogs.nvidia.com" term="AI / Deep Learning" /><category scheme="https://devblogs.nvidia.com" term="criteo terabyte dataset" /><category scheme="https://devblogs.nvidia.com" term="DLRM" /><category scheme="https://devblogs.nvidia.com" term="featured" /><category scheme="https://devblogs.nvidia.com" term="recommender systems" /><category scheme="https://devblogs.nvidia.com" term="Triton Inference Server" />		<summary type="html"><![CDATA[<img width="1089" height="664" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu.png 1089w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-300x183.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-625x381.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-179x109.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-768x468.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-492x300.png 492w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-148x90.png 148w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-362x221.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-180x110.png 180w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-1024x624.png 1024w" sizes="(max-width: 1089px) 100vw, 1089px" title="spark-performance-improvement-gpu" />Recommender systems help people find what they’re looking for among an exponentially growing number of options. They are a critical component for driving user engagement on many online platforms. With the rapid growth in scale of industry datasets, deep learning (DL) recommender models, which capitalize on large amounts of training data, have started to show […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/optimizing-dlrm-on-nvidia-gpus/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/optimizing-dlrm-on-nvidia-gpus/"><![CDATA[<img width="1089" height="664" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu.png 1089w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-300x183.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-625x381.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-179x109.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-768x468.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-492x300.png 492w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-148x90.png 148w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-362x221.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-180x110.png 180w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/spark-performance-improvement-gpu-1024x624.png 1024w" sizes="(max-width: 1089px) 100vw, 1089px" title="spark-performance-improvement-gpu" /><p>Recommender systems help people find what they’re looking for among an exponentially growing number of options. They are a critical component for driving user engagement on many online platforms. With the rapid growth in scale of industry datasets, deep learning (DL) recommender models, which capitalize on large amounts of training data, have started to show advantages over traditional methods.</p>
<p><a href="https://devblogs.nvidia.com/optimizing-dlrm-on-nvidia-gpus/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/optimizing-dlrm-on-nvidia-gpus/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/optimizing-dlrm-on-nvidia-gpus/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Itay Ozery</name>
					</author>
		<title type="html"><![CDATA[Accelerating Bare Metal Kubernetes Workloads, the Right Way]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/accelerating-bare-metal-kubernetes-workloads-the-right-way/" />
		<id>https://devblogs.nvidia.com/?p=18182</id>
		<updated>2020-06-18T19:53:31Z</updated>
		<published>2020-06-18T19:53:00Z</published>
		<category scheme="https://devblogs.nvidia.com" term="Networking" /><category scheme="https://devblogs.nvidia.com" term="DPU-programmable" /><category scheme="https://devblogs.nvidia.com" term="kubernetes" /><category scheme="https://devblogs.nvidia.com" term="SmartNICs" />		<summary type="html"><![CDATA[<img width="1430" height="953" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship.jpg" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship.jpg 1430w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-300x200.jpg 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-625x417.jpg 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-173x115.jpg 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-768x512.jpg 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-450x300.jpg 450w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-135x90.jpg 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-362x241.jpg 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-165x110.jpg 165w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-1024x682.jpg 1024w" sizes="(max-width: 1430px) 100vw, 1430px" title="helm-the-ship" />In my previous Kubernetes post, Provision Bare-Metal Kubernetes Like a Cloud Giant!, I discussed the benefits of using BlueField DPU-programmable SmartNICs to simplify provisioning of Kubernetes clusters in bare-metal infrastructures. A key takeaway from this post was the current rapid shift toward bare metal Kubernetes, for delivering high-performance workloads across public, on-premises, and edge environments. […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/accelerating-bare-metal-kubernetes-workloads-the-right-way/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/accelerating-bare-metal-kubernetes-workloads-the-right-way/"><![CDATA[<img width="1430" height="953" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship.jpg" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship.jpg 1430w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-300x200.jpg 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-625x417.jpg 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-173x115.jpg 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-768x512.jpg 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-450x300.jpg 450w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-135x90.jpg 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-362x241.jpg 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-165x110.jpg 165w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/helm-the-ship-1024x682.jpg 1024w" sizes="(max-width: 1430px) 100vw, 1430px" title="helm-the-ship" /><p>This post was originally published on the Mellanox blog. In my previous Kubernetes post, Provision Bare-Metal Kubernetes Like a Cloud Giant!, I discussed the benefits of using BlueField DPU-programmable SmartNICs to simplify provisioning of Kubernetes clusters in bare-metal infrastructures. A key takeaway from this post was the current rapid shift toward bare metal Kubernetes...</p>
<p><a href="https://devblogs.nvidia.com/accelerating-bare-metal-kubernetes-workloads-the-right-way/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/accelerating-bare-metal-kubernetes-workloads-the-right-way/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/accelerating-bare-metal-kubernetes-workloads-the-right-way/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Ash Bhalgat</name>
						<uri>https://www.linkedin.com/in/ashbhalgat/</uri>
					</author>
		<title type="html"><![CDATA[Transforming Next-Generation Wireless with 5T for 5G and the NVIDIA Aerial SDK]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/transforming-next-gen-wireless-with-5t-for-5g-and-aerial-sdk/" />
		<id>https://devblogs.nvidia.com/?p=18045</id>
		<updated>2020-06-18T19:15:09Z</updated>
		<published>2020-06-18T13:00:00Z</published>
		<category scheme="https://devblogs.nvidia.com" term="Networking" /><category scheme="https://devblogs.nvidia.com" term="5G" /><category scheme="https://devblogs.nvidia.com" term="cloudRAN" /><category scheme="https://devblogs.nvidia.com" term="featured" /><category scheme="https://devblogs.nvidia.com" term="telco" /><category scheme="https://devblogs.nvidia.com" term="Telecommunications" />		<summary type="html"><![CDATA[<img width="1313" height="566" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G.png 1313w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-300x129.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-625x269.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-179x77.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-768x331.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-500x216.png 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-160x69.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-362x156.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-255x110.png 255w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-1024x441.png 1024w" sizes="(max-width: 1313px) 100vw, 1313px" title="5T-for-5G" />NVIDIA Mellanox 5T for 5G technology provides a real-time and high-performance solution for building an efficient, time-synchronized CloudRAN infrastructure. Time synchronization and achieving high time accuracy for network traffic between O-RAN 7.2x compliant front-haul, mid-haul, and back-haul components in a cloud-native RAN (CloudRAN) environment has always been a challenge. Further, maintaining real-time and precise time-bound […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/transforming-next-gen-wireless-with-5t-for-5g-and-aerial-sdk/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/transforming-next-gen-wireless-with-5t-for-5g-and-aerial-sdk/"><![CDATA[<img width="1313" height="566" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G.png 1313w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-300x129.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-625x269.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-179x77.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-768x331.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-500x216.png 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-160x69.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-362x156.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-255x110.png 255w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/5T-for-5G-1024x441.png 1024w" sizes="(max-width: 1313px) 100vw, 1313px" title="5T-for-5G" /><p>Figure 1. 5G wireless uses an open and cloud-native radio area network (CloudRAN). NVIDIA Mellanox 5T for 5G technology provides a real-time and high-performance solution for building an efficient, time-synchronized CloudRAN infrastructure. Time synchronization and achieving high time accuracy for network traffic between O-RAN 7.2x compliant front-haul, mid-haul, and back-haul components in a...</p>
<p><a href="https://devblogs.nvidia.com/transforming-next-gen-wireless-with-5t-for-5g-and-aerial-sdk/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/transforming-next-gen-wireless-with-5t-for-5g-and-aerial-sdk/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/transforming-next-gen-wireless-with-5t-for-5g-and-aerial-sdk/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Nandini Shankarappa</name>
					</author>
		<title type="html"><![CDATA[Accelerating with XDP over Mellanox ConnectX NICs]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/accelerating-with-xdp-over-mellanox-connectx-nics/" />
		<id>https://devblogs.nvidia.com/?p=18171</id>
		<updated>2020-06-18T00:40:39Z</updated>
		<published>2020-06-18T00:40:00Z</published>
		<category scheme="https://devblogs.nvidia.com" term="Networking" /><category scheme="https://devblogs.nvidia.com" term="ConnectX" /><category scheme="https://devblogs.nvidia.com" term="Mellanox" /><category scheme="https://devblogs.nvidia.com" term="NICs" /><category scheme="https://devblogs.nvidia.com" term="XDP" />		<summary type="html"><![CDATA[<img width="2560" height="1736" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-scaled.jpg" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-scaled.jpg 2560w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-300x203.jpg 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-625x424.jpg 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-170x115.jpg 170w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-768x521.jpg 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-1536x1042.jpg 1536w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-2048x1389.jpg 2048w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-442x300.jpg 442w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-133x90.jpg 133w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-362x246.jpg 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-162x110.jpg 162w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-1024x695.jpg 1024w" sizes="(max-width: 2560px) 100vw, 2560px" title="featured" />XDP (eXpress Data Path) is a programmable data path in the Linux kernel network stack. It provides a framework to BPF and can enable high performance packet processing at runtime. XDP works in concert with the Linux network stack and is not a kernel bypass. Because XDP runs in the kernel network driver, it can […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/accelerating-with-xdp-over-mellanox-connectx-nics/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/accelerating-with-xdp-over-mellanox-connectx-nics/"><![CDATA[<img width="2560" height="1736" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-scaled.jpg" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-scaled.jpg 2560w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-300x203.jpg 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-625x424.jpg 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-170x115.jpg 170w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-768x521.jpg 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-1536x1042.jpg 1536w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-2048x1389.jpg 2048w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-442x300.jpg 442w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-133x90.jpg 133w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-362x246.jpg 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-162x110.jpg 162w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/featured-1024x695.jpg 1024w" sizes="(max-width: 2560px) 100vw, 2560px" title="featured" /><p>This post was originally published on the Mellanox blog. XDP (eXpress Data Path) is a programmable data path in the Linux kernel network stack. It provides a framework to BPF and can enable high performance packet processing at runtime. XDP works in concert with the Linux network stack and is not a kernel bypass. Because XDP runs in the kernel network driver, it can read the ethernet frames from...</p>
<p><a href="https://devblogs.nvidia.com/accelerating-with-xdp-over-mellanox-connectx-nics/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/accelerating-with-xdp-over-mellanox-connectx-nics/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/accelerating-with-xdp-over-mellanox-connectx-nics/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Raphael Boissel</name>
					</author>
		<title type="html"><![CDATA[Announcing CUDA on Windows Subsystem for Linux 2]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/announcing-cuda-on-windows-subsystem-for-linux-2/" />
		<id>https://devblogs.nvidia.com/?p=18337</id>
		<updated>2020-06-18T21:10:51Z</updated>
		<published>2020-06-17T17:00:00Z</published>
		<category scheme="https://devblogs.nvidia.com" term="HPC" /><category scheme="https://devblogs.nvidia.com" term="AI" /><category scheme="https://devblogs.nvidia.com" term="CUDA" /><category scheme="https://devblogs.nvidia.com" term="DL" /><category scheme="https://devblogs.nvidia.com" term="DX" /><category scheme="https://devblogs.nvidia.com" term="featured" /><category scheme="https://devblogs.nvidia.com" term="GeForce" /><category scheme="https://devblogs.nvidia.com" term="GPU paravirtualization" /><category scheme="https://devblogs.nvidia.com" term="Linux on Windows" /><category scheme="https://devblogs.nvidia.com" term="Microsoft" /><category scheme="https://devblogs.nvidia.com" term="ML" /><category scheme="https://devblogs.nvidia.com" term="MxNet" /><category scheme="https://devblogs.nvidia.com" term="PyTorch" /><category scheme="https://devblogs.nvidia.com" term="Quadro" /><category scheme="https://devblogs.nvidia.com" term="TensorFlow" /><category scheme="https://devblogs.nvidia.com" term="WSL" />		<summary type="html"><![CDATA[<img width="800" height="450" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2.png 800w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-300x169.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-625x352.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-179x101.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-768x432.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-500x281.png 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-160x90.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-362x204.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-196x110.png 196w" sizes="(max-width: 800px) 100vw, 800px" title="wddm-model-supporting-cuda-user-mode-linux-guest" />In response to popular demand, Microsoft announced a new feature of the Windows Subsystem for Linux 2 (WSL 2)—GPU acceleration—at the Build conference in May 2020. This feature opens the gate for many compute applications, professional tools, and workloads currently available only on Linux, but which can now run on Windows as-is and benefit from […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/announcing-cuda-on-windows-subsystem-for-linux-2/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/announcing-cuda-on-windows-subsystem-for-linux-2/"><![CDATA[<img width="800" height="450" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2.png 800w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-300x169.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-625x352.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-179x101.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-768x432.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-500x281.png 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-160x90.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-362x204.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/wddm-model-supporting-cuda-user-mode-linux-guest-2-196x110.png 196w" sizes="(max-width: 800px) 100vw, 800px" title="wddm-model-supporting-cuda-user-mode-linux-guest" /><p>Figure 1. Stack image showing layers involved while running Linux AI frameworks in WSL 2 containers. In response to popular demand, Microsoft announced a new feature of the Windows Subsystem for Linux 2 (WSL 2)—GPU acceleration—at the Build conference in May 2020. This feature opens the gate for many compute applications, professional tools, and workloads currently available only on Linux...</p>
<p><a href="https://devblogs.nvidia.com/announcing-cuda-on-windows-subsystem-for-linux-2/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/announcing-cuda-on-windows-subsystem-for-linux-2/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/announcing-cuda-on-windows-subsystem-for-linux-2/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>David Williams</name>
					</author>
		<title type="html"><![CDATA[Training and Fine-tuning BERT Using NVIDIA NGC]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/training-and-fine-tuning-bert-using-nvidia-ngc/" />
		<id>https://devblogs.nvidia.com/?p=17909</id>
		<updated>2020-06-19T00:01:51Z</updated>
		<published>2020-06-16T17:25:49Z</published>
		<category scheme="https://devblogs.nvidia.com" term="AI / Deep Learning" /><category scheme="https://devblogs.nvidia.com" term="BERT" /><category scheme="https://devblogs.nvidia.com" term="conversational AI" /><category scheme="https://devblogs.nvidia.com" term="natural language processing" /><category scheme="https://devblogs.nvidia.com" term="natural language understanding" /><category scheme="https://devblogs.nvidia.com" term="NGC" /><category scheme="https://devblogs.nvidia.com" term="NLP" /><category scheme="https://devblogs.nvidia.com" term="NLU" /><category scheme="https://devblogs.nvidia.com" term="speech recognition" />		<summary type="html"><![CDATA[<img width="1100" height="734" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo.jpg" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo.jpg 1100w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-300x200.jpg 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-625x417.jpg 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-172x115.jpg 172w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-768x512.jpg 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-450x300.jpg 450w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-135x90.jpg 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-362x242.jpg 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-165x110.jpg 165w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-1024x683.jpg 1024w" sizes="(max-width: 1100px) 100vw, 1100px" title="bert-photo" />Imagine an AI program that can understand language better than humans can. Imagine building your own personal Siri or Google Search for a customized domain or application. Google BERT (Bidirectional Encoder Representations from Transformers) provides a game-changing twist to the field of natural language processing (NLP). BERT runs on supercomputers powered by NVIDIA GPUs to […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/training-and-fine-tuning-bert-using-nvidia-ngc/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/training-and-fine-tuning-bert-using-nvidia-ngc/"><![CDATA[<img width="1100" height="734" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo.jpg" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo.jpg 1100w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-300x200.jpg 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-625x417.jpg 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-172x115.jpg 172w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-768x512.jpg 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-450x300.jpg 450w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-135x90.jpg 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-362x242.jpg 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-165x110.jpg 165w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/bert-photo-1024x683.jpg 1024w" sizes="(max-width: 1100px) 100vw, 1100px" title="bert-photo" /><p>Imagine an AI program that can understand language better than humans can. Imagine building your own personal Siri or Google Search for a customized domain or application. Google BERT (Bidirectional Encoder Representations from Transformers) provides a game-changing twist to the field of natural language processing (NLP). BERT runs on supercomputers powered by NVIDIA GPUs to train its huge neural...</p>
<p><a href="https://devblogs.nvidia.com/training-and-fine-tuning-bert-using-nvidia-ngc/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/training-and-fine-tuning-bert-using-nvidia-ngc/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/training-and-fine-tuning-bert-using-nvidia-ngc/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Vinh Nguyen</name>
					</author>
		<title type="html"><![CDATA[Improving Computer Vision with NVIDIA A100 GPUs]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/improving-computer-vision-with-nvidia-a100-gpus/" />
		<id>https://devblogs.nvidia.com/?p=18363</id>
		<updated>2020-06-18T19:15:33Z</updated>
		<published>2020-06-16T17:23:00Z</published>
		<category scheme="https://devblogs.nvidia.com" term="AI / Deep Learning" /><category scheme="https://devblogs.nvidia.com" term="A100" /><category scheme="https://devblogs.nvidia.com" term="Computer Vision" /><category scheme="https://devblogs.nvidia.com" term="DALI" /><category scheme="https://devblogs.nvidia.com" term="Deep Learning" /><category scheme="https://devblogs.nvidia.com" term="nvJPEG" />		<summary type="html"><![CDATA[<img width="207" height="138" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2.png 207w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2-173x115.png 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2-165x110.png 165w" sizes="(max-width: 207px) 100vw, 207px" title="bi3d-estimate-binary-depth (2)" />During the 2020 NVIDIA GPU Technology Conference keynote address, NVIDIA founder and CEO Jensen Huang introduced the new NVIDIA A100 GPU based on the NVIDIA Ampere GPU architecture. In this post, we detail the exciting new features of the A100 that make NVIDIA GPUs an ever-better powerhouse for computer vision workloads. We also showcase two […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/improving-computer-vision-with-nvidia-a100-gpus/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/improving-computer-vision-with-nvidia-a100-gpus/"><![CDATA[<img width="207" height="138" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2.png 207w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2-173x115.png 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/bi3d-estimate-binary-depth-2-165x110.png 165w" sizes="(max-width: 207px) 100vw, 207px" title="bi3d-estimate-binary-depth (2)" /><p>During the 2020 NVIDIA GPU Technology Conference keynote address, NVIDIA founder and CEO Jensen Huang introduced the new NVIDIA A100 GPU based on the NVIDIA Ampere GPU architecture. In this post, we detail the exciting new features of the A100 that make NVIDIA GPUs an ever-better powerhouse for computer vision workloads. We also showcase two recent CV research projects from NVIDIA Research...</p>
<p><a href="https://devblogs.nvidia.com/improving-computer-vision-with-nvidia-a100-gpus/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/improving-computer-vision-with-nvidia-a100-gpus/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/improving-computer-vision-with-nvidia-a100-gpus/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Janusz Lisiecki</name>
					</author>
		<title type="html"><![CDATA[Loading Data Fast with DALI and the New Hardware JPEG Decoder in NVIDIA A100 GPUs]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/loading-data-fast-with-dali-and-new-jpeg-decoder-in-a100/" />
		<id>https://devblogs.nvidia.com/?p=18130</id>
		<updated>2020-06-16T17:28:41Z</updated>
		<published>2020-06-15T23:10:00Z</published>
		<category scheme="https://devblogs.nvidia.com" term="AI / Deep Learning" /><category scheme="https://devblogs.nvidia.com" term="A100" /><category scheme="https://devblogs.nvidia.com" term="DALI" /><category scheme="https://devblogs.nvidia.com" term="data processing" /><category scheme="https://devblogs.nvidia.com" term="Deep Learning" /><category scheme="https://devblogs.nvidia.com" term="machine learning" /><category scheme="https://devblogs.nvidia.com" term="NVIDIA Ampere" /><category scheme="https://devblogs.nvidia.com" term="nvJPEG" />		<summary type="html"><![CDATA[<img width="786" height="524" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1.png 786w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-300x200.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-625x417.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-173x115.png 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-768x512.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-450x300.png 450w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-362x241.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-165x110.png 165w" sizes="(max-width: 786px) 100vw, 786px" title="end-to-end-data-processing-pipeline-throughput-featured" />Today, smartphones, the most popular device for taking pictures, can capture images as large as 4K UHD (3840×2160 image), more than 25 MB of raw data. Even considering the embarrassingly low HD resolution (1280×720), a raw image requires more than 2.5 MB of storage. Storing as few as 100 UHD images would require almost 3 […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/loading-data-fast-with-dali-and-new-jpeg-decoder-in-a100/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/loading-data-fast-with-dali-and-new-jpeg-decoder-in-a100/"><![CDATA[<img width="786" height="524" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1.png 786w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-300x200.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-625x417.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-173x115.png 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-768x512.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-450x300.png 450w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-362x241.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/end-to-end-data-processing-pipeline-throughput-2-1-165x110.png 165w" sizes="(max-width: 786px) 100vw, 786px" title="end-to-end-data-processing-pipeline-throughput-featured" /><p>Today, smartphones, the most popular device for taking pictures, can capture images as large as 4K UHD (3840×2160 image), more than 25 MB of raw data. Even considering the embarrassingly low HD resolution (1280×720), a raw image requires more than 2.5 MB of storage. Storing as few as 100 UHD images would require almost 3 GB of free space. Clearly, if you store data this way, you quickly run out of...</p>
<p><a href="https://devblogs.nvidia.com/loading-data-fast-with-dali-and-new-jpeg-decoder-in-a100/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/loading-data-fast-with-dali-and-new-jpeg-decoder-in-a100/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/loading-data-fast-with-dali-and-new-jpeg-decoder-in-a100/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Mahesh Khadatare</name>
					</author>
		<title type="html"><![CDATA[Leveraging the Hardware JPEG Decoder and NVIDIA nvJPEG Library on NVIDIA A100 GPUs]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/leveraging-hardware-jpeg-decoder-and-nvjpeg-on-a100/" />
		<id>https://devblogs.nvidia.com/?p=18226</id>
		<updated>2020-06-18T19:15:59Z</updated>
		<published>2020-06-15T22:51:02Z</published>
		<category scheme="https://devblogs.nvidia.com" term="AI / Deep Learning" /><category scheme="https://devblogs.nvidia.com" term="HPC" /><category scheme="https://devblogs.nvidia.com" term="A100" /><category scheme="https://devblogs.nvidia.com" term="nvJPEG" />		<summary type="html"><![CDATA[<img width="512" height="341" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2.png 512w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-300x200.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-173x115.png 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-450x300.png 450w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-362x241.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-165x110.png 165w" sizes="(max-width: 512px) 100vw, 512px" title="compressed-butterfly (2)" />According to surveys, the average person produces 1.2 trillion images that are captured by either a phone or a digital camera. The storage of such images, especially in high-resolution raw format, uses lots of memory. JPEG refers to the Joint Photographic Experts Group, which celebrated its 25th birthday in 2017. The JPEG standard specifies the […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/leveraging-hardware-jpeg-decoder-and-nvjpeg-on-a100/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/leveraging-hardware-jpeg-decoder-and-nvjpeg-on-a100/"><![CDATA[<img width="512" height="341" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2.png 512w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-300x200.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-173x115.png 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-450x300.png 450w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-362x241.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/compressed-butterfly-2-165x110.png 165w" sizes="(max-width: 512px) 100vw, 512px" title="compressed-butterfly (2)" /><p>According to surveys, the average person produces 1.2 trillion images that are captured by either a phone or a digital camera. The storage of such images, especially in high-resolution raw format, uses lots of memory. JPEG refers to the Joint Photographic Experts Group, which celebrated its 25th birthday in 2017. The JPEG standard specifies the codec, which defines how an image is compressed into...</p>
<p><a href="https://devblogs.nvidia.com/leveraging-hardware-jpeg-decoder-and-nvjpeg-on-a100/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/leveraging-hardware-jpeg-decoder-and-nvjpeg-on-a100/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/leveraging-hardware-jpeg-decoder-and-nvjpeg-on-a100/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Brandon Lloyd</name>
					</author>
		<title type="html"><![CDATA[Implementing Stochastic Levels of Detail with Microsoft DirectX Raytracing]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/implementing-stochastic-lod-with-microsoft-dxr/" />
		<id>https://devblogs.nvidia.com/?p=18139</id>
		<updated>2020-06-16T17:28:19Z</updated>
		<published>2020-06-15T18:49:54Z</published>
		<category scheme="https://devblogs.nvidia.com" term="Graphics / Simulation" /><category scheme="https://devblogs.nvidia.com" term="level of detail" /><category scheme="https://devblogs.nvidia.com" term="LOD" /><category scheme="https://devblogs.nvidia.com" term="Microsoft DirectX Raytracing" /><category scheme="https://devblogs.nvidia.com" term="Microsoft DXR" /><category scheme="https://devblogs.nvidia.com" term="ray tracing" />		<summary type="html"><![CDATA[<img width="781" height="520" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1.png 781w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-300x200.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-625x416.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-173x115.png 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-768x511.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-451x300.png 451w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-362x241.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-165x110.png 165w" sizes="(max-width: 781px) 100vw, 781px" title="dxr-sample-lod-level-featured" />Level-of-detail (LOD) refers to replacing high-resolution meshes with lower-resolution meshes in the distance, where details may not be significant. This technique can help reduce memory footprint and geometric aliasing. Most importantly, it has long been used to improve rasterization performance in games. But does that apply equally to ray tracing? The render time for rasterization […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/implementing-stochastic-lod-with-microsoft-dxr/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/implementing-stochastic-lod-with-microsoft-dxr/"><![CDATA[<img width="781" height="520" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1.png 781w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-300x200.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-625x416.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-173x115.png 173w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-768x511.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-451x300.png 451w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-362x241.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/dxr-sample-lod-level-2-1-165x110.png 165w" sizes="(max-width: 781px) 100vw, 781px" title="dxr-sample-lod-level-featured" /><p>Level-of-detail (LOD) refers to replacing high-resolution meshes with lower-resolution meshes in the distance, where details may not be significant. This technique can help reduce memory footprint and geometric aliasing. Most importantly, it has long been used to improve rasterization performance in games. But does that apply equally to ray tracing? The render time for rasterization is...</p>
<p><a href="https://devblogs.nvidia.com/implementing-stochastic-lod-with-microsoft-dxr/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/implementing-stochastic-lod-with-microsoft-dxr/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/implementing-stochastic-lod-with-microsoft-dxr/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Paula Jukarainen</name>
					</author>
		<title type="html"><![CDATA[Creating Physically Based Materials for Minecraft with NVIDIA RTX]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/creating-physically-based-materials-for-minecraft-with-nvidia-rtx/" />
		<id>https://devblogs.nvidia.com/?p=18259</id>
		<updated>2020-06-12T18:32:58Z</updated>
		<published>2020-06-12T18:29:43Z</published>
		<category scheme="https://devblogs.nvidia.com" term="Graphics / Simulation" /><category scheme="https://devblogs.nvidia.com" term="Minecraft" /><category scheme="https://devblogs.nvidia.com" term="physical materials" /><category scheme="https://devblogs.nvidia.com" term="rendering" /><category scheme="https://devblogs.nvidia.com" term="RTX" />		<summary type="html"><![CDATA[<img width="300" height="168" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft-179x100.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft-160x90.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft-196x110.png 196w" sizes="(max-width: 300px) 100vw, 300px" title="minecraft" />Are you an experienced Minecraft content creator, but new to physically based materials? Or someone who just wants to learn the basics behind physically based rendering to create your own PBR resource packs? Great! This talk is for you. In “Creating Physically Based Materials for Minecraft with RTX,” we introduce you to the new look […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/creating-physically-based-materials-for-minecraft-with-nvidia-rtx/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/creating-physically-based-materials-for-minecraft-with-nvidia-rtx/"><![CDATA[<img width="300" height="168" src="https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft-179x100.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft-160x90.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/06/minecraft-196x110.png 196w" sizes="(max-width: 300px) 100vw, 300px" title="minecraft" /><p>Are you an experienced Minecraft content creator, but new to physically based materials? Or someone who just wants to learn the basics behind physically based rendering to create your own PBR resource packs? Great! This talk is for you. In “Creating Physically Based Materials for Minecraft with RTX...</p>
<p><a href="https://devblogs.nvidia.com/creating-physically-based-materials-for-minecraft-with-nvidia-rtx/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/creating-physically-based-materials-for-minecraft-with-nvidia-rtx/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/creating-physically-based-materials-for-minecraft-with-nvidia-rtx/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Andrew Tao</name>
					</author>
		<title type="html"><![CDATA[Using Multi-Scale Attention for Semantic Segmentation]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/using-multi-scale-attention-for-semantic-segmentation/" />
		<id>https://devblogs.nvidia.com/?p=17964</id>
		<updated>2020-06-15T23:10:26Z</updated>
		<published>2020-06-12T17:40:00Z</published>
		<category scheme="https://devblogs.nvidia.com" term="IVA/IoT" /><category scheme="https://devblogs.nvidia.com" term="dense prediction" /><category scheme="https://devblogs.nvidia.com" term="semantic segmentation" />		<summary type="html"><![CDATA[<img width="1274" height="766" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes.jpg" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes.jpg 1274w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-300x180.jpg 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-625x376.jpg 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-179x108.jpg 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-768x462.jpg 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-500x300.jpg 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-150x90.jpg 150w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-362x218.jpg 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-183x110.jpg 183w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-1024x616.jpg 1024w" sizes="(max-width: 1274px) 100vw, 1274px" title="qualitative-comparison-cityscapes" />There’s an important technology that is commonly used in autonomous driving, medical imaging, and even Zoom virtual backgrounds: semantic segmentation. That’s the process of labelling pixels in an image as belonging to one of N classes (N being any number of classes), where the classes can be things like cars, roads, people, or trees. In […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/using-multi-scale-attention-for-semantic-segmentation/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/using-multi-scale-attention-for-semantic-segmentation/"><![CDATA[<img width="1274" height="766" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes.jpg" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes.jpg 1274w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-300x180.jpg 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-625x376.jpg 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-179x108.jpg 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-768x462.jpg 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-500x300.jpg 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-150x90.jpg 150w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-362x218.jpg 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-183x110.jpg 183w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/qualitative-comparison-cityscapes-1024x616.jpg 1024w" sizes="(max-width: 1274px) 100vw, 1274px" title="qualitative-comparison-cityscapes" /><p>There’s an important technology that is commonly used in autonomous driving, medical imaging, and even Zoom virtual backgrounds: semantic segmentation. That’s the process of labelling pixels in an image as belonging to one of N classes (N being any number of classes), where the classes can be things like cars, roads, people, or trees. In the case of medical images, classes correspond to different...</p>
<p><a href="https://devblogs.nvidia.com/using-multi-scale-attention-for-semantic-segmentation/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/using-multi-scale-attention-for-semantic-segmentation/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/using-multi-scale-attention-for-semantic-segmentation/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Pradeep Gupta</name>
					</author>
		<title type="html"><![CDATA[CUDA Refresher: The GPU Computing Ecosystem]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/cuda-refresher-the-gpu-computing-ecosystem/" />
		<id>https://devblogs.nvidia.com/?p=18011</id>
		<updated>2020-05-21T23:20:01Z</updated>
		<published>2020-05-21T23:20:00Z</published>
		<category scheme="https://devblogs.nvidia.com" term="HPC" /><category scheme="https://devblogs.nvidia.com" term="CUDA" /><category scheme="https://devblogs.nvidia.com" term="CUDA Refresher" /><category scheme="https://devblogs.nvidia.com" term="Parallel Programming" />		<summary type="html"><![CDATA[<img width="881" height="588" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2.png 881w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-300x200.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-625x417.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-172x115.png 172w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-768x513.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-449x300.png 449w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-362x242.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-165x110.png 165w" sizes="(max-width: 881px) 100vw, 881px" title="cuda-ecosystem-2" />This is the third post in the CUDA Refresher series, which has the goal of refreshing key concepts in CUDA, tools, and optimization for beginning or intermediate developers. Ease of programming and a giant leap in performance is one of the key reasons for the CUDA platform’s widespread adoption. The second biggest reason for the […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/cuda-refresher-the-gpu-computing-ecosystem/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/cuda-refresher-the-gpu-computing-ecosystem/"><![CDATA[<img width="881" height="588" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2.png 881w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-300x200.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-625x417.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-172x115.png 172w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-768x513.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-449x300.png 449w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-135x90.png 135w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-362x242.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/cuda-ecosystem-2-165x110.png 165w" sizes="(max-width: 881px) 100vw, 881px" title="cuda-ecosystem-2" /><p>This is the third post in the CUDA Refresher series, which has the goal of refreshing key concepts in CUDA, tools, and optimization for beginning or intermediate developers. Ease of programming and a giant leap in performance is one of the key reasons for the CUDA platform’s widespread adoption. The second biggest reason for the success of the CUDA platform is the availability of a broad and rich...</p>
<p><a href="https://devblogs.nvidia.com/cuda-refresher-the-gpu-computing-ecosystem/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/cuda-refresher-the-gpu-computing-ecosystem/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/cuda-refresher-the-gpu-computing-ecosystem/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Priya Tikoo</name>
					</author>
		<title type="html"><![CDATA[Enabling Scalable User Experiences with Modern Workloads on Windows Virtual Desktop]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/enabling-scalable-user-experiences-with-modern-workloads-on-windows-virtual-desktop/" />
		<id>https://devblogs.nvidia.com/?p=17410</id>
		<updated>2020-05-15T00:22:54Z</updated>
		<published>2020-05-14T23:08:51Z</published>
		<category scheme="https://devblogs.nvidia.com" term="Graphics / Simulation" /><category scheme="https://devblogs.nvidia.com" term="Azure" /><category scheme="https://devblogs.nvidia.com" term="N-series VMs" /><category scheme="https://devblogs.nvidia.com" term="nVector" /><category scheme="https://devblogs.nvidia.com" term="vGPU" /><category scheme="https://devblogs.nvidia.com" term="Virtual GPU" /><category scheme="https://devblogs.nvidia.com" term="Windows Virtual Desktop" />		<summary type="html"><![CDATA[<img width="1694" height="954" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop.png 1694w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-300x169.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-625x352.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-179x101.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-768x433.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-1536x865.png 1536w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-500x282.png 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-160x90.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-362x204.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-195x110.png 195w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-1024x577.png 1024w" sizes="(max-width: 1694px) 100vw, 1694px" title="windows-virtual-desktop" />If you’re supporting the recent influx in remote work, you’ve probably noticed that business applications are more graphics-heavy than ever before. Applications such as Microsoft Office, Google Chrome, and PDF readers now offer graphics-rich features that require more power. In addition, 4K and multiple high-resolution monitors, as well as multimedia streaming, are becoming the new […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/enabling-scalable-user-experiences-with-modern-workloads-on-windows-virtual-desktop/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/enabling-scalable-user-experiences-with-modern-workloads-on-windows-virtual-desktop/"><![CDATA[<img width="1694" height="954" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop.png 1694w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-300x169.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-625x352.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-179x101.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-768x433.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-1536x865.png 1536w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-500x282.png 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-160x90.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-362x204.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-195x110.png 195w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/windows-virtual-desktop-1024x577.png 1024w" sizes="(max-width: 1694px) 100vw, 1694px" title="windows-virtual-desktop" /><p>Windows Virtual Desktop If you’re supporting the recent influx in remote work, you’ve probably noticed that business applications are more graphics-heavy than ever before. Applications such as Microsoft Office, Google Chrome, and PDF readers now offer graphics-rich features that require more power. In addition, 4K and multiple high-resolution monitors, as well as multimedia streaming...</p>
<p><a href="https://devblogs.nvidia.com/enabling-scalable-user-experiences-with-modern-workloads-on-windows-virtual-desktop/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/enabling-scalable-user-experiences-with-modern-workloads-on-windows-virtual-desktop/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/enabling-scalable-user-experiences-with-modern-workloads-on-windows-virtual-desktop/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
		<entry>
		<author>
			<name>Vinh Nguyen</name>
					</author>
		<title type="html"><![CDATA[Announcing NVIDIA Merlin: An Application Framework for Deep Recommender Systems]]></title>
		<link rel="alternate" type="text/html" href="https://devblogs.nvidia.com/announcing-nvidia-merlin-application-framework-for-deep-recommender-systems/" />
		<id>https://devblogs.nvidia.com/?p=17680</id>
		<updated>2020-05-26T19:52:13Z</updated>
		<published>2020-05-14T20:10:45Z</published>
		<category scheme="https://devblogs.nvidia.com" term="AI / Deep Learning" /><category scheme="https://devblogs.nvidia.com" term="AI applications" /><category scheme="https://devblogs.nvidia.com" term="HugeCTR" /><category scheme="https://devblogs.nvidia.com" term="NVIDIA Merlin" /><category scheme="https://devblogs.nvidia.com" term="NVTabular" /><category scheme="https://devblogs.nvidia.com" term="recommendation engines" /><category scheme="https://devblogs.nvidia.com" term="recommender systems" /><category scheme="https://devblogs.nvidia.com" term="TensorRT" /><category scheme="https://devblogs.nvidia.com" term="Triton Inference Server" />		<summary type="html"><![CDATA[<img width="978" height="547" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3.png 978w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-300x168.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-625x350.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-179x100.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-768x430.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-500x280.png 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-160x90.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-362x202.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-197x110.png 197w" sizes="(max-width: 978px) 100vw, 978px" title="nvidia-merlin-architecture-3" />Recommender systems drive every action that you take online, from the selection of this web page that you’re reading now to more obvious examples like online shopping. They play a critical role in driving user engagement on online platforms, selecting a few relevant goods or services from the exponentially growing number of available options. On […]<div style="margin-top: 0px; margin-bottom: 0px;" class="sharethis-inline-share-buttons" data-url=https://devblogs.nvidia.com/announcing-nvidia-merlin-application-framework-for-deep-recommender-systems/></div>]]></summary>
		<content type="html" xml:base="https://devblogs.nvidia.com/announcing-nvidia-merlin-application-framework-for-deep-recommender-systems/"><![CDATA[<img width="978" height="547" src="https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3.png" class="attachment-feed-main-image size-feed-main-image wp-post-image" alt="" srcset="https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3.png 978w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-300x168.png 300w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-625x350.png 625w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-179x100.png 179w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-768x430.png 768w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-500x280.png 500w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-160x90.png 160w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-362x202.png 362w, https://devblogs.nvidia.com/wp-content/uploads/2020/05/nvidia-merlin-architecture-3-197x110.png 197w" sizes="(max-width: 978px) 100vw, 978px" title="nvidia-merlin-architecture-3" /><p>Recommender systems drive every action that you take online, from the selection of this web page that you’re reading now to more obvious examples like online shopping. They play a critical role in driving user engagement on online platforms, selecting a few relevant goods or services from the exponentially growing number of available options. On some of the largest commercial platforms...</p>
<p><a href="https://devblogs.nvidia.com/announcing-nvidia-merlin-application-framework-for-deep-recommender-systems/" rel="nofollow" data-wpel-link="internal">Source</a></p>]]></content>
		<link rel="replies" type="text/html" href="https://devblogs.nvidia.com/announcing-nvidia-merlin-application-framework-for-deep-recommender-systems/#comments" thr:count="0"/>
		<link rel="replies" type="application/atom+xml" href="https://devblogs.nvidia.com/announcing-nvidia-merlin-application-framework-for-deep-recommender-systems/feed/" thr:count="0"/>
		<thr:total>0</thr:total>
	</entry>
	</feed>