<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:blogger="http://schemas.google.com/blogger/2008" xmlns:gd="http://schemas.google.com/g/2005" xmlns:georss="http://www.georss.org/georss" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:thr="http://purl.org/syndication/thread/1.0" version="2.0"><channel><atom:id>tag:blogger.com,1999:blog-7560610393342650347</atom:id><lastBuildDate>Wed, 25 Mar 2026 14:42:17 +0000</lastBuildDate><category>analysis</category><category>videogames</category><category>hardware</category><category>analyse this</category><category>curmudgeon</category><category>Post thoughts</category><category>Podcast</category><category>screenestate</category><category>Sci-fi tropes</category><category>Blood Bowl</category><category>Game diary</category><category>In defence of</category><category>Roundup</category><category>mid-range hardware reviews</category><category>mid-thoughts</category><category>Comic</category><category>PSA</category><category>old-guy-gaming</category><category>pron</category><title>The Easy Button</title><description></description><link>http://hole-in-my-head.blogspot.com/</link><managingEditor>noreply@blogger.com (Unknown)</managingEditor><generator>Blogger</generator><openSearch:totalResults>355</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><language>en-us</language><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-4312719559894296176</guid><pubDate>Wed, 18 Mar 2026 06:21:00 +0000</pubDate><atom:updated>2026-03-18T06:21:44.448+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">screenestate</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>The DLSS 5 lie...</title><description>&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUF3jJRGYkDlJkhxE7p8v5hoBvyQ3fytM-eihWNBabWz8qrhYtTbiqp_5_jSPquAph5hHpxh1_sXUM9yGvsERxLr3cUFZ3WClYT2H21eC2kA8-RN9em4d8rxCI3L4XTJfZRC7fpRo3NwXy816ZoaZnYKF2mkotV8YF4dX18qWEpucOJbm827y92jDqzhk/s1920/Title.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUF3jJRGYkDlJkhxE7p8v5hoBvyQ3fytM-eihWNBabWz8qrhYtTbiqp_5_jSPquAph5hHpxh1_sXUM9yGvsERxLr3cUFZ3WClYT2H21eC2kA8-RN9em4d8rxCI3L4XTJfZRC7fpRo3NwXy816ZoaZnYKF2mkotV8YF4dX18qWEpucOJbm827y92jDqzhk/w640-h360/Title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;A tongue-in-cheek header image...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;DLSS 5 has exploded into our lives the last couple of days and it's been one hell of a ride. 
However, I think a lot of this comes from not understanding the tech because Nvidia's CEO keeps lying...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;Let's get to the central conceit of this post - Nvidia's Jensen Huang has said the following:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;"&lt;a href="https://www.nvidia.com/en-us/geforce/news/dlss5-breakthrough-in-visual-fidelity-for-games/"&gt;Twenty-five years after NVIDIA invented the programmable shader, we are renventing computer graphics once again. DLSS 5 is the GPT moment for graphics - blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression.&lt;/a&gt;"&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;"&lt;a href="https://www.tomshardware.com/pc-components/gpus/jensen-huang-says-gamers-are-completely-wrong-about-dlss-5-nvidia-ceo-responds-to-dlss-5-backlash"&gt;[...] DLSS 5 fuses controllability of the of [sic - Tom's Hardware] geometry and textures and everything about the game with generative AI. [...] It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level. [...] This is very different from generative AI; it's content-control generative AI. That's why we call it neural rendering.&lt;/a&gt;"&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;These statements are nonsensical and contradictory and this is for a simple reason - there is very little truth here.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjynBeb_pYyW8aOirFPdO3QS2AYrkM2g-jqxjw2KVOlAuUMMIwHV68UjwbaPEmuuzatD22B-uayxv702EYsaESGBe7mHCGv7c6eCU0q4ysrXQjJfK7G3V6fpHTW_k5LSQHMwp93K2RxONgUs-81_RhKRoPck2lbz3iF2T1TLxBm8oQQTsMjpf2XdyQ3xtM/s3337/how-nvidia-dlss-5-works.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1345" data-original-width="3337" height="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjynBeb_pYyW8aOirFPdO3QS2AYrkM2g-jqxjw2KVOlAuUMMIwHV68UjwbaPEmuuzatD22B-uayxv702EYsaESGBe7mHCGv7c6eCU0q4ysrXQjJfK7G3V6fpHTW_k5LSQHMwp93K2RxONgUs-81_RhKRoPck2lbz3iF2T1TLxBm8oQQTsMjpf2XdyQ3xtM/w640-h258/how-nvidia-dlss-5-works.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;DLSS 5 doesn't touch geometry, it doesn't touch the textures...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;If you look at the actual technical information provided by Nvidia &lt;a href="https://www.nvidia.com/en-us/geforce/news/dlss5-breakthrough-in-visual-fidelity-for-games/"&gt;in their 
release&lt;/a&gt; and the short summary image above, you can see that DLSS neural rendering is simply a more advanced, neural-net-guided lighting filter. It's a more advanced version of &lt;a href="http://enbdev.com/"&gt;ENB&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href="https://reshade.me/"&gt;Reshade&lt;/a&gt;&amp;nbsp;because it is fed the data from the frame (and prior frames)&amp;nbsp;&lt;i&gt;just like the DLSS upscaler and ray reconstruction&lt;/i&gt;. It's generating about as much data as (or maybe slightly less than) those two technologies do and that's why it's stable - that's why it's not the uncontrollable mush that real Gen AI is.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;DLSS 5 is Machine Learning (ML).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;From this perspective it's obvious why this is placed under the umbrella of "DLSS" instead of having its own technology nomenclature.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The big problem has been that Jensen's priority is not gaming or communicating to gamers: his priority is keeping the AI investment economy chugging along for as long as he can. Through that lens, his comments and messaging make complete sense.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Gen AI has no commercial product - it is not making money. By re-labelling this tech as Gen AI, Jensen gives himself something to point to: "Hey, we're shipping this! Look, it's a real product! Gen AI has real applications! It's not a bubble - we have infinite growth!" He's talking to investors, the market, Wall Street, etc.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;ChatGPT&amp;nbsp;&lt;i&gt;isn't&lt;/i&gt;&amp;nbsp;Gen AI. It's an LLM. Yes, it can be linked to Gen AI through agentic means but it's not Gen AI. Generative AI&amp;nbsp;&lt;i&gt;can't&amp;nbsp;&lt;/i&gt;be hand-crafted or controlled. If it could be, it would be stable, repeatable, and not require multiple generations to get what you want as an output.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Look at the flow description of DLSS 5 - it's literally a post-processing effect. It needs the colour, the textures, the heightmaps, buffers, etc. to be in place for it to work. &lt;a href="https://www.digitalfoundry.net/features/nvidias-new-dlss-5-brings-photo-realistic-lighting-to-rtx-50-series"&gt;The higher the quality of those things, the better it will be&lt;/a&gt;. &lt;a href="https://x.com/kiaran_ritchie/status/2033704806079205471?s=20"&gt;What some people are freaking out about&lt;/a&gt; &lt;a href="https://x.com/SebAaltonen/status/2033900101656060261?s=20"&gt;is not on the cards&lt;/a&gt;, at all... 
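To make that flow concrete, here is a minimal, illustrative sketch (Python-style pseudocode with hypothetical names - my reading of Nvidia's description, not their actual API) of the kind of per-frame pass being described:&lt;pre&gt;
# Illustrative sketch only (hypothetical names, not Nvidia's API): the model
# re-lights data the engine has already produced, like the DLSS upscaler does.
def neural_render_pass(frame, history, model):
    inputs = {
        "color": frame.color_buffer,          # rasterised/ray-traced colour
        "motion": frame.motion_vectors,       # per-pixel motion for temporal reuse
        "gbuffer": frame.gbuffer,             # normals, depth, materials, etc.
        "history": history.previous_outputs,  # prior frames keep the output stable
    }
    relit = model.infer(inputs)               # ML-guided lighting/material filter
    history.previous_outputs.append(relit)    # feed back for frame-to-frame consistency
    return relit                              # geometry and textures are untouched
&lt;/pre&gt;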
It's&amp;nbsp;&lt;i&gt;literally&lt;/i&gt;&amp;nbsp;post-processing at the frame level.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;"DLSS 5 takes a game's color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame."&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The problem is that not only are these lies and misdirections from Jensen hurting the brand in consumers' eyes, they're creating confusion as to what, exactly, this technology is. Twitter has been a cesspool of people hating on this technology simply because he used the words "generative AI"; people are completely unable to tell that &lt;a href="https://x.com/Avroveks/status/2033742885137334681?s=20"&gt;the detail is&lt;/a&gt; &lt;a href="https://x.com/synaesthesiajp/status/2034002081103622484?s=20"&gt;being brought out&lt;/a&gt; &lt;a href="https://x.com/nimlot26/status/2033996275947917774?s=20"&gt;of the underlying&lt;/a&gt; &lt;a href="https://x.com/CMoney4Ever/status/2033773976351121652?s=20"&gt;models and textures&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Worse still, it's exposing how poorly people understand lighting. I've been an amateur photographer for 20 years and I worked at a newspaper tech desk, editing photos for print, for 2-3 years. I'm not an expert but I have a lot of grounding in &lt;a href="https://x.com/hoytehablode/status/2033817842739454388?s=20"&gt;how things look and work&lt;/a&gt; from a lighting standpoint, and the images shown of DLSS 5, while not perfect, are &lt;u&gt;&lt;i&gt;more&amp;nbsp;&lt;/i&gt;&lt;/u&gt;&lt;i&gt;&lt;u&gt;correct&lt;/u&gt;&lt;/i&gt;&amp;nbsp;in that regard. I've seen many people nitpicking the images and most of them are wrong or are not considering all of the facts of each scene.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I could spend an inordinate amount of energy trying to convince them by refuting &lt;a href="https://x.com/EmilyAYoung1/status/2034026759075860859?s=20"&gt;some or all of their claims&lt;/a&gt; but it's not worth it for me, plus I have things going on in my life that demand more attention. At the end of the day, we will see proper implementations towards the end of the year and then the proof will be in the pudding. 
So, I won't waste time arguing about half finished tech demos &lt;a href="https://www.reddit.com/r/hardware/comments/1rvwube/dlss_5_fixing_it_in_post/?utm_source=share&amp;amp;utm_medium=web3x&amp;amp;utm_name=web3xcss&amp;amp;utm_term=1&amp;amp;utm_content=share_button"&gt;which can easily be fixed&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&amp;nbsp;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Coming back to the point of this blogpost, Jensen is lying in his public statements in order to keep the AI bubble from popping, the stock price of Nvidia high, and his huge bonuses rolling in. He is saying things that are contradicting the technical information released by Nvidia on their website and in-person to press and users at the GTC reveal event.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Even looking at the available information allows people with some technical background to decipher what DLSS 5 actually is. However, many are ignoring the information and not using their brains. I think this technology has a lot of potential - just like how ReShade and ENB have been incredible tools for enhancing the look of older games. We now have a superior tool that can be more deeply integrated into game engines for developers to use that enhances the available art in the game - and requires higher quality, more detailed art for better results.&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2026/03/the-dlss-5-lie.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUF3jJRGYkDlJkhxE7p8v5hoBvyQ3fytM-eihWNBabWz8qrhYtTbiqp_5_jSPquAph5hHpxh1_sXUM9yGvsERxLr3cUFZ3WClYT2H21eC2kA8-RN9em4d8rxCI3L4XTJfZRC7fpRo3NwXy816ZoaZnYKF2mkotV8YF4dX18qWEpucOJbm827y92jDqzhk/s72-w640-h360-c/Title.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-3767284937738834503</guid><pubDate>Sat, 07 Mar 2026 17:16:00 +0000</pubDate><atom:updated>2026-03-07T17:27:48.824+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analyse this</category><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Analyse This: The Rendering Cost of AI...</title><description>&lt;div style="text-align: left;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6gS-K5otacpNRnwtt57fd2zMGsfUcDNQgYixcaskjIgzPI0OuAiKuEweFpluGoC4jRFXiyMRo9r1cVUzuuhQ7_KvxJssv0G_mRHEZ4Ae9JUyzcBlL5ibI0gQYWPni2I2iasUamnuv_wz7ho8NzoXhspCqJ8hncnZTXDHy-7rYtvpZ2318b76CGSomstI/s1344/Title.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="768" data-original-width="1344" height="366" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6gS-K5otacpNRnwtt57fd2zMGsfUcDNQgYixcaskjIgzPI0OuAiKuEweFpluGoC4jRFXiyMRo9r1cVUzuuhQ7_KvxJssv0G_mRHEZ4Ae9JUyzcBlL5ibI0gQYWPni2I2iasUamnuv_wz7ho8NzoXhspCqJ8hncnZTXDHy-7rYtvpZ2318b76CGSomstI/w640-h366/Title.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Unironically, I used AI for this...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The interest surrounding the rumours of the next gen consoles has been slowly boiling over the last few months. Now, with &lt;a href="https://x.com/asha_shar/status/2029645713962156149?s=20"&gt;new confirmations from Xbox's newly-minted CEO&lt;/a&gt;, Asha Sharma we have a firmer grasp of what the next gen holds for one side of the equation. On the other side, Playstation 6 also has a lot of hardware specification leaks but one thing still confuses me: why all the memory?!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In this post, I'm going to look at the logic of having a lot of RAM in the consoles and why, to my mind, the leaked specs for the memory configurations &lt;i&gt;just don't make sense&lt;/i&gt;...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZzladCsWw4GPL9i_jNun6SdmKoEGzZPqu3ZGOU1h2GicgH4oY5QZCY8wdn9I8vPSqEAeo8nSGkgtVi9w8xOPBXY3y2sLXvmM2DvNYvCfk3acLyxYMWh54Jc-_P7-mEFYtM8TDn_TG-8Fy2dG3qFx_HotTQmF_PKWlaA7ZOblTrvFIzEf5ZR3fzJI9zzg/s1920/GDDR7%20roadmap%20-%20Micron.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZzladCsWw4GPL9i_jNun6SdmKoEGzZPqu3ZGOU1h2GicgH4oY5QZCY8wdn9I8vPSqEAeo8nSGkgtVi9w8xOPBXY3y2sLXvmM2DvNYvCfk3acLyxYMWh54Jc-_P7-mEFYtM8TDn_TG-8Fy2dG3qFx_HotTQmF_PKWlaA7ZOblTrvFIzEf5ZR3fzJI9zzg/w640-h360/GDDR7%20roadmap%20-%20Micron.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;As of 2023, 24+ Gbit modules were not on the roadmap... (&lt;a href="https://www.techpowerup.com/311794/micron-updates-roadmap-promises-32-gbit-ddr5-and-gddr7-for-2024#:~:text=During%20yesterday%27s%20HBM3%20Gen2%20memory,96%20GB/s%20per%20chip."&gt;TechPowerUp&lt;/a&gt;)&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Spinning Around...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Xbox have come out swinging after the negative PR surrounding Phil Spencer's and Sarah Bond's exit from the division. 
Asha had only given some small murmurings until a tweet yesterday confirmed that Microsoft is going for a system which will be both a console and a PC. This is a very interesting move and something which people have been calling for since back when the Xbox Series consoles were on their way.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The sad thing is that this endeavour has obviously been in the works under the leadership of Phil and Sarah - and neither will see the credit for such a bold project.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote class="twitter-tweet"&gt;&lt;p dir="ltr" lang="en"&gt;Great start to the morning with Team Xbox, where we talked about our commitment to the return of Xbox including Project Helix, the code name for our next generation console.&lt;br /&gt;&lt;br /&gt;Project Helix will lead in performance and play your Xbox and PC games. Looking forward to chatting about… &lt;a href="https://t.co/Xx5rpVnAZI"&gt;pic.twitter.com/Xx5rpVnAZI&lt;/a&gt;&lt;/p&gt;— Asha (@asha_shar) &lt;a href="https://twitter.com/asha_shar/status/2029645713962156149?ref_src=twsrc%5Etfw"&gt;March 5, 2026&lt;/a&gt;&lt;/blockquote&gt; &lt;script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"&gt;&lt;/script&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Getting back to the discussion at hand, Tom over at Moore's Law is Dead is &lt;a href="https://www.youtube.com/watch?v=VQxBW7JgkHw"&gt;talking about how the higher specs of the next gen Xbox (the Magnus APU) will play to the strengths of that focus&lt;/a&gt; and I can't really deny that - especially with the included &lt;a href="https://en.wikipedia.org/wiki/Neural_processing_unit"&gt;Neural Processing Unit (NPU)&lt;/a&gt;&amp;nbsp;that will be able to run some AI models independently from the GPU compute infrastructure. However, Tom himself &lt;a href="https://youtu.be/X_pjrZQDerw?si=yilRBEm8O8j1fKjp&amp;amp;t=1103"&gt;stated that it supports up to 48 GB&lt;/a&gt; of GDDR7 - which would require 32 Gbit / 4 GB modules.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, I have been unable to find any mention of such a capacity from two of the three GDDR7 manufacturers and, as far as I can tell, only Micron has listed &lt;a href="https://videocardz.com/newz/first-generation-of-gddr7-graphics-cards-sticking-to-16gbit-2gb-modules-3gb-on-roadmaps"&gt;an updated roadmap&lt;/a&gt; that shows anything above 24 Gbit / 3 GB modules... and there have been no announcements that manufacturing of these products has commenced, aside from &lt;a href="https://nvidianews.nvidia.com/news/nvidia-unveils-rubin-cpx-a-new-class-of-gpu-designed-for-massive-context-inference"&gt;Nvidia's announcement&lt;/a&gt; &lt;a href="https://newsletter.semianalysis.com/p/another-giant-leap-the-rubin-cpx-specialized-accelerator-rack"&gt;of a product&lt;/a&gt; which is &lt;i&gt;expected&lt;/i&gt;&amp;nbsp;to launch at the end of 2026. So, that leaves us with the possibility of 36 GB of RAM as a maximum on the next gen Xbox console.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Why 36 GB? Well, it's simply because no recent console has ever used the cutting-edge memory tech. 
&lt;a href="https://www.techpowerup.com/gpu-specs/playstation-4-gpu.c2085"&gt;Playstation 4&lt;/a&gt; was using 5.5 Gbps (&lt;a href="https://www.techpowerup.com/gpu-specs/xbox-one-gpu.c2086"&gt;Xbox One&lt;/a&gt; was even lower spec!) GDDR5 when a year prior, Nvidia graphics cards were &lt;a href="https://www.techpowerup.com/gpu-specs/geforce-gtx-680.c342"&gt;already using 6 Gbps modules&lt;/a&gt;. The mid-gen refreshes of the &lt;a href="https://www.techpowerup.com/gpu-specs/xbox-one-x-gpu.c2977"&gt;One X&lt;/a&gt; and &lt;a href="https://www.techpowerup.com/gpu-specs/playstation-4-pro-gpu.c2876"&gt;PS4 Pro&lt;/a&gt;&amp;nbsp;were also lagging behind the already released &lt;a href="https://www.techpowerup.com/gpu-specs/geforce-gtx-960.c2637"&gt;GTX 9&lt;/a&gt; and &lt;a href="https://www.techpowerup.com/gpu-specs/geforce-gtx-980.c2621"&gt;10 series&lt;/a&gt;.&amp;nbsp;&lt;a href="https://www.techpowerup.com/gpu-specs/playstation-5-gpu.c3480"&gt;Playstation 5&lt;/a&gt;&amp;nbsp;and &lt;a href="https://www.techpowerup.com/gpu-specs/xbox-series-x-gpu.c3482"&gt;Xbox&lt;/a&gt; &lt;a href="https://www.techpowerup.com/gpu-specs/xbox-series-s-gpu.c3683"&gt;Series&lt;/a&gt; were using 14 Gbps GDDR6 when two years earlier Nvidia cards were using the same memory in the &lt;a href="https://www.techpowerup.com/gpu-specs/geforce-rtx-2080.c3224"&gt;RTX 20 series&lt;/a&gt; (with &lt;a href="https://www.techpowerup.com/gpu-specs/radeon-rx-5700-xt.c3339"&gt;RDNA 1 cards&lt;/a&gt; following a year later in 2019). Even the &lt;a href="https://www.techpowerup.com/gpu-specs/playstation-3-gpu-90nm.c3405"&gt;Playstation 3&lt;/a&gt; was using 1300 Mbps and the &lt;a href="https://www.techpowerup.com/gpu-specs/xbox-360-gpu-90nm.c1919"&gt;Xbox 360&lt;/a&gt; was using 1400 Mbps GDDR3 when the &lt;a href="https://www.techpowerup.com/gpu-specs/geforce-7800-gtx-512.c144"&gt;7800 GTX 512&lt;/a&gt; was using 1600 Mbps a year earlier.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, for people to assume that a memory density which was not officially announced that I can find, or referenced outside of &lt;a href="https://www.jedec.org/news/pressreleases/jedec-publishes-gddr7-graphics-memory-standard"&gt;a technical paper from JEDEC&lt;/a&gt; - nor from the big three (Micron, SK Hynix, and Samsung) will suddenly appear on the market for a mass-produced consumer good is pretty crazy! Hell, we've been hearing about 3 GB GDDR7 modules for years and we still don't have them on consumer GPUs! (&lt;a href="https://videocardz.com/newz/nvidia-rtx-5050-with-9gb-gddr7-memory-to-launch-around-computex"&gt;Though it seems that's about to change...&lt;/a&gt;)&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That likely means we have at least a two year gap between spin-up and datacentre exclusivity before they appear on high-end, specialised consumer products (i.e. not consoles). 
There simply won't be the bulk production of the part, nor secondary supplier availability to supply a console release.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1VDC3LCGfRjKaP5-U0U7YJBs5NJVa6ZN0FWl7afbKLIzVpVyiX8f9btm_OdmztjrIrMxq1jh1xxWdxSFvVyd_zN8L6c_01zjP9M883mSKzx0oYr56GmZuLUAjqcmIIGe3A6s093olXc8tsTf3WmhZ7RXJeZNKh7TIRz-FTL9DLOavYGKBmj4tZl_PTIE/s793/Memory%20configs.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="357" data-original-width="793" height="288" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1VDC3LCGfRjKaP5-U0U7YJBs5NJVa6ZN0FWl7afbKLIzVpVyiX8f9btm_OdmztjrIrMxq1jh1xxWdxSFvVyd_zN8L6c_01zjP9M883mSKzx0oYr56GmZuLUAjqcmIIGe3A6s093olXc8tsTf3WmhZ7RXJeZNKh7TIRz-FTL9DLOavYGKBmj4tZl_PTIE/w640-h288/Memory%20configs.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;My appraisal of the potential memory configurations of the next gen consoles with a comparison for two currently available...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Written in the Stars...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Moving over to Playstation, Tom shared that Sony &lt;a href="https://youtu.be/mmAX6XFQsA8?si=VhIWaonkA-heuFjC&amp;amp;t=938"&gt;could have 30 or 40 GB RAM&lt;/a&gt;&amp;nbsp;and, more recently, he provided the caveat that &lt;a href="https://www.youtube.com/live/VQxBW7JgkHw?si=EjULc51TOFmHLZVH&amp;amp;t=4062"&gt;nothing is decided yet for both consoles and that they could have less&lt;/a&gt;. That's fair enough - and is often something which is left out of "leak" information. 
They're all too often provided with the certainty of an AI chatbot.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Here's where I begin to have a problem, though: &lt;u&gt;&lt;b&gt;most consoles in recent memory haven't used a clamshell design&lt;/b&gt;&lt;/u&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Seriously, let's look at: &lt;a href="https://www.ifixit.com/Teardown/PlayStation+3+Teardown/1260"&gt;PS3&lt;/a&gt;, &lt;a href="https://www.ifixit.com/Teardown/Xbox+360+Teardown/72725"&gt;Xbox 360&lt;/a&gt;, &lt;a href="https://www.ifixit.com/Teardown/PlayStation+4+Pro+Teardown/72946"&gt;PS4 Pro&lt;/a&gt;, &lt;a href="https://www.ifixit.com/Teardown/Xbox+One+Teardown/19718"&gt;Xbox One&lt;/a&gt;, &lt;a href="https://www.ifixit.com/Teardown/Xbox+One+X+Teardown/99609"&gt;Xbox One X&lt;/a&gt;&amp;nbsp;and &lt;a href="https://www.ifixit.com/Teardown/Xbox+One+S+Teardown/65572"&gt;One S&lt;/a&gt;, &lt;a href="https://www.ifixit.com/Teardown/PlayStation+5+Teardown/138280"&gt;PS5&lt;/a&gt;, Xbox &lt;a href="https://www.ifixit.com/Teardown/Xbox+Series+X+Teardown/138451"&gt;Series X&lt;/a&gt; &lt;a href="https://www.ifixit.com/products/xbox-series-s-motherboard"&gt;and S&lt;/a&gt;, and &lt;a href="https://www.ifixit.com/Guide/PS5+Pro+Chip+ID/178952"&gt;PS5 Pro&lt;/a&gt;, &lt;a href="https://www.ifixit.com/News/57101/steam-deck-teardown"&gt;Steam Deck&lt;/a&gt;, (I believe all Aya Neos, &lt;a href="https://youtu.be/OtHDejr7vx0?si=cXcT2DKRMdmyQf02&amp;amp;t=186"&gt;but here's one&lt;/a&gt;), &lt;a href="https://www.ifixit.com/News/77167/rog-ally-teardown-hot-hardware-held-back"&gt;ROG Ally&lt;/a&gt;&amp;nbsp;and &lt;a href="https://youtu.be/7OnnTDfwAls?si=ydV1_-RyrmusWlCJ&amp;amp;t=260"&gt;Ally X&lt;/a&gt;, &lt;a href="https://www.ifixit.com/Teardown/Nintendo+Switch+Teardown/78263"&gt;Switch&lt;/a&gt; and &lt;a href="https://www.ifixit.com/Guide/Nintendo+Switch+2+Chip+ID/188725"&gt;Switch 2&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The only modern console to feature a clamshell design was the base Playstation 4 - and that's likely an outcome of the late specification change to double the memory capacity - &lt;a href="https://www.playstationlifestyle.net/2013/06/12/if-you-go-with-4gb-of-gddr5-ram-on-ps4-you-are-done-said-randy-pitchford/"&gt;a la Randy Pitchford&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There are three reasons (that I'm aware of) why clamshell design isn't generally favoured and implemented:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;Complexity of circuit board design&lt;/li&gt;&lt;li&gt;Complexity of device design&lt;/li&gt;&lt;li&gt;Cost to implement&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjpsI0erjW9k4qEnK1SdCy38pYezj4AVgyKYd0F0fIQlt3DcrBK3fxQh0ir3UeAwlczrc_WaE1Fg-EloxdWLi8Qj5koFAajAb0O-04ehJQIDLtKyjH02bbAvwz7XVHzqaSSCla3oMB9UPn3zVdMm5O8m4197LiorBVWIcqyiNgLhyTuLTHxiwvWkMEeZ0/s551/Memory%20clamshell.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" 
data-original-height="410" data-original-width="551" height="476" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjpsI0erjW9k4qEnK1SdCy38pYezj4AVgyKYd0F0fIQlt3DcrBK3fxQh0ir3UeAwlczrc_WaE1Fg-EloxdWLi8Qj5koFAajAb0O-04ehJQIDLtKyjH02bbAvwz7XVHzqaSSCla3oMB9UPn3zVdMm5O8m4197LiorBVWIcqyiNgLhyTuLTHxiwvWkMEeZ0/w640-h476/Memory%20clamshell.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Looking at the circuit boards of most modern consoles (the &lt;a href="https://www.ifixit.com/Guide/PS5+Pro+Chip+ID/178952"&gt;PS5 Pro&lt;/a&gt; and &lt;a href="https://www.ifixit.com/Teardown/Xbox+Series+X+Teardown/138451"&gt;Series X&lt;/a&gt; pictured here), surface mounted devices are on the rear of memory module locations...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The first point is pretty self explanatory - you need to design the motherboard differently to allow for clamshell connections. This may require &lt;a href="https://www.quantum-controls.com/how-many-layers-does-your-pcb-need/"&gt;more layers to avoid signal issues&lt;/a&gt;, or even to allow the &lt;a href="https://docs.amd.com/r/en-US/ug863-versal-pcb-design/Clamshell-Topology"&gt;extra routing&lt;/a&gt; which must take place to account for the different spatial locations of the DRAM pins.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The device (i.e. the console) may also now have a more complex design - especially with regards to thermal management. Having GDDR7 chips on top of each other may result in a thermal trap which could lead to long-term reliability issues/higher failure rates. Thus, it's likely that more heatsink design must take place - and that could potentially affect the physical design and shape of the product!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;These aspects cause a higher cost to both design and manufacture the circuit board - something which a console manufacturer will be keen to keep at a minimum! Additionally, purchasing more memory modules, in the middle or tail-end of a RAM shortage is not going to be favourable from a procurement standpoint, nor a pure cost standpoint. Fewer modules are better, in general! Along with this, there are additional costs related to the testing of the memory, once mounted on the board, along with having to address potential issues with cooling the chips on both sides of the board instead of just using one point of contact (either on the chip or underneath it).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;All in all, these are not last minute additions to a project.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, in my read of the situation, either Microsoft and Sony have already accounted for increased numbers of chips on a single side of the circuit board or they have already designed the board and cooling solutions to allow for clamshell arrangements of the memory modules...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The latter seems unlikely to me. The former? 
Well, let's move onto that.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;What's the Point..?&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Here's where I get into the logic of having a lot of RAM on the consoles because &lt;i&gt;I just can't get past the "WHY?"&lt;/i&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Why would you have such over-provision of memory?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Consoles are historically very price sensitive devices but even beyond that, mass produced, commercial devices are all very cost optimised - &lt;a href="https://en.wikipedia.org/wiki/Tata_Nano#Safety"&gt;sometimes to&lt;/a&gt;&amp;nbsp;&lt;a href="https://en.wikipedia.org/wiki/Xbox_360_technical_problems#Scratched_discs"&gt;their detriment&lt;/a&gt;. So, going on this logic, the amount of memory must be utilised for &lt;i&gt;something&lt;/i&gt;&amp;nbsp;and not present, &lt;i&gt;just because&lt;/i&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisz9Z_nH0rriJKqxWSPVVOi4nfit7IobW9rcPvtKm8Hl_urxAR5iH2tvAsDlAIRLSOZrvoyUIiiZkAGn12-fo98vdh_ANWwFdJu1JymoFOwxnKXSfu5qx2L7gdZ1HLuMajzDJOUXu-to7PzjkI9EM8g2Bid5JMT2yqGxj79NsB1hVPDqPvVyHZ0TWd8aU/s577/RE%209_VRAM.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="547" data-original-width="577" height="606" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisz9Z_nH0rriJKqxWSPVVOi4nfit7IobW9rcPvtKm8Hl_urxAR5iH2tvAsDlAIRLSOZrvoyUIiiZkAGn12-fo98vdh_ANWwFdJu1JymoFOwxnKXSfu5qx2L7gdZ1HLuMajzDJOUXu-to7PzjkI9EM8g2Bid5JMT2yqGxj79NsB1hVPDqPvVyHZ0TWd8aU/w640-h606/RE%209_VRAM.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;A modern, graphics-heavy title is typically using 15 - 18 GB VRAM when using all modern features &lt;i&gt;at 4K resolution&lt;/i&gt;...&lt;i&gt;&amp;nbsp;&lt;/i&gt;(&lt;a href="https://www.techpowerup.com/review/resident-evil-requiem-performance-benchmark/9.html"&gt;TechPowerUp&lt;/a&gt;)&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Games are typically using around 8 GB system RAM combined with 8-10 GB VRAM on a PC and approximately 12-13 GB of memory on the consoles. 
This puts even the maximum game requirements at 13 - 18 GB of memory for a next gen title, since next gen consoles won't be rendering games at native 4K; they will be upscaling from a lower resolution - such as 1080p (or lower).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Checking the recently released Resident Evil Requiem's VRAM usage on an RX 9060 XT 16 GB, at 1080p with max quality settings and RT High, the game was averaging around 11 GB of VRAM both at native resolution and when using upscaling at the balanced setting. There are other games which use these advanced features while using even less VRAM - e.g. &lt;a href="https://www.techpowerup.com/review/doom-the-dark-ages-performance-benchmark/8.html"&gt;Doom The Dark Ages&lt;/a&gt; and &lt;a href="https://www.techpowerup.com/review/assassin-s-creed-shadows-performance-benchmark/8.html"&gt;Assassin's Creed Shadows&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, if a theoretical next gen console could get away with remaining on 16 GB VRAM, or even 18 GB for a modest increase, why would Sony and Microsoft be looking at double that quantity?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The big explanation appears to be "AI". But, again, this is where I become skeptical.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;"AI" doesn't just &lt;i&gt;happen&lt;/i&gt;. It either needs to be contracted out to a cloud-based service - in which case games wouldn't be able to rely on such a feature because of 1) latency and 2) connectivity issues - &lt;b&gt;OR&lt;/b&gt;&amp;nbsp;it needs to be run locally.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Local models are generally expensive - both from a computational standpoint and in terms of their memory and storage footprint. Let's leave aside the storage issue for the time being because, in the grand scheme of things, it's irrelevant to this discussion. The memory footprint is a bigger deal, though.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Image generation is unlikely to be "useful" during gameplay, so I think we can discount that (though I will include such a scenario in a moment), but AI models might conceivably be used to generate dynamic text or voice content.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;LLMs have&amp;nbsp;&lt;a href="https://apxml.com/tools/vram-calculator"&gt;a variety of models available&lt;/a&gt;&amp;nbsp;which&amp;nbsp;&lt;a href="https://apxml.com/models/qwen3-0-6b"&gt;can require anywhere&lt;/a&gt;&amp;nbsp;&lt;a href="https://www.microcenter.com/site/mc-news/article/best-local-llms-8gb-16gb-32gb-memory-guide.aspx"&gt;from 2 GB&lt;/a&gt;&amp;nbsp;&lt;a href="https://medium.com/@jameshugo598/the-2026-local-llm-hardware-guide-surviving-the-ram-crisis-fa67e8c95804"&gt;to 12+ GB&lt;/a&gt;. Typically, larger models are more accurate but we don't expect a 4+ billion parameter model to be shoehorned into a console.... 
do we?!&amp;nbsp;For voice generation, finding numbers was a bit more difficult but it seems that &lt;a href="https://dev.to/czmilo/glm-tts-complete-guide-2025-revolutionary-zero-shot-voice-cloning-with-reinforcement-learning-m8m#installation"&gt;around 4-8 GB is a reasonable range&lt;/a&gt;. So, adding those together, you'd expect to be needing around another 16 - 18 GB of memory, on top of what's required for the OS and game.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, yeah. I can see why people would be expecting 30 - 40 GB on both next gen consoles.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But there's another problem...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbuVGWLVrDnTaidALlNtEeIw5SSjB16XIduXWzkmkuz2W5s8SGgru1GryXc0u7kFmXoLTGWY3QVv9ZPLqnWuI5bPMF_lxdn1EYC0qor90Fe8NEWgiDqInjJ8R5SOuFHdXOKAdqnGF3HdC7cCavFqlo1b4FdCdohEcWVIdLUGUZM4yD0z0PbGYm6qNN0j0/s753/AI%20usage.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="423" data-original-width="753" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbuVGWLVrDnTaidALlNtEeIw5SSjB16XIduXWzkmkuz2W5s8SGgru1GryXc0u7kFmXoLTGWY3QVv9ZPLqnWuI5bPMF_lxdn1EYC0qor90Fe8NEWgiDqInjJ8R5SOuFHdXOKAdqnGF3HdC7cCavFqlo1b4FdCdohEcWVIdLUGUZM4yD0z0PbGYm6qNN0j0/w640-h360/AI%20usage.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Performance numbers for a couple of games when running two types of AI in parallel...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Reality Strikes...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The problem is that "AI" isn't free. In addition to the memory, it needs both large amounts of compute as well as memory bandwidth - both of which are anathema to actually running a game on the hardware. Games, and specifically high-end effects like ray tracing, require a lot of bandwidth from the memory. To run an AI workload in parallel with the rendering workload, along with the frame draw calls and various simulation systems (e.g. collision detection, pathfinding, etc) will absolutely kill performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Even without running a game, the memory controller on the GPU was doing some serious work and the compute requirements were also really loading up the GPU core! 
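As a rough illustration of why that is (using assumed, round numbers rather than measurements): LLM decoding has to stream the model's weights out of memory for every generated token, so even a small local model claims a meaningful slice of the bandwidth the game needs for itself:&lt;pre&gt;
# Rough illustration with assumed numbers (not measurements): LLM decoding
# streams roughly the whole weight set from memory once per generated token.
def llm_bandwidth_gb_per_s(model_size_gb, tokens_per_second):
    return model_size_gb * tokens_per_second

assumed_console_bandwidth = 800.0                    # GB/s - hypothetical GDDR7 setup
chatbot = llm_bandwidth_gb_per_s(4.0, 30.0)          # 4 GB model at 30 tokens/s = 120 GB/s
share = 100.0 * chatbot / assumed_console_bandwidth  # roughly 15 percent of the bus
print(chatbot, "GB/s just for chatbot weights -", share, "percent of the assumed bus")
&lt;/pre&gt;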
These are not light workloads and they are not designed to be run concurrently with a real gaming workload.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5QInPngGCA8WWPKicQ-dP08wQsBKzmo1QsJ3jIObEqWJxn3LAYHGrLjavYqYCYneJdM5Y2nr-kT-94I2-cDlI00T8Nu5DlkWFPmKdfml1A5bmIDpnJjBk8sBHpU4PCzzVZpj3Qi1361SKkK3LCkaJprw1BFUQQPb7g7EL1skOPxl9gqgBOAgzvp186oo/s652/AI_usage.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="141" data-original-width="652" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5QInPngGCA8WWPKicQ-dP08wQsBKzmo1QsJ3jIObEqWJxn3LAYHGrLjavYqYCYneJdM5Y2nr-kT-94I2-cDlI00T8Nu5DlkWFPmKdfml1A5bmIDpnJjBk8sBHpU4PCzzVZpj3Qi1361SKkK3LCkaJprw1BFUQQPb7g7EL1skOPxl9gqgBOAgzvp186oo/s16000/AI_usage.jpg" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Memory controller saturation and compute requirements for running even simplistic applications is overwhelming...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The informal testing I performed for this blogpost showed between 15 - 45 % fps loss when using a chatbot during games where the VRAM limit wasn't exceeded, depending on the specific game. Obviously, for image generation, it was quite easy to exceed that limit but for God of War and AC:Odyssey, I was able to keep within the VRAM capacity and still saw significant losses.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As I said above, image generation won't be utilised during a game - I just can't see a useful application. However, you could envision this as a stand-in test for concurrent text generation and text-to-voice for use in a game. In that light, the performance loss is within the expected ballpark for such a nightmare scenario.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unfortunately, I wasn't able to test any text-to-speech solutions as I didn't find any that could be set up easily (considering I'm not planning on using them)...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Looking at the supposed specs of the PS6 and Next Gen Xbox, I don't think they're that far beyond what we currently have - an RX 9070 XT with an extra 10% rendering performance (considering potential clock frequency and/or power limits that can and are usually applicable for the consoles) isn't going to magically make these costs disappear.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, the Xbox supposedly has an NPU with 110 TOPs (Int 8?) 
but the &lt;a href="https://www.amd.com/en/products/graphics/desktops/radeon/9000-series/amd-radeon-rx-9060xt.html"&gt;9060 XT has double that&lt;/a&gt;, and the &lt;a href="https://www.amd.com/en/products/graphics/desktops/radeon/9000-series/amd-radeon-rx-9070xt.html"&gt;9070 XT has 3.5x that&lt;/a&gt;&amp;nbsp;- but the NPU still needs to pull from the memory bandwidth at the same time as a game is trying to render.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Imagine the game is running and all of a sudden there are dips and drops in performance due to AI being activated. It's an untenable situation. The only logical thing for developers to do would be to assume that they can only count on the lower performance available in worst case scenarios. Thereby negating the true power of the GPU in the device. That doesn't sound logical to me!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, this is testing on a low-end GPU but the principle still stands. AI will absolutely murder the available performance of both next gen consoles as described in the leaks. Therefore, I don't feel that there is a real reason for them to have huge amounts of memory installed.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If I look back up at the table I compiled and think about the timelines for availability combined with the expectations of what each console is going to be doing (probably going to be a separate post!) I don't think either console will have a clamshell design for the memory due to the issues I pointed out above. Additionally, I think 3 GB GDDR7 modules are the most likely to be used and available in sufficient quantity to satisfy mass production of both consoles.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For the Xbox, I think 18 GB would be enough to satisfy the demands of next gen games (maybe something like 14 GB for games and 4 GB for OS - even if it's going to be running PC applications as part of its remit, this quantity shouldn't pose a problem.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The PS6 wouldn't make sense regressing to 15 GB total capacity. However, if we look at the PS4 Pro and PS5 Pro, they have additional, slower memory that seems to share part of the memory bus to the main portion of memory. While it hasn't been leaked, I could see a similar situation play out for this next gen console instead of waiting for the mid-gen refresh. A console with 15 GB for games and 4 GB separate memory for the OS would work really well.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, I could be wrong about all of this - I've been wrong before when it came to speculation. I am just struggling to see why Microsoft or Sony would vastly inflate the memory capacity - especially if developers are unlikely to make use of it. After all, current gen consoles will still exist, in parallel with PC gamers, too. 
Are we really going to be expecting 20 - 30 GB games when 12 - 16 GB VRAM GPUs are the limit on PC?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Additionally to this, why would they handicap the performance of their systems by running heavy AI workloads on the system? This is the Kinect situation all over again! Didn't we learn anything from that?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We've never seen that sort of a move before in the gaming space... I'd like to see someone give a good reason why we should expect a change, now.&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2026/03/analyse-this-rendering-cost-of-ai.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6gS-K5otacpNRnwtt57fd2zMGsfUcDNQgYixcaskjIgzPI0OuAiKuEweFpluGoC4jRFXiyMRo9r1cVUzuuhQ7_KvxJssv0G_mRHEZ4Ae9JUyzcBlL5ibI0gQYWPni2I2iasUamnuv_wz7ho8NzoXhspCqJ8hncnZTXDHy-7rYtvpZ2318b76CGSomstI/s72-w640-h366-c/Title.png" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-3353273345499894488</guid><pubDate>Tue, 10 Feb 2026 13:54:00 +0000</pubDate><atom:updated>2026-02-10T13:54:08.792+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>What could Valve do with the Steam Machine...?</title><description>&lt;div style="text-align: left;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZXXsKZkzaT7OiBqF2CdVXaNkNuwSctKIpmahwVY4nbpNkwRjTukKHGnXXUEBpDBeAmCr9FCptF4_SfhsC-8VIu0cDzVfxRQgcEWdgMgVHF1Oj9hcMtnizzibpivYzPVfd8CTXVD8UPpSlUSwe9YH52lxGh_boe48ES9TtIycIR7qDKN-_DU3SC6Eo3mU/s1920/videoframe_2627_clean.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZXXsKZkzaT7OiBqF2CdVXaNkNuwSctKIpmahwVY4nbpNkwRjTukKHGnXXUEBpDBeAmCr9FCptF4_SfhsC-8VIu0cDzVfxRQgcEWdgMgVHF1Oj9hcMtnizzibpivYzPVfd8CTXVD8UPpSlUSwe9YH52lxGh_boe48ES9TtIycIR7qDKN-_DU3SC6Eo3mU/w640-h360/videoframe_2627_clean.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;I've been thinking some more about the Steam Machine's troubles and the latest summary article &lt;a href="https://arstechnica.com/gaming/2026/02/why-a-bump-to-700-could-be-a-death-sentence-for-the-steam-machine/"&gt;from Ars Technica&lt;/a&gt; has me thinking about what their options might be.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, let's explore the possible scenarios...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br 
/&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If all of the issues surrounding RAM and SSD prices are pushing up the unit price to unsellable levels then, as I see it, Valve has two possible ways out, depending on what has already occurred behind the scenes.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We now know that both the SSD and RAM are upgradable, meaning that &lt;a href="https://store.steampowered.com/news/group/45479024?emclan=103582791475000432&amp;amp;emgid=625565405086220583"&gt;the RAM takes the SODIMM form factor&lt;/a&gt;.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This actually complicates things for Valve compared to companies like Sony and Microsoft because they are buying the modules directly to attach them to their custom circuit boards, whereas Valve needs to purchase a finished product. That's two levels removed!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;First their supplier/suppliers need to compete to get the modules from some combination of Micron, SK Hynix, and/or Samsung. Then they need to produce the DIMMs, taking into account binning to produce various different qualities, and &lt;i&gt;then&lt;/i&gt; Valve needs to compete with other customers (typically OEM laptop and mini PC suppliers), along with supply that is meant for the direct to consumer market, i.e. us filthy peasants!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That just makes getting the physical modules more difficult in the first place - and a similar mechanic is occurring for the SSDs, too. 
We don't know what form factor that will be, but it could be a 2230 m.2 device*, given the relatively small size of the Steam Machine and potential cost savings due to the associated economies of scale that having an overlap with the already existing Steam Deck would provide**.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;*Yes, we know from Valve's post that the machine supports both 2230 and 2280 m.2 devices (why not 2242?!)&lt;/b&gt;&lt;/span&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;**At least for the 512 GB version...&lt;/span&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;p style="font-weight: bold;"&gt;&lt;/p&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Given that we're having this conversation at all, it seems that AMD are, or will be, providing further quantities of the CPU and GPU in the Steam Machine for Valve to assemble with the circuit board.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, it's unclear whether this future quantity will be for continued maintenance of stock to sell through to the public or if it is delivery of the already agreed initial quantity Valve and AMD signed on for.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Since AMD usually sells the GDDR with the GPU core, I think we can assume that this is not the issue for Valve in terms of procurement difficulty - though AMD might want to renegotiate the price of the kit...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXXfv0kZIKgNgxOGaj_i4BAsOQQ_NPuy6JDLBncxpqELldMeErZ1gpc0JGyvURjQphFAI8wZVM6cE4CbXc0oGwCk91U4DJNM1Vo4Vnu_C-nbOpcDaL7ARf1vFb-apB88R9RJ6gdp87Y35I_WOrtTKKs77jvvygHvEPDgQhXoWhWqPeYS1G90pqHTMGIvk/s1920/videoframe_23052_clean.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXXfv0kZIKgNgxOGaj_i4BAsOQQ_NPuy6JDLBncxpqELldMeErZ1gpc0JGyvURjQphFAI8wZVM6cE4CbXc0oGwCk91U4DJNM1Vo4Vnu_C-nbOpcDaL7ARf1vFb-apB88R9RJ6gdp87Y35I_WOrtTKKs77jvvygHvEPDgQhXoWhWqPeYS1G90pqHTMGIvk/w640-h360/videoframe_23052_clean.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The LED indicator strip on the front of the device is really cool...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;That distinction makes a difference because it defines the outcomes I theorise below:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span 
style="color: #274e13;"&gt;First scenario:&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If Valve has already procured some quantity of RAM modules for assembly of the Steam Machines, then the price for those first units is, or could be, set.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, with a highly volatile memory shortage, the smart strategy could be to not play that game.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In this scenario, Valve makes the 2026 version of the Steam Machine a limited run - not a truly volume product. The price is set by what they have in stock and when stock runs out, it's done.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Valve doesn't receive extra units of the CPU and GPU. Though, in reality, I would say that they would be okay keeping the CPU for the next phase of this scenario as we see that CPU scaling is not necessarily the deciding factor at 1440p and 4K resolutions and the gen-on-gen advances for CPU are relatively minor - a decent modern CPU will last a gamer for multiple GPU upgrades...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Valve then reassesses the situation in 2027 and releases the Steam Machine '27 or '28. This would be a new run with newly sourced RAM, SSDs and, potentially, GPU.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The beauty of this scenario is that it allows Valve to weather out the memory storm, improve the situation for the GPU and actually release a product they can sell.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Second scenario:&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If Valve hasn't already procured the RAM modules, or if they have an imbalance in the CPU/GPU dies to DDR5 DIMMs, then they could go down a route floated by Tom over at &lt;a href="https://youtu.be/v1JN8nCD1JM?si=Qw1HOe807n4kDjr4"&gt;Moore's Law Is Dead&lt;/a&gt;&amp;nbsp;whereby they sell all or part of the Steam Machine inventory without storage or memory.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This allows Valve an escape route from already bought-up and depreciating inventory whilst still getting the hardware out there.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Thinking Things Through...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The first scenario really depends on whether the GPU is provided fully furnished with the GDDR6 and whether both the CPU and GPU are provided as a single volume shipment or as part of an already pre-agreed, ongoing manufacturing run at TSMC 
and whether that agreement with AMD could be re-negotiated, or not.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The benefit to Valve of scenario 1 is that they will manage to sell what they produce - i.e., having the product in a limited quantity (say, a few million at most) means that they will be able to find consumers willing to pay virtually any price - within reason - to own a piece of gaming history; to support Valve and Linux-centric gaming; etc. They get to release the product, receive data from a large-enough install base and...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There will be no stock left lying around.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The potential negatives to Valve are the cost of development not being recouped, other component prices becoming greater due to reduced order numbers, and also some negative press (though I think this would be minor) when they run out of stock...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Pairing the existing CPU with a new GPU with more VRAM capacity in 2027 or 2028 could also bring huge benefits but would require new development for the motherboard and heatsink design modifications. This would be a more capable device and would be able to reach 1440p upscaled to 4K with higher quality settings. I'm struggling to see a downside to this other than the added cost of development.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Moving on...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Personally, I find the second scenario unlikely because it negates the point of the Steam Machine and doesn't grow Valve's market or mindshare. 
A barebones Steam Machine is not equivalent to a barebones pre-built PC or laptop.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Valve isn't &lt;i&gt;really&lt;/i&gt;&amp;nbsp;selling the hardware, they're attempting to sell the ecosystem and that conceptual difference means they are likely to make different decisions to companies whose sole purpose is to move software agnostic hardware bundles.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There are also implications for Valve's support team(s): shipping half-finished hardware might lead to more support-related queries, along with potential claims for refunds if the user is unable to manage the hardware and software installation process.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There are &lt;i&gt;absolutely&lt;/i&gt; benefits to shipping a product fully formed.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghGn7-84ObWYUBSq24U5VVH3x7ExgfxL_EVrkTFvlhSFXUv9g1sHMNNRVnISw7GDxNXtq1cJYoD_azDhSQxGE7EciUa0phhyphenhyphen1TEOAXjhMtW8XxdsSFQ2L0sCHjerLfAxaXHi_aPnzOlaWWErQQjcxSHs2CdGxEYlKx1NMSd9fsF6RwOk1i6ptFnht0INM/s1920/videoframe_18042.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghGn7-84ObWYUBSq24U5VVH3x7ExgfxL_EVrkTFvlhSFXUv9g1sHMNNRVnISw7GDxNXtq1cJYoD_azDhSQxGE7EciUa0phhyphenhyphen1TEOAXjhMtW8XxdsSFQ2L0sCHjerLfAxaXHi_aPnzOlaWWErQQjcxSHs2CdGxEYlKx1NMSd9fsF6RwOk1i6ptFnht0INM/w640-h360/videoframe_18042.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Valve seems a little cursed when it comes to the Steam Machine...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I think Valve do have a few avenues out of this situation but there is no clean escape available to them. Luckily, they are propped up by the other parts of their business so a short term financial loss on one product won't lead to their untimely demise...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The question will be which contracts are locked-in and immutable, which can be renegotiated? 
What hardware is already in-hand, what is on order, and what is already paid for?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://hole-in-my-head.blogspot.com/2025/12/analyse-this-steam-machines-specs-are.html"&gt;I was quite underwhelmed&lt;/a&gt; by the announced specifications of the Steam Machine but the situation surrounding memory and storage might lead to a better outcome for consumers if Valve makes any decision that leads to them reducing the scope of this particular stage of the project...&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2026/02/what-could-valve-do-with-steam-machine.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZXXsKZkzaT7OiBqF2CdVXaNkNuwSctKIpmahwVY4nbpNkwRjTukKHGnXXUEBpDBeAmCr9FCptF4_SfhsC-8VIu0cDzVfxRQgcEWdgMgVHF1Oj9hcMtnizzibpivYzPVfd8CTXVD8UPpSlUSwe9YH52lxGh_boe48ES9TtIycIR7qDKN-_DU3SC6Eo3mU/s72-w640-h360-c/videoframe_2627_clean.png" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-5861980660595471145</guid><pubDate>Mon, 02 Feb 2026 14:12:00 +0000</pubDate><atom:updated>2026-02-02T14:12:28.526+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">curmudgeon</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><title>Now is not the time to buy a PC...</title><description>&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQBaxkF7l1rs8tLqwNR-1hqiqPnQ9iz-7ayLNwF9bhKnP9FH2fMm_NJ13wBRcCpdGtce8Q4EFiZWkfa2LBq9wwAAiGH9Zrr9ow6OtKUgShBgr2W3Am5EP-GWB93XOiQKuuRIN_0ZZO9AJITMainIc8zFjMhmr9wWbBqx6fdh2fiWC8Xlr67XJ1tcM7w2A/s1736/shutterstock_2471574373_splash%20vector.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1164" data-original-width="1736" height="430" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQBaxkF7l1rs8tLqwNR-1hqiqPnQ9iz-7ayLNwF9bhKnP9FH2fMm_NJ13wBRcCpdGtce8Q4EFiZWkfa2LBq9wwAAiGH9Zrr9ow6OtKUgShBgr2W3Am5EP-GWB93XOiQKuuRIN_0ZZO9AJITMainIc8zFjMhmr9wWbBqx6fdh2fiWC8Xlr67XJ1tcM7w2A/w640-h430/shutterstock_2471574373_splash%20vector.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Normally, I'm a pretty big advocate of PC gaming - I would class myself as an enabler because I love technology, enjoy playing around with the hardware, but also using it to enjoy technologically advanced games. I spend time online helping people not only to build and buy PCs but also troubleshoot their issues in order to improve their experience with that technology.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, as part of the moral and ethical framework I hold in my head I just can't advise people to buy a PC right now. 
Let's get into why...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Gold Rush...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The analogy people often enjoy making in the tech space is of &lt;a href="https://www.reddit.com/r/cscareerquestions/comments/14ufo8l/during_a_gold_rush_sell_shovels_what_are_the/"&gt;the seller&lt;/a&gt; &lt;a href="https://garymarcus.substack.com/p/nvidias-earnings-report-and-the-calculus"&gt;of shovels&lt;/a&gt; &lt;a href="https://futurism.com/nvidia-ai-gold-rush"&gt;to those&lt;/a&gt; &lt;a href="https://seekingalpha.com/article/4864442-amd-facing-its-moment-of-truth"&gt;who want&lt;/a&gt; to dig. The problem I have always had is "what is the gold in this analogy?" As far as I can tell, there is no gold.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Looking at the tool landscape, CPU, NPU, and GPU are the current hoe, spade, and shovel (with dedicated, efficient &lt;a href="https://en.wikipedia.org/wiki/Application-specific_integrated_circuit"&gt;ASICs&lt;/a&gt; still yet to take hold as they did for blockcoin mining). The shovel (GPU) is still the only tool able to effectively "move" dirt both in training and inference workloads but we still see repeated attempts to "AI-ify" the former tools as a selling point for the mass consumer market. And, although we've seen price increases in all of these tools due to this chasing of AI profit, we're still not seeing prices skyrocketing by a percentage anywhere near what they were during the various crypto crazes.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The big issue for all of these tools is that AI requires memory (exponential amounts, based on current AI technological frameworks) and this is what has been driving up most costs to date. However, that has now changed...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The tool-making market has saturated its supply chain with demand* and, as far as I can tell, this has been an issue for at least half a year. 
Now, though, I see effects which are manifesting that show that the status quo will no longer be tolerated and that worries me on one hand, and brings me to write this blogpost on the other.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*Note, I'm being very careful with my wording here - not the demand from the consumer market, the demand of the tool-making market...&lt;/span&gt;&lt;/blockquote&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;img alt="" height="339" src="https://blogger.googleusercontent.com/img/a/AVvXsEhx_dahF8a2jlF19Gtux-S79pA3RjWKrlbIz6-LnEbAul3R2IaozVBr_-x8UvywYcB_2JPKLAnX2ajMJBYKqSH_1bH1vup4LkW_GPya-jEkP1aDz-fDkD49iIzUvpVIzJ9XAOkJxDPHIVNFTnmUxm3TfWyXPS3BIBt8-X4A3dWDZM8ifKdwTqdS0xr1R1k=w640-h339" style="margin-left: auto; margin-right: auto;" width="640" /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;CPUs are starting to decrease in supply and increase in cost...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/i&gt;&lt;/b&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;b&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The changes I'm seeing are chronic stock shortages of products that people can buy. The quantities of last gen CPUs from AM4 and Intel 1700 platforms are drying up, either becoming entirely out of stock or pushed up in price towards their 2023 levels. Ryzen 7000 models are not plentiful in the market and the 9000 series is also hovering between being in stock and increasing slightly in price. Even Intel's Core Ultra series are not in great supply - aside from the models that people&amp;nbsp;&lt;i&gt;really&lt;/i&gt;&amp;nbsp;don't want, like the 225 and 245*.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;AMD have just announced that the Ryzen 7 9850X3D is only $500, compared 
to the $450 of the base 9800X3D - the fastest gaming CPU out of the box and
 likely to be so for the next year - at least - and they didn't price 
gouge! That tells you that they don't think they can get away with it and we know these companies&amp;nbsp;&lt;i&gt;hate&lt;/i&gt;&amp;nbsp;to announce official price drops after release... something they could be forced to do if priced too high...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;u&gt;&lt;i&gt;&lt;/i&gt;&lt;/u&gt;&lt;i&gt;&lt;blockquote&gt;*Though that may be more likely due to Intel seeing the Core Ultra 200 series as an albatross and TSMC happy to instead prioritise customers with big, expensive orders (such as Nvidia), it still doesn't explain why Intel's CPUs (manufactured at TSMC and packaged in New Mexico) are &lt;a href="https://www.techpowerup.com/345620/dual-pressure-from-rising-cpu-and-memory-prices-to-drive-global-notebook-shipments-down-14-8-qoq-in-1q26"&gt;hit by this shortage&lt;/a&gt; when the reporting indicates that it is the &lt;a href="https://www.techpowerup.com/345535/intel-reallocates-pc-production-capacity-to-server-cpus-amid-tight-wafer-supply"&gt;Xeon products&lt;/a&gt; which are primarily affected...&lt;/blockquote&gt;&lt;/i&gt;&lt;/span&gt;&lt;u&gt;&lt;i&gt;&lt;/i&gt;&lt;/u&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This recent trend comes as &lt;a href="https://www.gamesindustry.biz/hardware-and-software-sales-drop-to-an-all-time-low-in-november-2025-us-monthly-charts"&gt;gaming hardware&lt;/a&gt; and &lt;a href="https://www.pcgamer.com/hardware/gaming-pcs/pc-shipments-grew-in-2025-but-the-market-will-be-far-different-in-12-months-given-how-quickly-the-memory-situation-is-evolving/"&gt;PC hardware&lt;/a&gt;&amp;nbsp;are struggling coming into 2026 which, you would think, would lead to a softening of prices to encourage consumers to purchase them. I &lt;i&gt;am&lt;/i&gt;&amp;nbsp;seeing&amp;nbsp;some reductions in motherboard and power supply pricing, which tallies with that logic but the situation with CPUs tells me a different story.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The consumer GPU supply, &lt;a href="https://www.thefpsreview.com/2026/01/16/nvidia-denies-eol-rumors-for-geforce-rtx-50-series-lays-blame-on-memory-supply-shortages-while-amd-says-its-working-to-keep-its-gpus-at-msrp/"&gt;denied to be restricted&lt;/a&gt; &lt;a href="https://www.techpowerup.com/340615/nvidia-refutes-claims-of-h100-h200-gpu-supply-constraints"&gt;by Nvidia&lt;/a&gt; and partners, is also severely impacted through &lt;a href="https://finance.yahoo.com/news/tsmc-says-no-more-nvidia-161151468.html?guccounter=1"&gt;over-booking of AI product&lt;/a&gt; sales - TSMC cannot keep up with the demand. 
As a result, &lt;a href="https://youtu.be/yteN21aJEvE?si=DuekUGxe2V8Po4qS"&gt;restrictions&lt;/a&gt; &lt;a href="https://videocardz.com/newz/nvidias-new-geforce-rtx-50-allocation-scheme-prioritizes-the-top-sku-in-each-memory-tier"&gt;are clearly in place&lt;/a&gt; as &lt;a href="https://www.tomshardware.com/pc-components/gpus/nvidia-reportedly-cuts-program-designed-to-keep-gaming-gpus-near-msrp-pricing-end-of-opp-pricing-support-scheme-does-not-bode-well-for-gamers"&gt;the cost of&lt;/a&gt; &lt;a href="https://overclock3d.net/news/gpu-displays/nvidia-plans-heavy-cuts-to-gpu-supply-in-early-2026/"&gt;all consumer dedicated graphics cards&lt;/a&gt; &lt;a href="https://x.com/Duoae/status/2016826071526605285?s=20"&gt;has increased by a significant margin&lt;/a&gt;. The end result? It seems that Nvidia (in particular, but also others) have become very sensitive to &lt;a href="https://i10x.ai/fr/news/nvidia-denies-openai-gpu-deal-concerns"&gt;the perception of market conditions&lt;/a&gt; and that they are not being truthful in their disclosures to the press, if not also investors.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;img alt="" height="372" src="https://blogger.googleusercontent.com/img/a/AVvXsEgSEqbLx-_9HF6kn_BbufCcH5eNxDLZ5MlieWSOb4orssc8xmxeHyCwNhU4qvJfCrWW9IFE2SLViUKM2kA7x0_RaOhQ1Kl_hqtlybb2g9R-r1a_i1EZ9G-KtbQ6pEjkfCRSi6Kxko9ZbaBqcOQZTnTUTBYKclhZHjKcbGSSgctGCUkAyNYpwmQ6qY8pw78=w640-h372" style="margin-left: auto; margin-right: auto;" width="640" /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;In markets where there is supply, we're beginning to see price rises...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What worries me, specifically, is that the companies mentioned above appear to be in a rush to capitalise on the current conditions and that whispers in my ear that&amp;nbsp;&lt;i&gt;they&lt;/i&gt;&amp;nbsp;perceive that the time to do so is coming to a close...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The financial year is coming to a close relatively soon, with reporting beginning tomorrow (3rd Feb) for AMD and the 25th Feb for Nvidia. Intel already released their Q4 results at the end of January and, I believe they have the least to lose from any AI contraction. I can imagine that both AMD and Nvidia will want to put as rosy a tint on the situation as possible, and prioritising the AI backorders is going to be the way to do that. That's not crazy insight, just basic logic!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, as I implied above the level of deception and focus on the narrative over reality leads me to think that these companies are seeing the end of&amp;nbsp; their ability to profit from all things "AI" within the near-term future. 
That could be 6 to 12 months, for all I know.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEgCCWRyRYwxCd770aLNQhmn2JciJegmwvdYJHyP3c-mVkEIaiUeN5TBMyAJ_sdogdOx2XP2nzfoXEm0hmKrj2F_nUyE6uZqDortmsctaaLnTPGODtrnCGGNpwmQBz4QNQ8uc0JSHEpZEqoHBRMURsJATioYC4gdbgmF_oaHrMkenhnNiKbI9EIAo756KDQ" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="513" data-original-width="821" height="400" src="https://blogger.googleusercontent.com/img/a/AVvXsEgCCWRyRYwxCd770aLNQhmn2JciJegmwvdYJHyP3c-mVkEIaiUeN5TBMyAJ_sdogdOx2XP2nzfoXEm0hmKrj2F_nUyE6uZqDortmsctaaLnTPGODtrnCGGNpwmQBz4QNQ8uc0JSHEpZEqoHBRMURsJATioYC4gdbgmF_oaHrMkenhnNiKbI9EIAo756KDQ=w640-h400" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;GPUs aren't any better...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Shop Smart...&amp;nbsp;&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, with that groundwork underneath us, it's a really&amp;nbsp;&lt;i&gt;bad&lt;/i&gt;&amp;nbsp;time to purchase a PC but more specifically, it's the worst time to build a PC - also something that pretty much everyone knows because of the memory prices (and now GPU prices) but I'm extending that further - everything you are buying now is&amp;nbsp;&lt;i&gt;the worst&lt;/i&gt;&amp;nbsp;it will be.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's like deciding to purchase a Celeron right before the new generation cheap parts release. Yeah, you saved money by going with the cheapest but you could have waited and gotten a cheap but more performant product. Nothing you are buying is good value for the money in this current market.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You're forced to buy worse memory and storage for more than double their worth, you're not getting good deals on CPUs, you have to buy a GPU tier down for the same money, motherboard prices on current gen are pretty bad, in general. The only saving grace are power supplies and cases - both of which are pretty decently priced (in my opinion).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Sure, buying a pre-built or mini PC is likely going to be the best way to get a PC in the short term but, eventually their existing stock will dry up and prices will also shoot up or the products will get worse. Currently, the site that I track for pre-builts (&lt;a href="https://www.cyberpowersystem.co.uk/system/amd-9000-series-ryzen-custom-pc-builder"&gt;CyberPower UK&lt;/a&gt;) is charging around £1715 for a PC which cost around £1400 6 months ago. 
That's not that bad until you realise that they've made their default cases cheaper, I removed the AIO that was previously included, and the SSD is now a 1TB Western Digital "Green" model... with proper 2TB NVMe drives costing an extra £120-300...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You could call it shrinkflation but it's just getting a worse overall product. However, just looking at the PC builder in that link, I can see that the Ryzen 7000 series is no longer cheaper than the 9000 series - also pointing towards supply issues.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I believe that, unless you need to purchase a PC/laptop now, you should wait until the end of this year. That's a risky bet, I know - and I don't blame you if you don't want to take it, but I think the signs are pointing to a correction that's going to result in prices heading back to where they should be by then for a large proportion of components.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Of course, whether retailers will be stuck with over-priced stock, as happened after crypto/covid, will be a good question. In which case, there will be no normalcy until next generation/new products launch.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, it's a "damned if you do, damned if you don't" situation... 
Not great!&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2026/02/now-is-not-time-to-buy-pc.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQBaxkF7l1rs8tLqwNR-1hqiqPnQ9iz-7ayLNwF9bhKnP9FH2fMm_NJ13wBRcCpdGtce8Q4EFiZWkfa2LBq9wwAAiGH9Zrr9ow6OtKUgShBgr2W3Am5EP-GWB93XOiQKuuRIN_0ZZO9AJITMainIc8zFjMhmr9wWbBqx6fdh2fiWC8Xlr67XJ1tcM7w2A/s72-w640-h430-c/shutterstock_2471574373_splash%20vector.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-7218689475286498910</guid><pubDate>Fri, 30 Jan 2026 10:45:00 +0000</pubDate><atom:updated>2026-01-30T10:45:53.420+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">mid-range hardware reviews</category><title>Building a Home NAS...</title><description>&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7KpEav5V8WeAy3zdYs-l2iWNo1a3dzJs09LBbvBEzkWg8oCbJvRrS2TOTCiBcVqUU2JylswsXEORc18I2s9T14uWqNruJHJXGZG538muS443eWpyzWcWlL4Lj5IYxPQ3IDYHIgGpBQoPikZIzKE05EypAPsHN8it-In8fyP08giswSyPWWcWoTV7IiPM/s1805/Title_sm.jpg" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;img border="0" data-original-height="1296" data-original-width="1805" height="460" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7KpEav5V8WeAy3zdYs-l2iWNo1a3dzJs09LBbvBEzkWg8oCbJvRrS2TOTCiBcVqUU2JylswsXEORc18I2s9T14uWqNruJHJXGZG538muS443eWpyzWcWlL4Lj5IYxPQ3IDYHIgGpBQoPikZIzKE05EypAPsHN8it-In8fyP08giswSyPWWcWoTV7IiPM/w640-h460/Title_sm.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;br /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: left;"&gt;&lt;span style="text-align: justify;"&gt;What do we do in a time when certain component prices have gone through the roof? We make do with what we have and upgrade other aspects of our systems. Speaking of which, I've been meaning to sort out my data storage situation for a long time. So, let's take the plunge!&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Re-writing the Past...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You know what they say about backing up data? 
&lt;a href="https://www.backblaze.com/blog/the-3-2-1-backup-strategy/"&gt;Use the 3-2-1 rule&lt;/a&gt;:&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;ul style="text-align: left;"&gt;&lt;li style="text-align: justify;"&gt;Make three copies&lt;/li&gt;&lt;li style="text-align: justify;"&gt;Have two copies locally&lt;/li&gt;&lt;li style="text-align: justify;"&gt;Have one off-site copy&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Well, this isn't exactly that. I'm the perfect do as I say and not as I do role model - I don't have an off-site copy of my data* and I am putting all my eggs in one storage basket...&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;*I don't trust any cloud providers with my data - let's be honest, there's a high probability of it being scraped for AI training whether they admit it or not... Plus, I'm not really willing to pay a monthly fee for my data.&lt;/span&gt;&lt;/blockquote&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This carries on a long-held tradition of keeping most of my data either on my PC or on an external hard disc. The situation was temporarily improved back in 2016/17 with the purchase of a two bay Synology DS216j which I used in RAID mode 1 to ensure that the data could not be lost with a single disk failure.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5ZBXvh28ZTLnq1bBzsG0a2MNzI5RxCTFMb4Ndchhw4WeNMziibVpZLleRTESWTvpWFhzlFaIJEAOGhE8MDawh4_WMZx3js4KU_Wt6obaJK7eohJRHxXPbZpYIT4V4-zEIxnQdKGAr9UFyLNHRn3FWp6K1YnjcrDg7iFre-U_z-2jFLnrmuN1Eb5F1ifw/s2592/Synology.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1484" data-original-width="2592" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5ZBXvh28ZTLnq1bBzsG0a2MNzI5RxCTFMb4Ndchhw4WeNMziibVpZLleRTESWTvpWFhzlFaIJEAOGhE8MDawh4_WMZx3js4KU_Wt6obaJK7eohJRHxXPbZpYIT4V4-zEIxnQdKGAr9UFyLNHRn3FWp6K1YnjcrDg7iFre-U_z-2jFLnrmuN1Eb5F1ifw/w640-h366/Synology.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Trusy, old but increasingly cantankerous...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Pre-made NAS devices like those from Synology and QNAP (though there are a few newer entrants on the consumer market nowadays) are great because they practically set themselves up and there is a self-curated app ecosystem to draw upon for the end-user.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The primary issue with the Synology I had was that it just didn't like certain HDDs. That's when I found that there's an &lt;a href="https://www.synology.com/en-global/compatibility"&gt;official compatibility list&lt;/a&gt;&amp;nbsp;but, of course, my HDDs &lt;i&gt;are on the list&lt;/i&gt;&amp;nbsp;- &lt;a href="https://youtu.be/oK9qqvo1zf4?si=al0PzFrye3ke9AUZ&amp;amp;t=1466"&gt;it just didn't like one of them&lt;/a&gt;. 
From what I can gather, Synology decided to &lt;a href="https://www.techpowerup.com/335824/synology-forcing-owners-of-its-plus-series-nas-appliances-to-use-own-brand-hard-drives"&gt;mess around with "compatibility"&lt;/a&gt; in order to push users to use their own branded HDDs to gain extra revenue. I do remember that this was a thing that was &lt;a href="https://youtu.be/vnOfmHY2Nb0?si=FE6RpiMlX3b8Ag9X"&gt;backtracked on&lt;/a&gt; after initial user outrage but there are &lt;a href="https://kb.synology.com/en-global/DSM/tutorial/Drive_compatibility_policies"&gt;still limitations listed on their website&lt;/a&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The problem is that I'm pretty sure that they did something in their software which caused incompatibility &lt;i&gt;on purpose&lt;/i&gt;, resulting in the issues that I experienced:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvmiC4jdDsJpR3hkVA5NVvQF8gLsRX2KgRvui78fgiNegA4aGu3O5bZYVJvf-rtYJKQnxFeRYdqQYyqieaDNvVodSL9yTJIM_OSi9ELH7L43UafopCFi1tzFXqvt2v7GyipJXHILqolf7kpIkGPakEHS_UUKVz6H66qDx94ZitiXdaHqVdKY_loJe5woM/s777/Summary.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="519" data-original-width="777" height="428" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvmiC4jdDsJpR3hkVA5NVvQF8gLsRX2KgRvui78fgiNegA4aGu3O5bZYVJvf-rtYJKQnxFeRYdqQYyqieaDNvVodSL9yTJIM_OSi9ELH7L43UafopCFi1tzFXqvt2v7GyipJXHILqolf7kpIkGPakEHS_UUKVz6H66qDx94ZitiXdaHqVdKY_loJe5woM/w640-h428/Summary.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Constant errors and "write" problems when in the Synology, no issues in PC or as a normal external drive... or in my new NAS!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;What I experienced were two identical drives (purchased at the same time) behaving differently - one perfectly fine, the other throwing up a load of write "errors" in the synology software. Now, supposedly, my device wasn't affected by the issue regarding HDD limitations - I think these issues started on DSM version 7.1.1 but I had updated to version 7.2 to try and fix the issue. So, I completely lost trust with Synology and instead moved one of the drives to an external enclosure which was "floating" between PCs to store data.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I still had the other drive in the Synology but it wasn't turned on and was becoming stale (not that I acrue much new data over time)... I was essentially an unlikely drive failure away from losing data and added to this, my 4 TB drive was getting dangerously full.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Not a great situation to be in.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, I had the idea to make my own NAS using FreeNAS or another Linux-based OS. 
Unfortunately, modern PC cases don't have sufficient drive bays for 3.5" drives. In fact, any case that I was able to easily purchase (and for a reasonable price) didn't meet the requirements of being able to have multiple HDDs in a RAID array.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So my plans lay in stasis until now. With no interesting GPUs to test, RAM prices through the roof, and no desire to build an AM5 system* I decided to bite the bullet and push through with a &lt;a href="https://www.jonsbo.com/en/product/ComputerCase/NASMotherboardSeries.html"&gt;Jonsbo system&lt;/a&gt;. I've had my eye on Jonsbo cases for a while but have not found any easy way to get one shipped to my country of residence. This changed just before Christmas when I found a supplier in the EU which would ship to me, so I pulled the trigger!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*I built one for my dad but I really don't need a more performant PC than the i5-14600 systems I am currently running...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Building the future...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHS0qcu1dFzN30rqJ6CdEOkn19F3ANjyy119xrxc-lkbyQwPCrD_68vrOe0LnC8Ac1ldl-Mk_nvmfYLkyC6igpWiTk-teW7-F1cNUjJzAxaclV0Uu3Y8ActgzF_5dfQGNX4XwJ3_tPjE8sWF_PFZy1OXMPsSuQ5zM3ioNQL0mczPabwMDqHEYngGE6Ex0/s1571/Image%207_sm.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="881" data-original-width="1571" height="358" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHS0qcu1dFzN30rqJ6CdEOkn19F3ANjyy119xrxc-lkbyQwPCrD_68vrOe0LnC8Ac1ldl-Mk_nvmfYLkyC6igpWiTk-teW7-F1cNUjJzAxaclV0Uu3Y8ActgzF_5dfQGNX4XwJ3_tPjE8sWF_PFZy1OXMPsSuQ5zM3ioNQL0mczPabwMDqHEYngGE6Ex0/w640-h358/Image%207_sm.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The layout of this case is amazing!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://www.jonsbo.com/en/products/N4Black.html"&gt;This N4 case is amazing&lt;/a&gt;. It's maybe a little bigger than many people would like for an in-home NAS: certainly not a huge footprint but bigger than the average shelf that the DS216j fits on. 
However, I love all the little details which make this easy to use as a dedicated NAS device.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;Six bays for 3.5" drives&lt;/li&gt;&lt;li&gt;Two bays for 2.5" drives&lt;/li&gt;&lt;li&gt;A SATA and power distributor for the four 3.5" bays on the left hand side&lt;/li&gt;&lt;li&gt;A simple rail system to easily slide the drives in and out, with a rubber handle to aid in removal&lt;/li&gt;&lt;li&gt;A 140mm fan behind the left side of the storage rack&lt;/li&gt;&lt;li&gt;Support for mATX and ITX motherboards&lt;/li&gt;&lt;li&gt;SFX PSU compatibility&lt;/li&gt;&lt;li&gt;Low profile PCI card compatibility&lt;/li&gt;&lt;li&gt;Modular case design which allows specific access to required areas&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I think two of the biggest points are the rail system and the modular case access. It's very convient to be able to only have to disassemble the case as much as you need, instead of having to take off big panels like you would normally need to do in a traditional tower case. Similarly, attaching little stand-off "feet" to the drives, allowing them to be slotted in and out at will, instead of futzing around with screwing drives into place is another life-saver that I really appreciated on a consumer-level device.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, this is the most expensive case I've ever built in! Costing me €173 (not including shipping), it was over €90 more than even my SFF cases. But, as I said before, I need a NAS and plan to keep this build around for the long-haul.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJUFxIrPKuRMOiyrb8b0HgNkM0JlH-ULBV5xddO04pwjn1M7GQ3HYn92amcdw-a3BZrpenh0UErruwnYC5X-OtNqlrjpTOmz8mUHv8orzTcYmAcqxY-1PkCDnOsrlQxLDkrpGpVxKD35V9tgFHPjBeIF3uVV4M-LKpuFbX4f0CByxkdhD5J6dbAqAi04s/s1936/Image%203_sm.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1095" data-original-width="1936" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJUFxIrPKuRMOiyrb8b0HgNkM0JlH-ULBV5xddO04pwjn1M7GQ3HYn92amcdw-a3BZrpenh0UErruwnYC5X-OtNqlrjpTOmz8mUHv8orzTcYmAcqxY-1PkCDnOsrlQxLDkrpGpVxKD35V9tgFHPjBeIF3uVV4M-LKpuFbX4f0CByxkdhD5J6dbAqAi04s/w640-h362/Image%203_sm.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The rear sections are able to be removed independently, allowing access to the different connections for the drives on each side...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;What's in the box..?&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For this build, I've put together the following:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;Ryzen 5 4600G (from my PS5 simulation testing!)&lt;/li&gt;&lt;li&gt;ASUS Prime B450M-KII&lt;/li&gt;&lt;li&gt;&amp;nbsp;PCIe to SATA 
riser&lt;/li&gt;&lt;li&gt;2x 8GB Orico DDR4 3200&lt;/li&gt;&lt;li&gt;Silverstone SX450&lt;/li&gt;&lt;li&gt;250 GB Samsung 850 EVO&lt;/li&gt;&lt;li&gt;2x 4TB Toshiba N300 HDDs&lt;/li&gt;&lt;li&gt;2x 3TB Toshiba HGST HDDs&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I had a lot of these parts on-hand from the previous NAS installations, a laptop SSD upgrade (the laptop subsequently died) and testing hardware from previous blogposts. These are not "optimal" components but also not terrible.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Specifically, the motherboard didn't come with enough SATA ports - so I had to buy the riser listed above to be able to equip all the drives. The board also lacks a USB-C header, meaning that one of the two front ports on the case is not active. There's also no wifi on the motherboard, but that's not a dealbreaker as this box will live next to the ISP's router - so a direct ethernet connection will be used.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;One thing that I didn't really think about was the SATA port locations - this wasn't ever something I had to consider before. They are all clustered towards the lower portion of the board (this is fairly normal for most motherboards) but the cable routing in the case allows for cables to "flow" left and right, for easier reach to the drive bays. I don't know if they exist but a board with some SATA ports around the top (CPU) end would be nicer for cable management.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I also did not like working with the SATA cables - they're very stiff and inflexible. You can't cable tie them like most other cables in a PC. It's been such a long time since I've used more than one SATA device in a PC that I'd forgotten about that. So, overall, cable management is not very nice looking. At least the case has a distributor on the left hand side to allow a stable base for cables to connect to but I feel that the choice to not have a similar distributor on the right hand side is a big negative once you've experienced using the left hand side.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The SSD is a perfectly small size for running a lightweight Linux-based operating system, the RAM is more than sufficient for running a simple file share and the APU is perfect for saving energy by avoiding a dedicated GPU, despite the &lt;a href="https://www.techpowerup.com/cpu-specs/ryzen-5-4600g.c2319"&gt;CPU architecture being relatively old&lt;/a&gt; (Zen 2).&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I didn't have a small form factor PSU on hand, so that was new. 
However, given the relatively modest hardware in the system,&amp;nbsp;the SFX PSU didn't need to be able to provide many watts - 450 W is more than enough for such a system, even if I choose to add more drives at a later date.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I actually would have liked a higher power PSU - around 600 W - as I like to be able to have flexibility in my builds (maybe I would even need it in future for another ITX build) but&amp;nbsp;prices were higher than I was willing to spend, so this was the one... I didn't expect PSU prices to be higher in 2026 (actually, normal ATX PSU prices have already begun to drop but SFX are still quite expensive compared to the last few I bought in 2022-23).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiScqVKyP21RL60MmsQ5m61bOCotjiicGfB2UxzI38NpQTl0NRuirOfR9-R19w3tdGtKDT_S1CEakwTQWaiHLNnIrSlGPmHOtjx4CAUITNJtzHU62fWQyNJ61sPmJnYEqmHplH8vZDZ7f-_ZpYNIbzUpYtLuy0FK0yKRowvsVJLBvow5Djro6nhuPOuvyk/s1549/Image%208_sm.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1037" data-original-width="1549" height="428" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiScqVKyP21RL60MmsQ5m61bOCotjiicGfB2UxzI38NpQTl0NRuirOfR9-R19w3tdGtKDT_S1CEakwTQWaiHLNnIrSlGPmHOtjx4CAUITNJtzHU62fWQyNJ61sPmJnYEqmHplH8vZDZ7f-_ZpYNIbzUpYtLuy0FK0yKRowvsVJLBvow5Djro6nhuPOuvyk/w640-h428/Image%208_sm.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The stand-off guides for the rail system work nicely and the flexible handles feel nice in the hands (though it's not like you'll be handling them very often!)...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;What's on the box..?&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I mentioned earlier that I had been looking forward to using an OS such as FreeNAS. Well, I waited so long from the inception of my dream to build a NAS to actually doing it that &lt;a href="https://www.truenas.com/freenas/"&gt;FreeNAS became TrueNAS&lt;/a&gt;* and I got really confused when visiting the website and thought that it &lt;a href="https://www.truenas.com/file-sharing/?_gl=1*1uwuacl*_up*MQ..*_ga*MTQ2NTQ5MzY0Ni4xNzY5NzEyMjIy*_ga_M6M6EZ070J*czE3Njk3MTIyMjEkbzEkZzEkdDE3Njk3MTIzMzEkajYwJGwwJGgw"&gt;was paid only&lt;/a&gt;, so I chose &lt;a href="https://www.openmediavault.org/"&gt;OpenMediaVault&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;OMV is built on Debian, which is supposed to be quite good and stable for this use case. 
Apparently, TrueNAS CE was originally built on FreeBSD** but they migrated to Debian as well and the only difference I can see from the marketing is that TrueNAS is basically promoting the use of the OpenZFS filesystem while OMV is a bit more ambivalent, letting users choose from EXT4, ZFS and BTRFS.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*Specifically, "&lt;a href="https://www.truenas.com/truenas-community-edition/?_gl=1*2p521t*_up*MQ..*_ga*MTQ2NTQ5MzY0Ni4xNzY5NzEyMjIy*_ga_M6M6EZ070J*czE3Njk3MTIyMjEkbzEkZzEkdDE3Njk3MTIzNTkkajMyJGwwJGgw"&gt;TrueNAS Community&lt;/a&gt;" but that wasn't clear when I was reviewing the website and searching for recommendations online... they push heavily to the paid enterprise version.&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;**When it was called FreeNAS and then, later, TrueNAS Core...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;And thus began my first steps into Linux...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;In the Belly of the Beast...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'll give a bit of a report on my experience with the software in this section in case anyone searching for answers in the future needs this help, so feel free to skip!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Creation of a boot drive and subsequent installation was very easy. I used Rufus to perform the former action and the installation mostly went as explained in the &lt;a href="http://wiki.omv-extras.org/doku.php?id=omv7:new_user_guide"&gt;OMV installation guide&lt;/a&gt; - which I fully recommend following step by step.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, this guide is for version 7 and the current version is 8 but the guide suffices as there appear to be no changes here.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The first issue I had was that I experienced graphical corruption from the second install screen onwards and couldn't progress because I couldn't understand what was on screen. It turns out that I just needed to update the BIOS. For whatever reason, though this BIOS was not "old" (I think it was from 2023), I needed to update to the latest 2025 version. Once this was performed, everything was fine.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Next up: I had chosen to set up the system "offline" - or, more accurately, without a network connection, since I was doing something else in parallel and decided not to put the box in its intended location. This meant that the only ethernet cables near me were ones I didn't want to disconnect from other devices. 
While this isn't an issue for the inital setup (and the wording on screen indicated I could perform the network setup later), there's not too much setup to perform without network connection so I made things more difficult for myself in the long run.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The first problem was me trying to understand how to skip the network setup step - it took me a while to see how to do it. Unfortunately, I didn't write down what option I chose, but in the end I managed. The next issue I experienced was delayed in effect but thoroughly intertwined with the network setup and a lack of beginner's knowledge of Linux.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Following the wording on screen (which I unfortunately did not take a photo of!) the installation setup advises the user that it's not necessary to set up a root account and instead your user account will be assigned as the administrator account. So, I followed this advice (thinking it would simplify things*) and proceeded to set up the user account and password then choose the installation drive and wrapped things up.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*In the end, it did - but it did feed into the confusion that is to follow...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Since I hadn't connected to the network, this was the point where I had to move the box into position and start it up properly.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;Just a note: for command line instructions, I will be inserting it in angled brackets &amp;lt; &amp;gt; for clarity...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEhlruDiWj9r5p199nxuiHR2XNBbR14qJsdjIdrs0OfzqSJijvY5FgxHvkE0E4yGgWOCWHGN7uABOJnRmLgk_H_e5ZQSPtpAo6H4cwHEez85KDIVUKVeCjR95e29Q6Vgp1PiyEoEHu9TDaY0Twzzksq6BU3aOiBRbNk3432lD80c6i55cu84vA20_bZpXek" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="331" data-original-width="615" height="344" src="https://blogger.googleusercontent.com/img/a/AVvXsEhlruDiWj9r5p199nxuiHR2XNBbR14qJsdjIdrs0OfzqSJijvY5FgxHvkE0E4yGgWOCWHGN7uABOJnRmLgk_H_e5ZQSPtpAo6H4cwHEez85KDIVUKVeCjR95e29Q6Vgp1PiyEoEHu9TDaY0Twzzksq6BU3aOiBRbNk3432lD80c6i55cu84vA20_bZpXek" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;My first real use of "AI" (at least in Google search) where I needed to rely on the Gemini summary... 
&lt;a href="https://x.com/Duoae/status/2012209093918802104?s=20"&gt;Twitter&lt;/a&gt;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;First boot...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Because I hadn't set up the network connection, once the server was turned on the first time, it didn't draw from the DHCP table and run with things like a PC normally would. Taking a look at the beginner's guide, I saw that this could be a common problem but no issue - log in with the root account (aka my user account) and query what IP address the system had assigned itself, then either manually set an IP address and network gateway via command line or perform the command &amp;lt;omv-firstaid&amp;gt; to re-enter into the setup interface.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;u&gt;The problem was that these commands would not work when typed in.&lt;/u&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;u&gt;&lt;br /&gt;&lt;/u&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is where the installation went a little off the rails - because the beginner's guide doesn't address this scenario and instead jumps straight to web console management (which, of course, I couldn't do because the system had set its own domain).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It took quite a bit of Googling, searching random forum posts, checking through the &lt;a href="https://docs.openmediavault.org/en/8.x/index.html"&gt;OMV Wiki&lt;/a&gt; pages but the Gemini AI summaries were the most helpful guides in this process since they provided step by step processes by which I was able to tease out what I was doing wrong and what steps were needed to be performed to fix the situation.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;u&gt;Please be aware that my knowledge of Linux is essentially zero and my memory of some of these things may be out of order. So, if something isn't making sense, this will be why.&lt;/u&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;My first level of criticism of the OMV guide is that it does gloss over this situation and there are no links to underlying Debian or other documentation that might have been able to help. Visiting a lot of forums is also not very productive. Often, users are (or were) not helpful for people asking questions. I feel like there's an air of "&lt;i&gt;I suffered to find out information. Therefore, the people who come after me must also suffer in the same fashion or they won't be as pure as me.&lt;/i&gt;" It's a feeling I get when reading replies to &lt;a href="https://www.reddit.com/r/linuxquestions/comments/uvradw/how_do_i_discover_the_available_flags_for/"&gt;posts like this&lt;/a&gt;. It's generic advice that the user has obviously already tried and it's advice that is defeating on purpose. 
It's not meant to educate or pass on knowledge.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is a well-known historical problem in FOSS (Free, Open Source Software) and one of the reasons I've never gotten into it (aside from my lack of ability to programme). However, I'm part of the wave of people that are causing &lt;a href="https://blog.pragmaticengineer.com/are-llms-making-stackoverflow-irrelevant/"&gt;the decline in traffic&lt;/a&gt; to sites like Stack Overflow to occur.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I don't want, nor need, to suffer for no reason. When I frequent subreddits on building PCs, I'm there to help people, not belittle them or offer null advice.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On a more positive note, I have heard from &lt;a href="https://www.youtube.com/playlist?list=PLiZwoK8DQiwwpRSLIqYZ4yVuCEIPCr2NL"&gt;Dual Boot Diaries&lt;/a&gt;, that people are more helpful, these days. But that may also just be the type of people who are interacting with the two hosts of that show...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, anyway - onto the solutions:&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;You need to escalate your admin account into the "real" admin account to do virtually anything.&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;Yep, that's right. A concept so simple that anyone experienced with the OS likely took it for granted and didn't even think to explain it. So, sure, here I am typing things like:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&amp;lt;sudo ip addr add 192.168.0.XX dev enp7s0&amp;gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;blockquote&gt;&lt;p&gt;&lt;b&gt;&amp;nbsp;&amp;lt;sudo ip route add default&amp;gt; or &amp;lt;omv-salt deploy run systemd-netword&amp;gt;&lt;/b&gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;blockquote&gt;&lt;p&gt;&lt;b&gt;&amp;lt;omv-firstaid&amp;gt;&lt;/b&gt;&amp;nbsp;&lt;/p&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;...and nothing is happening and the feedback from the command line is not helping me understand the issue.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Of course, in my mind I have an administrator account - that's how I set up the OS and the problem then becomes that I'm trying to then use the default account - which doesn't work because it doesn't exist*.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*Yeah, I'll come back to that because, eh... it sort of does!?&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You must elevate your permissions to do anything useful. 
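&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For anyone else stuck at this exact point, the sequence that eventually got me unstuck looked roughly like the sketch below. Treat it as a rough sketch rather than gospel: the interface name (enp7s0), the address and the gateway are placeholders from my own network, so substitute your own - and note that the salt deploy target is spelled "systemd-networkd":&lt;/div&gt;&lt;blockquote&gt;&lt;p&gt;&lt;b&gt;&amp;lt;sudo -i&amp;gt;&lt;/b&gt; (or &lt;b&gt;&amp;lt;sudo su&amp;gt;&lt;/b&gt;) - elevate to a root shell first&lt;/p&gt;&lt;p&gt;&lt;b&gt;&amp;lt;ip a&amp;gt;&lt;/b&gt; - list the network interfaces and whatever addresses they've been given&lt;/p&gt;&lt;p&gt;&lt;b&gt;&amp;lt;ip addr add 192.168.0.50/24 dev enp7s0&amp;gt;&lt;/b&gt; - manually assign an address (don't forget the /24 prefix)&lt;/p&gt;&lt;p&gt;&lt;b&gt;&amp;lt;ip route add default via 192.168.0.1&amp;gt;&lt;/b&gt; - point the default route at your router&lt;/p&gt;&lt;p&gt;&lt;b&gt;&amp;lt;omv-firstaid&amp;gt;&lt;/b&gt; - or just re-run the guided network configuration from here instead&lt;/p&gt;&lt;p&gt;&lt;b&gt;&amp;lt;omv-salt deploy run systemd-networkd&amp;gt;&lt;/b&gt; - re-apply OMV's own network configuration&lt;/p&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;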
Yes, I had logged in with an admin account, but I first needed to run&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&amp;lt;sudo su&amp;gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;to get into the elevated security state before issuing all these commands. Commands which, I'm afraid to say, all had to be Googled because discovering the available commands in Linux is &lt;i&gt;really hard&lt;/i&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I remember back in the DOS/Win 3.1 days, we had a big, thick manual to read through for the commands that could be used and what they did. Nowadays, it's online and a bit less usable in terms of discoverability. But I also remember that you could search for available commands in the interface and it would work (at least for whatever I was doing). Tabbed auto-fill or auto-listing for partially written commands also worked, IIRC!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This doesn't work the same way in Debian/OMV, and --help, etc. would only bring up a subset of options that weren't useful for my current predicament. Maybe that's a familiarity thing but I feel like it's a general oversight in this ecosystem.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Once I was able to access the firstaid menu (i.e. finish the setup), I was able to properly select the network adapter and force IPv4, which allowed my router to assign the static IP I had given the device.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes! The OMV package defaults to IPv6, my router has nothing set up for this protocol on the internal network, and I didn't want to have to read up on and understand the protocol for this little escapade just to get the network working for this one device. (I seriously cannot grok the IPv6 address structure! It's in hexadecimal, for crying out loud!)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Once that was out of the way, it was time to manage the device through the web interface, as god intended...
Only, not quite.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIoVv8NgD90Ed8yVtuXC-9QtvNGycEjOT2N_Z2MHGBcQhg2E30wufYOICRuXfP5dzgC-4BAeqNzEaWQ7T7h6mBKMbYKfpfwVeDTshrUlJ8c6FIB5AneVSy8uhlgKpnsuvcRVmzqVyceW1BUOfY5JQj9l9XlcIpyPmQZq4FtJ5qKDCvWELtOFnYt5N3o6c/s1265/Capture%201.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="991" data-original-width="1265" height="502" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIoVv8NgD90Ed8yVtuXC-9QtvNGycEjOT2N_Z2MHGBcQhg2E30wufYOICRuXfP5dzgC-4BAeqNzEaWQ7T7h6mBKMbYKfpfwVeDTshrUlJ8c6FIB5AneVSy8uhlgKpnsuvcRVmzqVyceW1BUOfY5JQj9l9XlcIpyPmQZq4FtJ5qKDCvWELtOFnYt5N3o6c/w640-h502/Capture%201.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The dashboard, once configured with some paraphernalia...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;See, the whole "no root account, my user is the admin" thing tripped me up again. My user is not an admin on the web interface. I could log in but had no ability to change anything and set up the file/folder systems. Nope!&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;This is still quite confusing to me. This is a &lt;i&gt;third&lt;/i&gt;&amp;nbsp;account which is seemingly not related to managing the OS. So, to actually manage the device, I log in with the default "admin" account - with default password "openmediavault". Only, I couldn't. I'd locked my accounts because of trying too many variations on passwords (I think it was one or two tries!) and had to return to the physical device to unlock the accounts.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Back to:&lt;/div&gt;&lt;blockquote&gt;&lt;div&gt;&lt;b&gt;&amp;lt;omv-firstaid&amp;gt; and "Reset failed login attempt"&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, in summary, I have two accounts, one for the physical device (which is replicated as a user on the web interface) and an admin account for managing the device through the web interface. I think that if I hadn't skipped the root admin account creation, then I would have had three accounts.... totally unnecessary!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Anyway, from this point, the setup was relatively straight forward.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I logged in as "admin", changed default password, and tried to follow the guide to create a file system volume in which to make shared folders. However, I was unable to make the file system. I couldn't select the disks to form the volume. I couldn't format/wipe the HDDs in the system through the web interface, either. 
It brought up an error:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote style="text-align: left;"&gt;&lt;b&gt;Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; blockdev --rereadpt '/dev/sdg' 2&amp;gt;&amp;amp;1' with exit code '1': blockdev: ioctl error on BLKRRPART: Device or resource busy&lt;/b&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There appears to have been no way to fix this in the web interface, so I Googled once more and managed to find an obscure (to me) way to force it in the command prompt. So, back to the device:&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&amp;lt;sgdisk -Z&amp;gt; This only works on unmounted disks!&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;...and this command needed to be followed by a reboot (incidentally, I couldn't find a way to reboot the system from the command line, only log out? Still didn't work that out yet).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, I could manage the disks properly.&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjs6I1MnuGp9q4J3Bu6VLFT8uoi_iHySTFNz1BM9r2LDzF7AhyQziNcWtjs-oOP87T7OdqCjPI6qq0d957dfQ_RKA79e16ZBrk8pSNuhvHnkz27JF-3jUS1h4JnGs0TbYlIqbFW290mtPC_fl0wGcmDS19etdHpH4XB65_aZ8Kqt0XAQMsYyHWhL0gc6FU/s1257/Capture%203.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="589" data-original-width="1257" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjs6I1MnuGp9q4J3Bu6VLFT8uoi_iHySTFNz1BM9r2LDzF7AhyQziNcWtjs-oOP87T7OdqCjPI6qq0d957dfQ_RKA79e16ZBrk8pSNuhvHnkz27JF-3jUS1h4JnGs0TbYlIqbFW290mtPC_fl0wGcmDS19etdHpH4XB65_aZ8Kqt0XAQMsYyHWhL0gc6FU/w640-h300/Capture%203.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Finally!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I chose the file system (BTRFS), RAID 1+0 and allocated the disks and the OS took care of everything else. It even intelligently set up the mirroring of the drives so I didn't need to worry about that.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;From this point onwards, the guide was very good and helped walk the newbie through the various steps to create, share, and manage the folders for the user and network. 
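&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Going back a step for anyone who lands here from a search engine: the disk-wiping detour boils down to something like the sketch below. Again, it's a sketch from a beginner - substitute your own device names and triple-check them before zapping anything:&lt;/div&gt;&lt;blockquote&gt;&lt;p&gt;&lt;b&gt;&amp;lt;lsblk&amp;gt;&lt;/b&gt; - list the drives and confirm which ones are the data disks (and which is the boot SSD!)&lt;/p&gt;&lt;p&gt;&lt;b&gt;&amp;lt;sgdisk --zap-all /dev/sdX&amp;gt;&lt;/b&gt; - the long form of &amp;lt;sgdisk -Z&amp;gt;; destroys the old GPT/MBR structures on an unmounted disk&lt;/p&gt;&lt;p&gt;&lt;b&gt;&amp;lt;wipefs -a /dev/sdX&amp;gt;&lt;/b&gt; - optionally clears any leftover filesystem signatures as well&lt;/p&gt;&lt;p&gt;&lt;b&gt;&amp;lt;reboot&amp;gt;&lt;/b&gt; or &lt;b&gt;&amp;lt;systemctl reboot&amp;gt;&lt;/b&gt; - either of these restarts the box from the command line (answering my own question above)&lt;/p&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;After the reboot, the web interface was happy to build the BTRFS RAID 1+0 volume itself, so I never had to touch &lt;b&gt;&amp;lt;mkfs.btrfs&amp;gt;&lt;/b&gt; directly.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;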
Setting up the SMB (Samba) share, etc.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That was all relatively painless and logical!&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I did have a little issue later on, after restarting the NAS to find that I couldn't browse any of the folders, but it seems to have been a bug and a second restart fixed the issue. Otherwise, you could enable SMBv1 protocol in Windows as this was a recommendation I found online.&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqSd1v2oXE__dJA4r9VsVSE9yfCNb9Ed0Pi8OTqRP3HiMfsOcpXlSHvspdaoyKUR8_11TVkQwPgtVLAIUbJ-jLU3lYRLYej9K2904VsEK47cpVNAFb8s-2u-b9pVVITOaHTOc-TZBMUZWaumfJ-JXRJCLghIWxxIeQXudILtiymmr_YhcwwakPApxkKxI/s1259/Capture%205.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="679" data-original-width="1259" height="346" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqSd1v2oXE__dJA4r9VsVSE9yfCNb9Ed0Pi8OTqRP3HiMfsOcpXlSHvspdaoyKUR8_11TVkQwPgtVLAIUbJ-jLU3lYRLYej9K2904VsEK47cpVNAFb8s-2u-b9pVVITOaHTOc-TZBMUZWaumfJ-JXRJCLghIWxxIeQXudILtiymmr_YhcwwakPApxkKxI/w640-h346/Capture%205.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Nicely set up and available on the network...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;h3 style="text-align: left;"&gt;&lt;span style="color: #274e13;"&gt;Optimising...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;One thing I didn't want, was a NAS with a huge power draw constantly operating in the background, 24/7. The synology devices are known to be very power efficient and I am pretty sure it was pulling between 6 - 20 W from idle to load. I knew that it was unlikely that I could achieve &lt;i&gt;that&lt;/i&gt;&amp;nbsp;level of efficiency on this home-made device with more disks but I figured it was worth a shot.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;At this point, I realised there wasn't a very robust power management configuration in my cheap motherboard's BIOS and OMV/Debian don't appear to have very good options, either. I set up the drive sleep options for the HDDs but aside from that, I was going to need to manually tweak everything.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;First thing I did was set the DDR4 to 2133 MHz (below the stock 2400 MHz profile that was loaded onto the modules). This would be good for added stability and also a little power saving. Next up, I sequentially undervolted the CPU with a negative offset. I've currently settled on -0.05 V and everything seems stable and I got a nice little power reduction, to boot. 
I could probably take this further but was in the process of transferring lots of files so didn't want to push my luck at this stage of operation.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Next up, I put a power limit on the APU (PPT 25 W) which I drew down in steps, ensuring stability each time. I've read that you won't get good savings below this level on a desktop APU but I may try a little more later on.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Finally, I didn't need my CPU to be operating at such extreme clock speeds as the base clock of 3.7 GHz, so I cut that in steps, too. All the way down to 2.0 GHz. This seems stable and I've not noticed any performance degradation, so far. I might cut this down further, along with another undervolt to see if I can save any more power.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, what did that get me? Well, I didn't have any way of monitoring the system but I figure that at stock, I was probably pulling close to 100 W but I went and picked up a Watt meter to see how things had panned out. (Yes, I know people are going to complain I didn't do a proper before and after measurement but I don't feel like undoing all the settings just to see that I dropped it 40 W instead of 50 W. I might come back and update this post if and when I actually do this).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi84Sy4P49k_RnZH_hnFjC9CR_qGTenl92ayTxOselEMXxARkl8wZ_Z1Pml2htbB_FPt7jkP6NwX9A1afqO5IsP5mwmpePC_ohJVBkrn1MdkmPFN62nA667gPZDbgHTK0DZlK7cq57Vv93uE_BdjczZQ5WnSSt4CBoXkspSPb6iVWXhWGbv9EDhGJedFMc/s1142/Power%20summary.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="600" data-original-width="1142" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi84Sy4P49k_RnZH_hnFjC9CR_qGTenl92ayTxOselEMXxARkl8wZ_Z1Pml2htbB_FPt7jkP6NwX9A1afqO5IsP5mwmpePC_ohJVBkrn1MdkmPFN62nA667gPZDbgHTK0DZlK7cq57Vv93uE_BdjczZQ5WnSSt4CBoXkspSPb6iVWXhWGbv9EDhGJedFMc/w640-h336/Power%20summary.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Apologies for the terrible photos, the design of the meter is such that the power chord interferes with reading the screen! This was the only one I could find at short notice...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;26 W draw from the wall in sleep&lt;/li&gt;&lt;li&gt;37 W draw when active but not under load (i.e. browsing the folders, logging into the web interface)&lt;/li&gt;&lt;li&gt;51 W draw under load (moving files, streaming data etc)&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I think this is a good result considering HDDs typically draw 6 - 10 W under load and 4 - 6 W idle. 
The SSD should be drawing around 0.5 W when idle and 2 - 4 W during load.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Bear in mind that this is power draw including conversion losses in the PSU - which means my system is doing very well in terms of efficiency!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's say that the active load calculation looks like this:&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;50.7 W - ((4x 8 W) + (1x 3 W)) = AC/DC conv. + APU + motherboard&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;blockquote&gt;&lt;p&gt;&lt;b&gt;15.7 W =&amp;nbsp;&lt;/b&gt;&lt;b style="text-align: justify;"&gt;AC/DC conv. + APU + motherboard&lt;/b&gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Idle looks something like:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;blockquote&gt;25.6 W - ((4x 5 W) + (1x 0.5 W)) = AC/DC conv. + APU + motherboard&lt;/blockquote&gt;&lt;/b&gt;&lt;b&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;b style="text-align: left;"&gt;5.1 W =&amp;nbsp;&lt;/b&gt;&lt;b&gt;AC/DC conv. + APU + motherboard&lt;/b&gt;&amp;nbsp;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'm pretty happy with that! If you have any ideas that could further save energy, let me know below or in the Twitter/Bluesky threads.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;And that about wraps things up for now. Hopefully, this post will get nicely indexed for other beginners to find if they encounter similar issues to mine and that it might help/inspire some other readers to delve into making their own NAS!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Overall, the project was pretty fun and also vastly increased my storage space and I'm able to keep my 2.05 TB* of game installers on-site and on-demand!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;I had never actually measured the full size of all of these, before now!&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2026/01/building-home-nas.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7KpEav5V8WeAy3zdYs-l2iWNo1a3dzJs09LBbvBEzkWg8oCbJvRrS2TOTCiBcVqUU2JylswsXEORc18I2s9T14uWqNruJHJXGZG538muS443eWpyzWcWlL4Lj5IYxPQ3IDYHIgGpBQoPikZIzKE05EypAPsHN8it-In8fyP08giswSyPWWcWoTV7IiPM/s72-w640-h460-c/Title_sm.jpg" width="72"/><thr:total>4</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-8217211612387139972</guid><pubDate>Mon, 19 Jan 2026 18:05:00 +0000</pubDate><atom:updated>2026-01-19T18:09:57.216+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">curmudgeon</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Do we really have things so much
better now...?</title><description>&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZKyX8s31SV8H3FmbbAgvBaCCRAdK7rULTPgbzHAhHrzo9lODm-ehFuq7geBitSX4ZmNadnsyk0Yn3z8_ymVsAgzN9kpSFLtPGoTM5xf-gw_BQ0Pa6CMWEvd0RS-vgDgdB1A0E8DLrGkL1O8OkO9AWMck99c01k6aOKogkVePx9b_saKqJ2y3Q2MOavyE/s1280/Title.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="720" data-original-width="1280" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZKyX8s31SV8H3FmbbAgvBaCCRAdK7rULTPgbzHAhHrzo9lODm-ehFuq7geBitSX4ZmNadnsyk0Yn3z8_ymVsAgzN9kpSFLtPGoTM5xf-gw_BQ0Pa6CMWEvd0RS-vgDgdB1A0E8DLrGkL1O8OkO9AWMck99c01k6aOKogkVePx9b_saKqJ2y3Q2MOavyE/w640-h360/Title.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You may have noticed that I'm a grumpy sort of person - easily annoyed into rolling my eyes and huffing when things are done which rub me the wrong way. You might even have seen a few tweets of mine on certain "trigger" topics, such as "inflation affecting the pricing of high technology products"*. Well, welcome to a&amp;nbsp;&lt;u&gt;&lt;i&gt;new&lt;/i&gt;&lt;/u&gt;&amp;nbsp;entry on that list: "&lt;a href="https://youtu.be/wZZf6LM3wAU?si=evIJPyAyDITwfQKZ&amp;amp;t=303"&gt;modern buyers have it better&lt;/a&gt;".&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On the face of it, this is a basic truism (or at least we hope so!) but on the other side of the coin, this is, in my opinion, a bad faith argument.&amp;nbsp;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's get into why I think that...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*It has negligible effect on the overall consumer price for high technology products. The forces which make a loaf of bread more expensive do NOT act on how much it costs to produce all the components and package a CPU/GPU... They are primarily priced based on COGS and how much the market will bear...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Of Loaves and Fish...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Face it, most people you could speak to today will concede that some or many aspects of our daily lives are worse than they used to be in some manner. Whether that's cost of living, political authoritarianism, autonomy, threat of war, discrimination, etc. 
However, what the majority of people are doing in those instances of conversation is comparing their own historical knowledge to the current landscape. No one is doing a broad analysis with detailed, in-depth studies regarding various factors from "the past" to "now". It's all first-hand knowledge and experience.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, why do people who want to argue against a broad consumer/cohort sentiment go into hyper-specific detail?&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Often (as you can see in the video above) people making these counter-arguments will focus on very specific items but then ignore the broader context around them. LTT's video is only partially guilty of this, in that they specifically present claims and then sort of undermine them later on - only, you can't separate these things out; this is not an academic paper. You can't (read: shouldn't) say for five minutes that tech has gotten cheaper than ever for the level of new performance* and then switch to talking about how your previous point didn't make any sense in the broader economic context. The topics are inextricably linked!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If a later point in your very short discussion invalidates a previous point, DON'T make the point in the first place! These are not ad hoc conversations, this is a scripted article**.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*We'll get onto that in a minute...&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;**I'm probably guilty of violating this, myself! Though usually in the pursuit of dramatic effect. Not so with the presentational/informative style of the LTT video.&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Speaking generally about how things are better than they were in the long-ago past, when things were expensive, is an irrelevant, bad faith argument.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's an extrapolation between circumstances which have very different forcings acting upon them, and from which no extrapolation can be made.
They are two different, unrelated pictures...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let me give a couple of examples:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Example 1: Books/paper&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In the 1500s, it took &lt;a href="https://www.google.com/search?q=how+much+did+it+cost+to+scribe+a+book+in+the+1500s&amp;amp;rlz=1C2CHBD_enMT1074MT1074&amp;amp;sca_esv=f9558c3169decf9e&amp;amp;sxsrf=ANbL-n4xpw463ko6as17wrTRlcNO5HK_Ug%3A1768733648567&amp;amp;source=hp&amp;amp;ei=0LtsabnHIMKP9u8PhsTwsAs&amp;amp;iflsig=AFdpzrgAAAAAaWzJ4E9SxID1ADWhgbqF46tUtnqncypl&amp;amp;ved=0ahUKEwi578qm9pSSAxXCh_0HHQYiHLYQ4dUDCBc&amp;amp;uact=5&amp;amp;oq=how+much+did+it+cost+to+scribe+a+book+in+the+1500s&amp;amp;gs_lp=Egdnd3Mtd2l6IjJob3cgbXVjaCBkaWQgaXQgY29zdCB0byBzY3JpYmUgYSBib29rIGluIHRoZSAxNTAwczIFEAAY7wUyBRAAGO8FMggQABiiBBiJBTIFEAAY7wUyCBAAGIAEGKIESI5VUABYlVRwBHgAkAEAmAGiAaABtTCqAQUxNy4zN7gBA8gBAPgBAZgCOaACwDHCAgoQLhiABBgnGIoFwgIEECMYJ8ICCxAAGIAEGJECGIoFwgIKEAAYgAQYQxiKBcICCxAAGIAEGLEDGIMBwgIOEC4YgAQYsQMYxwEYrwHCAhQQLhiABBiRAhixAxjHARiKBRivAcICCBAAGIAEGLEDwgIOEC4YgAQYsQMY0QMYxwHCAhEQLhiABBixAxjHARiKBRivAcICBRAAGIAEwgILEAAYgAQYsQMYigXCAgUQLhiABMICBBAAGAPCAgQQLhgDwgIIEAAYFhgKGB7CAgYQABgWGB7CAgsQABiABBiGAxiKBcICBRAhGKABwgIEECEYFZgDAJIHBTE1LjQyoAeCwwOyBwUxMS40MrgHtjHCBwc3LjIyLjI4yAesAYAIAA&amp;amp;sclient=gws-wiz"&gt;a couple to several pounds currency&lt;/a&gt; to produce a book - bear in mind that &lt;a href="https://en.wikipedia.org/wiki/Printing_press"&gt;this is &lt;i&gt;after&lt;/i&gt;&amp;nbsp;the invention of the printing press&lt;/a&gt;. Using a handy &lt;a href="https://www.nationalarchives.gov.uk/currency-converter/#currency-result"&gt;conversion website&lt;/a&gt; I found, 5 pounds in 1500 is approximately £3,329 in 2017. Just before this period, it could be calculated that lower class people would take &lt;a href="https://www.reddit.com/r/history/comments/f7bdlw/comment/fiasqpk/?utm_source=share&amp;amp;utm_medium=web3x&amp;amp;utm_name=web3xcss&amp;amp;utm_term=1&amp;amp;utm_content=share_button"&gt;a couple of years worth of salary&lt;/a&gt; to afford the cost of a book (not saying they did, just grounding the relative cost to cost of living). 
A single page in this era could cost around a penny, with a day's work paying 4 pennies - so a quarter of a day's wages for a lowish level person in society (my calculations thus put this around £30 - 40, today).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;By the 1800s, books had decreased in cost to purchase to &lt;a href="https://www.google.com/search?q=how+much+did+it+cost+to+scribe+a+book+between+1800-1900&amp;amp;sca_esv=f9558c3169decf9e&amp;amp;rlz=1C1CHBD_enMT1074MT1074&amp;amp;sxsrf=ANbL-n4RgbZJavF4IrJ_OLmzL59tpEdULQ%3A1768733932365&amp;amp;ei=7Lxsaen_FeHr7_UP5-bqkA8&amp;amp;ved=0ahUKEwjppfat95SSAxXh9bsIHWezGvIQ4dUDCBE&amp;amp;uact=5&amp;amp;oq=how+much+did+it+cost+to+scribe+a+book+between+1800-1900&amp;amp;gs_lp=Egxnd3Mtd2l6LXNlcnAiN2hvdyBtdWNoIGRpZCBpdCBjb3N0IHRvIHNjcmliZSBhIGJvb2sgYmV0d2VlbiAxODAwLTE5MDAyCBAAGIAEGKIEMgUQABjvBTIIEAAYogQYiQUyBRAAGO8FMgUQABjvBUjLF1DyBljzFXABeACQAQCYAb4BoAGcB6oBAzEuN7gBA8gBAPgBAZgCCaAC1AfCAg4QABiABBiwAxiGAxiKBcICCBAAGLADGO8FwgILEAAYsAMYogQYiQXCAgUQIRigAZgDAIgGAZAGBpIHAzEuOKAH9iWyBwMwLji4B9AHwgcFMC4zLjbIByGACAA&amp;amp;sclient=gws-wiz-serp"&gt;around 3 shillings and sixpence&lt;/a&gt;, which is approximately £10.35 in 2017.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, if we compare the price of a book in 2026 with that, the price of, let's say a book* that aligns with the historical contexts of the prior prices we just discussed for the 16th and 19th centuries costs around £12 - 13 - a slight increase for hardbacks with paperbacks (which, I suppose, didn't really exist in the far flung past due to crude binding techniques) being around £8 - 9.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*Historically, proper books were less for entertainment than the vast majority of scripts are today. So, I'm comparing the approximate averaged price of an informative book on Amazon UK based on a quick look. i.e. not a cheap fiction novella you read on the plane to your holiday. 
;)&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Of course, between 2017 and 2025 there isn't any technological gap to drive costs down, so we're looking at the end-point of costs being more strongly affected by:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ol&gt;&lt;li&gt;Consumption (thus economies of scale), and&lt;/li&gt;&lt;li&gt;Raw material and shipping costs.&lt;/li&gt;&lt;/ol&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If you want to take a look at technological innovation, then we need to consider e-books - which, in reality, shouldn't be compared at all, due to the cost of the devices required to view them and the inconsistent pricing structures (entirely divorced from the cost to produce) that subscriptions and loss-leaders have introduced into the publishing ecosystem.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, what are we to conclude from this?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;From my perspective, the method of production, the materials and ecosystems around production, and the markets for these products have changed so substantially throughout these three periods that there is no comparison between them that could result in conclusions of any real value.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;Is it cheaper to buy books now? Yes.&amp;nbsp;&lt;/li&gt;&lt;li&gt;Are they a large proportion of the median person's daily wage? Not really.&amp;nbsp;&lt;/li&gt;&lt;li&gt;Do people still consume books in the same fashion between the three eras? Not on your Nelly!&lt;/li&gt;&lt;li&gt;Are the production costs and difficulties of production the same between the periods? No!&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's pointless to directly compare the periods that were quickly referenced above on this topic because nothing stayed the same.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, one aspect of this example doesn't really map onto the modern electronic technology sphere that we are discussing - there, the products are vastly different in both quality and capability between the start of consumer production in the 70s / 80s, the middle (90s / 00s) and today.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;A book is a book. It may look a little nicer or worse for wear, from time to time, copy to copy.
However, it is still printed words on paper for 90+% of the times we are talking about "a book".&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, let's take a more technological industrial example.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Example 2: Lightbulbs/light fixtures&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The early inventions of the incandescent light were fast and furious and I won't cover those here but let's skip past the very early years of Swan and Edison (when the price of a bulb could cost around a day's work) to the early 1900s. A bulb could cost &lt;a href="https://www.google.com/search?q=price+of+an+incandescent+light+bulb+in+UK+in+1900&amp;amp;sca_esv=651164c6481fa95a&amp;amp;rlz=1C1CHBD_enMT1074MT1074&amp;amp;sxsrf=ANbL-n7awrpOrILkvzHtZ4eqDDF2yW_ncA%3A1768741206231&amp;amp;ei=Vtlsab37DLuK9u8P46SpwAw&amp;amp;ved=0ahUKEwi9_q66kpWSAxU7hf0HHWNSCsgQ4dUDCBE&amp;amp;uact=5&amp;amp;oq=price+of+an+incandescent+light+bulb+in+UK+in+1900&amp;amp;gs_lp=Egxnd3Mtd2l6LXNlcnAiMXByaWNlIG9mIGFuIGluY2FuZGVzY2VudCBsaWdodCBidWxiIGluIFVLIGluIDE5MDAyBRAAGO8FMggQABiABBiiBDIIEAAYogQYiQVI_QhQAFicB3AAeAGQAQCYAf8BoAHRB6oBBTIuNC4xuAEDyAEA-AEBmAIEoALsA5gDAJIHAzIuMqAHjxiyBwMyLjK4B-wDwgcFMC4zLjHIBwmACAA&amp;amp;sclient=gws-wiz-serp"&gt;around 2 shillings and sixpence&lt;/a&gt;, which is £9.77 in 2017 and lasted around 1200 - 1500 hours.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;By 2000, &lt;a href="https://www.google.com/search?q=price+of+an+incandescent+light+bulb+in+UK+in+2000&amp;amp;sca_esv=651164c6481fa95a&amp;amp;rlz=1C1CHBD_enMT1074MT1074&amp;amp;sxsrf=ANbL-n5bfJ-wtGdQvNRCOPl6MQX_tFkbyQ%3A1768742346490&amp;amp;ei=yt1saby5Hf6O9u8PuIf7qAo&amp;amp;ved=0ahUKEwi8xovalpWSAxV-h_0HHbjDHqUQ4dUDCBE&amp;amp;uact=5&amp;amp;oq=price+of+an+incandescent+light+bulb+in+UK+in+2000&amp;amp;gs_lp=Egxnd3Mtd2l6LXNlcnAiMXByaWNlIG9mIGFuIGluY2FuZGVzY2VudCBsaWdodCBidWxiIGluIFVLIGluIDIwMDAyCBAAGIAEGKIEMgUQABjvBTIFEAAY7wVIxhVQzwpY2hNwAXgBkAEAmAGvAaABrgeqAQMwLje4AQPIAQD4AQGYAgigAuAHwgIKEAAYsAMY1gQYR8ICBBAjGCfCAggQABiiBBiJBcICBRAhGKABmAMAiAYBkAYIkgcDMS43oAeZGrIHAzAuN7gH3AfCBwUwLjIuNsgHHYAIAA&amp;amp;sclient=gws-wiz-serp"&gt;the price of an incandescent bulb was £1&lt;/a&gt;&amp;nbsp;(£1.42 in 2017), with the efficiency and quality greatly increased - lasting for up to 2000 hours. Additionally, the options available to the consumer had proliferated and various types and colours of lightbulbs could be obtained for different purposes. 
More expensive versions of the lightbulb were available from the 1980 when compact flourescent (CFL) bulbs were commercially available &lt;a href="https://www.energy.gov/articles/history-light-bulb#:~:text=When%20Edison%20and%20his%20researchers,for%20the%20next%2010%20years."&gt;for around £25 - 35&lt;/a&gt;&amp;nbsp;(£203 - 284 in 2017) and this had &lt;a href="https://www.google.com/search?q=how+much+did+CFL+bulbs+cost+in+2000+in+UK&amp;amp;sca_esv=651164c6481fa95a&amp;amp;rlz=1C1CHBD_enMT1074MT1074&amp;amp;sxsrf=ANbL-n5OYjJG_IJWwLXieVWHMFO88P81dw%3A1768743990129&amp;amp;ei=NuRsaea7B52G9u8PzrbjsA0&amp;amp;ved=0ahUKEwjmruvpnJWSAxUdg_0HHU7bGNYQ4dUDCBE&amp;amp;uact=5&amp;amp;oq=how+much+did+CFL+bulbs+cost+in+2000+in+UK&amp;amp;gs_lp=Egxnd3Mtd2l6LXNlcnAiKWhvdyBtdWNoIGRpZCBDRkwgYnVsYnMgY29zdCBpbiAyMDAwIGluIFVLMgUQABjvBTIIEAAYgAQYogQyBRAAGO8FMgUQABjvBTIFEAAY7wVI83pQixdY6XlwAXgBkAEAmAG-AaAB2w2qAQQwLjEzuAEDyAEA-AEB-AECmAIMoALrC8ICChAAGLADGNYEGEfCAgUQIRifBcICCBAAGKIEGIkFwgIEECEYCpgDAIgGAZAGCJIHBDEuMTGgB8sqsgcEMC4xMbgH6AvCBwUxLjUuNsgHJIAIAA&amp;amp;sclient=gws-wiz-serp"&gt;reduced to between £2 - 6&lt;/a&gt;&amp;nbsp;(£2.84 - 8.53 in 2017) with lifespans of 6,000 - 15,000 hours.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Over the first decade of the 21st century, we saw the gradual introduction of LED lights with the first proper "bulb replacement" introduced over 2009 - 2010 &lt;a href="https://www.google.com/search?q=how+much+did+LED+bulbs+cost+in+2010+in+UK&amp;amp;rlz=1C2CHBD_enMT1074MT1074&amp;amp;sca_esv=c889e3cf512c13a4&amp;amp;sxsrf=ANbL-n47D8QQo5yaImUMDcIJJ9dQoIUwAQ%3A1768743970861&amp;amp;ei=IuRsacWiNP_h7_UP9fSK4AQ&amp;amp;ved=0ahUKEwjFu9PgnJWSAxX_8LsIHXW6AkwQ4dUDCBE&amp;amp;uact=5&amp;amp;oq=how+much+did+LED+bulbs+cost+in+2010+in+UK&amp;amp;gs_lp=Egxnd3Mtd2l6LXNlcnAiKWhvdyBtdWNoIGRpZCBMRUQgYnVsYnMgY29zdCBpbiAyMDEwIGluIFVLMgUQABjvBTIFEAAY7wUyCBAAGIAEGKIEMggQABiiBBiJBTIIEAAYgAQYogRI04kJUNXYCFiPiAlwA3gBkAEAmAHAAaABnxCqAQQwLjE2uAEDyAEA-AEBmAIQoALIDcICChAAGLADGNYEGEfCAgQQIRgKmAMAiAYBkAYDkgcEMy4xM6AHwUayBwQwLjEzuAe-DcIHBjEuMTAuNcgHKIAIAA&amp;amp;sclient=gws-wiz-serp"&gt;for around £30 - 50&lt;/a&gt; (£34.68 - 57.79 in 2017).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, in 2025, virtually all lightbulbs are LED-based techology, costing around £1.74 - 2.74 with lifespans of 25,000 - 50,000 hours... and the variety and options of lightbulbs has exploded in the years since 2010 - the price is still decreasing even while more expensive options (wifi control? colour change?) are introduced!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, what can we conclude from this?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Again, from my perspective, we don't have much to conclude except that things got better over time, prices generally dropped, despite several HUGE technology changes over the years.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;Is it cheaper to buy lightbulbs now? 
Yes, with a caveat of total cost of ownership massively decreased compared to the purchase price being marginally cheaper per item in the past.&lt;/li&gt;&lt;li&gt;Are lightbulbs in as large a commercial demand as they historically were? No, I would say that they are in more demand as time progresses. We're stuffing LEDs into everything!&lt;/li&gt;&lt;li&gt;Are the manufacturing processes equivalent between the different periods? Not even close!&lt;/li&gt;&lt;li&gt;Does the purchase cost relate to "inflation" or "COGS and shipping"? The latter, by a large margin!!&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Look, I hope these examples are giving you at least a hint of where I'm headed in my arguments. Once again, it's pointless to compare what is being bought and used, despite ostensibly being the same product.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;A lightbulb is still a lightbulb - everything still "works" pretty much the same, only with exponentially less energy, for an exponentially longer lifespan of the product. Just from personal experience, I think the last time I changed a bulb was 3 years ago and that was 5 years after buying it (the other bulbs in the same fixture are still going, so it must have just been a bad bulb!)... Back in the day, your incandescent 60 W bulbs would last less time that that.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's get back on track:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Comparisons...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://youtu.be/wZZf6LM3wAU?si=cvxpUBASc5SSXPVB&amp;amp;t=474"&gt;At 07:54&lt;/a&gt; in the video they bring up a picture of two controllers, one in 2007 and one in 2025 (picture is the same for both but let's ignore that). The 2007 controller is $60 whilst the 2025 version is $65. The implication is that we're paying slightly more for the same product and it's below the rate of inflation while talking about the share of disposable income of modern day audiences being smaller. The last part being entirely true, the comparison doesn't hold up.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The Xbox 360 controller had a very cheap crappy d-pad, cheap but decent thumbsticks, simple digital buttons and analogue triggers. However, the most egreious omission is that the controller wasn't $60,&amp;nbsp;&lt;a href="https://www.shacknews.com/article/38252/xbox-360-pricing-announced#:~:text=Microsoft%20today%20officially%20confirmed%20Xbox,%E2%82%AC299.99%20and%20%E2%82%AC399.99)"&gt;it was $49.99&lt;/a&gt;.... Add to this fact that a modern $59.99 controller for the Xbox One has much higher quality components, improved ergonomics and improved wireless technologies. You can even go third party controller for the same or cheaper price and get hall effect joysticks and other advanced technologies like gyroscopic control. 
Back in 2007, the third party controllers were trash.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Nowadays? Third party controllers are both cheaper than official ones and actually probably better. &lt;a href="https://youtu.be/TNUf2LI-K8o?si=v7TGrmdd51Li-_ui"&gt;As LTT themselves pointed out a year ago&lt;/a&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, this comparison falls apart - they're comparing a product which has a brand effect on pricing, not on quality. But even if they were comparing fairly, the quality of the modern controllers is much higher than the ones we were producing and using back in 2007.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;No dice...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://youtu.be/wZZf6LM3wAU?si=57YeX_D5xr3Lj8oj&amp;amp;t=315"&gt;At 05:15&lt;/a&gt;, they talk about a comparison between a PC they built 11 years ago for 4K gaming, and they claim that in "every single measure, modern buyers have it better...". They then proceed to compare this PC with &lt;a href="https://pcpartpicker.com/guide/BkrxFT/magnificent-amd-gamingstreaming-build"&gt;one on PCPartPicker&lt;/a&gt; which they state is $1000 less (inflation adjusted - they forgot to add that!) and is "better in every conceivable way".&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I don't actually know how to respond to this claim. No, wait. I do - that's why I'm writing this article, using the pent-up energy I've accumulated after all the times various entities and people have claimed the past was better or that the present is better (being incorrect in both cases).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;My initial gut response is, "What the hell do you expect? That technology hasn't advanced in 11+ years?!" It's &lt;i&gt;just&lt;/i&gt;&amp;nbsp;such an idiotically vapid claim.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;LinusTechTips - "I bought a PC today and it was betterer than in the past!!"&lt;/blockquote&gt;&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It actually grinds my gears that this made it into a video segment about comparison of quality of life.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But really, what have they compared here? The two items that they've put against each other aren't in the same category. It's like taking a &lt;a href="https://youtu.be/aMYQwnxRtLo?si=fb3Kq1eQc0i-5KGg"&gt;Ferrari F355 and putting it against a Kia EV6&lt;/a&gt;... (well, not quite that big a difference, but it's not far from a good analogy):
the F355 was peak racing tech for the 90s while the EV6 is just a normal electric drivetrain and the EV6 runs much more closely than perhaps it should to the supercar!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's do our own drag race:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwLRwn2MLw_TtOJx4ohXQS7MtX5TPet8QWna5jJgAjb-YJkMPqmHZnL4obLoRZ-8cd5sVbBnfHhhwh9CJc_y0mZx7ShMvBejcp0I0GjiTW3HreB8yHKeDIL6z5fitzFvsts7bXbRy-8zVdqr38eldsg_OaJaQgquMcju5VcssEY5ZX7F1N9gUbbli4gvA/s838/PC%20parts%20and%20price%20comparison.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="499" data-original-width="838" height="382" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwLRwn2MLw_TtOJx4ohXQS7MtX5TPet8QWna5jJgAjb-YJkMPqmHZnL4obLoRZ-8cd5sVbBnfHhhwh9CJc_y0mZx7ShMvBejcp0I0GjiTW3HreB8yHKeDIL6z5fitzFvsts7bXbRy-8zVdqr38eldsg_OaJaQgquMcju5VcssEY5ZX7F1N9gUbbli4gvA/w640-h382/PC%20parts%20and%20price%20comparison.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Comparing apples to apples, we see a bit of a different story emerging...&amp;nbsp;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Feeding the 5000...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;While I can't source all these components, I can do some theoretical assessments as well as a price assessment.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;LTT's comparison isn't fair. They pitted &lt;a href="https://youtu.be/Cq-zqQiY-OA?si=w88hxDzokvttDiHM"&gt;a balls-to-the-wall PC from 2014&lt;/a&gt;, no expense spared, &lt;u style="font-style: italic; font-weight: bold;"&gt;ultimate&lt;/u&gt;&amp;nbsp;PC build to a &lt;a href="https://pcpartpicker.com/guide/BkrxFT/magnificent-amd-gamingstreaming-build"&gt;fairly decent high end PC from 2025&lt;/a&gt;. That's not apples to apples! Above, you will see I've put together an "average" 2014 PC and also a true, ultimate 2025 PC...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What we see is that the ultimate gaming PCs are approximately the same cost (I didn't do much cherry-picking of components, I just searched for "best X of 2025" and got each of these results - though I did select the DDR5 6000 CL26 2x24GB as a) there's no point going above 6000, b) CL26 is the lowest we can get, and c) I've read that the 24GB sticks can overclock better/be more stable - let me know if I'm wrong about that!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Similarly, the pretty nice PCs from 2014 and 2025 are both around $2700 in today's money. 
That's a pretty big coincidence in both cases, no?!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Well, it turns out that, aside from the GPU, all other components have gotten cheaper! &lt;a href="https://hole-in-my-head.blogspot.com/2020/07/next-gen-game-pc-hardware-requirements_36.html"&gt;Who would&lt;/a&gt;&amp;nbsp;&lt;a href="https://hole-in-my-head.blogspot.com/2021/08/the-relative-value-of-gpus-over-last-10.html"&gt;have thunk it&lt;/a&gt;? Sure, these are all relatively high-spec components, even the "average but pretty nice" PCs, but we can see that these are all great price reductions...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But that's not the whole story, is it?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You see, the vast majority of people don't buy these components, they're buying cheaper and the above comparisons are using&amp;nbsp;&lt;u style="font-style: italic;"&gt;launch prices&lt;/u&gt;... The &lt;b&gt;current &lt;/b&gt;price of these components is quite different.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;YES!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If we compare that 2014 Ultimate 4K gaming PC with a 2025 version, we will see that we are paying 50% more for "ultimate" performance eleven years later... That's &lt;i&gt;not&lt;/i&gt;&amp;nbsp;a good conclusion... What that implies is that technological progress is still ongoing but that the consumer is not benefitting from it...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9NTMsofm7NEVj5Yam3peiwk_Q1O08gSVceEzh1zXu0IPxZdNkLwPxoFMBxKfjtANfstyF5gd81emFtZ4TFZGB3T9dJYqUWYV1AS3ZHqYzaK7Tl0ISrxOeT8lKaTWxHyy45wKdgBnpUnWuh6AjFO8Tlps206Qk6RoSqZI89IkJ3jWKFIfZr6_2jrUc1Tw/s838/PC%20parts%20and%20price%20comparison%202.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="228" data-original-width="838" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9NTMsofm7NEVj5Yam3peiwk_Q1O08gSVceEzh1zXu0IPxZdNkLwPxoFMBxKfjtANfstyF5gd81emFtZ4TFZGB3T9dJYqUWYV1AS3ZHqYzaK7Tl0ISrxOeT8lKaTWxHyy45wKdgBnpUnWuh6AjFO8Tlps206Qk6RoSqZI89IkJ3jWKFIfZr6_2jrUc1Tw/w640-h174/PC%20parts%20and%20price%20comparison%202.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Yeah, those prices...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;To quote &lt;a href="https://youtu.be/wZZf6LM3wAU?si=qwHNUqCzVNfD21ao&amp;amp;t=336"&gt;LinusTechTips&lt;/a&gt;, "&lt;b&gt;&lt;span style="color: #274e13;"&gt;Sometimes, I think we get a little lost in the sauce when it comes to performance expectations&lt;/span&gt;&lt;/b&gt;"...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Only, there was no performance comparison performed? 
In fact, the relative performance of the systems in question is &lt;u style="font-style: italic; font-weight: bold;"&gt;entirely&lt;/u&gt;&amp;nbsp;overlooked. I mean, what even is this... I can't even...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What's shocking to me is that my "more reasonable" average 2025 PC is still pretty close in price to the good PC. There's only $1375 between the two and, considering most of the 2014 PC's components launched the year before (and new components were on their way in September), they were likely not even as expensive as the launch prices quoted in the LTT video and speculated in the above comparison (i.e. there was no Crypto/AI/etc. bubble occurring in 2014 to push up prices!). In comparison, 2025's launch prices are the &lt;i&gt;minima&lt;/i&gt;&amp;nbsp;for pricing and today's prices are &lt;i&gt;much worse&lt;/i&gt;... but, once again, mostly for RAM, SSD and GPU pricing.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, YEAH, I understand that we shouldn't be "mad" and that things are the way they are but is this even normal? The skyrocketing prices of RAM and storage are one thing but, in addition, we have the over-inflation of consumer GPU tech, too. We &lt;i&gt;just don't have&lt;/i&gt;&amp;nbsp;the same level of investment by these companies that datacentre applications do, and that's fine, but the end result is that we're laden with the same price increases due to massive amounts of process and &lt;i&gt;intentional&lt;/i&gt; architectural overlap which could be avoided.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;These companies have decided to save money by merging consumer and datacentre development into almost a single stack and, when push came to shove, they rightly chose their more lucrative clients.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, should we be "mad" at unfair business practices? Should we be "mad" at poor planning? Should we be "mad" at toxic politics and investments?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, I think we should. And in this sense, LinusTechTips' video has failed on all counts. This video in particular is indirectly pandering to authority and minimising consumer experiences on a huge scale, and you know what? I don't even blame them.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I can see the reasoning behind the video, I can understand the genesis of the idea: "People are angry, they shouldn't be SO uncontrollably angry and angry in an unfocussed manner. Let's make a video around that. You can still have good gaming experiences..."&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But that's not this video.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This video is a "whitewashing" of computer hardware history and ignores today's trials and tribulations.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;At best, it took too long in the pipeline... 
At worst, it was ill-conceived.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But let's try and do what LTT didn't:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtvUY5C9aWWBH_PFAqfpQeJYiYUqZSPwTQovhyphenhyphenMBAV0KDf0CUoKDFe64Oxg2c4Wx7sTAYZ8RNgRbH4PAd69BTx2QPlXzG80nPAxRn6vcki7wQK8zQH39lILEAPFVV11Z25IecdUiNO49Wh0596-az85wKW8NqzNLZWhrGNNFSd2X4tOTUklwqJhoUhXBY/s1600/2013-03-26_00001.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="900" data-original-width="1600" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtvUY5C9aWWBH_PFAqfpQeJYiYUqZSPwTQovhyphenhyphenMBAV0KDf0CUoKDFe64Oxg2c4Wx7sTAYZ8RNgRbH4PAd69BTx2QPlXzG80nPAxRn6vcki7wQK8zQH39lILEAPFVV11Z25IecdUiNO49Wh0596-az85wKW8NqzNLZWhrGNNFSd2X4tOTUklwqJhoUhXBY/w640-h360/2013-03-26_00001.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Performance...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The issue is that, although hardware reviewing has greatly improved over the years, it was "abjectly terrible" back in 2014 and in 2025 it's merely "okay" (though thankfully beginning to improve again!).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What can I compare between these types of PCs when 4K gaming wasn't even in the public consciousness back then?!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Long-running sites like TechPowerUp tested at a maximum resolution of 2560x1600 back in 2014 - a far cry from the 3840x2160 pixels of 4K. So, how can we do this assessment?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Well, LTT provided some very simple metrics and we can try and extrapolate from those.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There's no arguing that 1080p was the de facto resolution of 2014 but 4K is &lt;i&gt;still&lt;/i&gt;&amp;nbsp;not the resolution of 2025. But, let's try. TechPowerUp has the RTX 5090 running at an average of 147 fps at native 4K resolution in modern games. The three games that were tested back in 2014 are no longer part of any serious GPU testing suite but averaged 55 fps.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, what we're talking about is a roughly 167% increase in performance over 11 years at, essentially, the same price. Am I crazy or does that not seem excessive? 
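&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Here's the back-of-the-envelope arithmetic, using only the 147 fps and 55 fps averages quoted above. The per-year figure below is the simple division I'm using; the compound annual rate (also shown, just for reference) works out even lower:&lt;/div&gt;&lt;pre&gt;
# Back-of-the-envelope sketch using the averages quoted above
fps_2025 = 147.0  # RTX 5090, native 4K average (TechPowerUp)
fps_2014 = 55.0   # average of the three games from the 2014-era comparison
years = 11

total_increase = (fps_2025 / fps_2014 - 1) * 100                      # ~167%
simple_per_year = total_increase / years                              # ~15% per year
compound_per_year = ((fps_2025 / fps_2014) ** (1 / years) - 1) * 100  # ~9% per year

print(round(total_increase), round(simple_per_year), round(compound_per_year, 1))
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;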
That's only a 15% performance increase per year which, given that &lt;a href="https://hole-in-my-head.blogspot.com/2022/02/the-rate-of-advancement-in-gaming.html"&gt;my own figures&lt;/a&gt; show a 190% increase over the period of 2014 to 2020 alone, seems a pretty tame rate of advancement!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Accounting...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;At the end of the day, from what I see in the video and from what I observe in the wider context of the world, the comparisons that LTT have brought together are both meaningless and mildly condescending to the general public. Yes, they have a point - people shouldn't be so mad that tech prices have gone up. However, the actual reasons behind those price increases are infuriating.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Performing a proper analysis, as I have done here, shows that, no - we don't have it better than we did. An ultimate gaming PC in 2014 cost less than $5360 in today's money and a similarly conceptualised PC now costs around $8000 in today's market.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This logic extends down the stack. If an okay PC in 2014 cost the same as one in 2025 (comparing launch price to launch price), then in the current market the same conceptual PC costs more, too. Unfortunately, the three most affected components - RAM, storage and GPU - are also usually the largest part of the budget for a min-spec PC. So, this goes all the way to the bottom... 
not just the super-spenders.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Paul of Paul's Hardware &lt;a href="https://youtu.be/qbJnWnp7UGk?si=M2xHbU5A4iGueWjE&amp;amp;t=439"&gt;highlighted a passage&lt;/a&gt; that I've seen elsewhere but no idea where and I think it's relevant to the discussion:&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;"The reason why RAM has become four times more expensive is that a huge amount of RAM that has not yet been produced was purchased with non-existent money to be installed in GPUs that also have not yet been produced, in order to place them in data centres that have not yet been built, powered by infrastructure that may never appear, to satisfy demand that does not actually exist and to obtain profit that is mathematically impossible..."&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;It's all just a big financial circle into hell which will come back to bite us, the consumer, in the ass when the bottom falls out...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;And for that, we should be angry.&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2026/01/do-we-really-have-things-so-much-better.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZKyX8s31SV8H3FmbbAgvBaCCRAdK7rULTPgbzHAhHrzo9lODm-ehFuq7geBitSX4ZmNadnsyk0Yn3z8_ymVsAgzN9kpSFLtPGoTM5xf-gw_BQ0Pa6CMWEvd0RS-vgDgdB1A0E8DLrGkL1O8OkO9AWMck99c01k6aOKogkVePx9b_saKqJ2y3Q2MOavyE/s72-w640-h360-c/Title.png" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-6732395146795291931</guid><pubDate>Mon, 29 Dec 2025 20:59:00 +0000</pubDate><atom:updated>2026-03-02T18:34:46.522+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">Roundup</category><title>PC Building Advice... 
or How to buy a PC in 2026...</title><description>&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbXaywVcp-P0ULntiHssvE9meWdMNJHGcAf8doFD6nZl43bo5ne1upLAYFds7aCzppZg2kyKRQhOaBrZ7waaGPrqBnY9lk4qj25rnk4h2r7QlYpfFD5CGlArKSzbjAVLbWJSgnw9J7b_FYcD6QCBUPAXY6YsMTqjAYF_1QPII0zMPRrETADSHEHWBr4Dg/s1024/Work's%20computer%20Commadore%20PC.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="681" data-original-width="1024" height="426" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbXaywVcp-P0ULntiHssvE9meWdMNJHGcAf8doFD6nZl43bo5ne1upLAYFds7aCzppZg2kyKRQhOaBrZ7waaGPrqBnY9lk4qj25rnk4h2r7QlYpfFD5CGlArKSzbjAVLbWJSgnw9J7b_FYcD6QCBUPAXY6YsMTqjAYF_1QPII0zMPRrETADSHEHWBr4Dg/w640-h426/Work's%20computer%20Commadore%20PC.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Hopefully, your PC will look a little better than this one did...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I spend a lot of time on Reddit giving advice on PC building and troubleshooting. I recently saw some build guides on YouTube that are great but don't address many of the common questions that new or inexperienced users have.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Additionally, many websites and YouTube hardware enthusiasts focus purely on the theoretical performance per part and, in my opinion, while the scientist in me loves that data, it doesn't necessarily help these segments of the consumer base make legitimate purchasing decisions.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;So, I've decided to put together my own little list of...&lt;/div&gt;&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyOIe8W_joTS8YZxrImt6fUwsxWlHDSBsp8PDPhNSGhEsTvgnTQebrpvpIPe8QteCCdQ6osl-Kivb3rftwC4qqmCdbdl67McV-__Sn9UGfKUdsSqHnNlk32s_XvhTsQZZYxNIfJFcsWBu9JzATim5FEiRXnpmDbbzQ_6XHn4Khzt950CyJsXrMsX2lEPk/s2608/7.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1205" data-original-width="2608" height="296" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyOIe8W_joTS8YZxrImt6fUwsxWlHDSBsp8PDPhNSGhEsTvgnTQebrpvpIPe8QteCCdQ6osl-Kivb3rftwC4qqmCdbdl67McV-__Sn9UGfKUdsSqHnNlk32s_XvhTsQZZYxNIfJFcsWBu9JzATim5FEiRXnpmDbbzQ_6XHn4Khzt950CyJsXrMsX2lEPk/w640-h296/7.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;An old photo, now... but I was still pretty happy with the way this system turned out. 
Plus, it turns out that white builds have a pretty good resale value!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&amp;nbsp;&lt;span style="color: #274e13;"&gt;How to Buy a PC in 2026...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's start with the actual purchasing order: try not to buy parts piece by piece over an extended period of time. This is also a common refrain on the various YouTube guides I alluded to above. The reason for this is that you can potentially screw yourself with regards to the return windows of the parts you're buying - i.e. you can't test the parts properly until you have them all, by which point the return window may have passed.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, there is a caveat to this advice and it's that you can buy certain parts in small groups depending on which specific parts you're buying. e.g. if you're buying a CPU with integrated graphics, you don't need to purchase a GPU to test the CPU + motherboard + RAM combo.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Additionally, if you are stuck in one area, just jump to that section and read what you need ;).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But what should you purchase first?&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Assuming that you're a complete newbie or someone who's going to build in a new form factor, the first thing you should nail down is...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbDAcL60090cEAl0pYpPo7AIBvTKSaXFoFSPHGZvWc-vYHfydXV3ok-h8CtsOWoKpHffQ7yCj3oNOUu-X6oUM8fg1dqWafuyYSVot1Kt9Rg6-06YSE7Eh0HBz53YnofHNhVTwjKmy4jRv7tccdNtKTcbi-twgR2Tbwmy6rFQiJK-V0_3rykH3bv0_quFI/s1920/The%20Occupation_20190930131607.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbDAcL60090cEAl0pYpPo7AIBvTKSaXFoFSPHGZvWc-vYHfydXV3ok-h8CtsOWoKpHffQ7yCj3oNOUu-X6oUM8fg1dqWafuyYSVot1Kt9Rg6-06YSE7Eh0HBz53YnofHNhVTwjKmy4jRv7tccdNtKTcbi-twgR2Tbwmy6rFQiJK-V0_3rykH3bv0_quFI/w640-h360/The%20Occupation_20190930131607.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;From the amazing game, &lt;a href="https://www.gog.com/en/game/the_occupation"&gt;The Occupation&lt;/a&gt;...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Case...&lt;/span&gt;&amp;nbsp;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The case defines the rest of your components, it sets your 
style and aesthetics, it affects the budget for your parts. It's also the only part that can potentially be purchased and left to lie around for months or years without being "built", and it's also one of the only parts which can last a decade and/or several PC builds, without fail.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's one of the reasons why&amp;nbsp;&lt;i&gt;&lt;b&gt;I&lt;/b&gt;&lt;/i&gt;&amp;nbsp;am always on the lookout for deals on cases. I am not building PCs right now, but when I see a good price on a good quality case, I pounce. Who knows when I will need one!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In fact, just this last week, I bought one for my dad - did he want it? No! He wanted to keep using his old case from 2009 (of course - see my point above) but that didn't make the new case any worse of a deal and it's going to be there for me (or him) for years to come if and when I need it*...&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;*Don't come to me for financial advice...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Coming back to the topic at hand: once you have your case, you can plan the rest of your components around it. What power supply and motherboard form factor does your case support? How tall is the case? Do you need cable extensions? Is your case a specific colour? What length and height of GPU does it support? How high of a CPU cooler? How large of a radiator and what size fans does the case support?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's the single component that can define your plan going forward and I have one piece of advice left: don't buy the cheapest case that looks simply okay. Get a decent case but not super expensive. For me, that price range is typically €50 - 90 but try and steer clear of no-name brands unless you can get a really good look at the case and/or an impartial review.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;When you're buying a case, look at the design to determine if it will have what you need and/or want.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, for example, you may want to use your PC as "cold" storage for projects with large amounts of information. That means you will be using spinning platter hard drives (HDD). Many modern cases don't even ship with an HDD mounting cage, meaning you'd have to buy an over-priced add-on (if it exists).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Another thing to look out for is airflow. Do you want to focus on aesthetics or do you just care about performance? This will dictate what types of fans you want in your system (and potentially, how many). If you plan on buying your own fans so that they all match, then you don't care whether a case has them pre-installed or not. Otherwise, you can focus on cases that have fans pre-installed. Of course, just because a case includes fans doesn't mean they are any good. 
Maybe they're noisy, maybe they can't be controlled by the fan curves you can enable in software, so they will just sit at maximum speed when they're turned on. Related to this are the mounting systems in the case itself: can you mount a 240 or 360 mm long radiator? Can you mount 140 mm fans in multiple locations?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;One last thing to think about: how much airflow do you want going through your case? &lt;a href="https://hole-in-my-head.blogspot.com/2021/12/case-airflow-and-design.html"&gt;I've mentioned before&lt;/a&gt; that airflow really isn't as complicated as people want to make it. It just takes a little thought to optimise things a lot. Personally, I dislike the "goldfish tank" style cases as you are reducing the number of air volume changes per unit time due to the fact that the airflow has to "turn", which reduces velocity. The simplest configuration is a straight line of airflow from front to back or top to bottom.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;These are all things you should consider when deciding on the appropriate case for you.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Cheap cases are often not worth the money you "saved", either in literal blood*, or in terms of compromises you'll have to make, usually later on once you try to do something and realise you can't.&amp;nbsp;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*The finishing of the cut metal makes a big difference in user experience - you can really hurt yourself when working in some of the really cheap cases...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;Expensive cases, like most other components that will be covered on this list, are not anywhere near worth their cost.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Speaking of which, the next thing you should understand as part of your build is your...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsbaJyH1Ek_oOEs8lN-JOuylTd-33Is6ehCGrsDezRZDGrw9zwK9f90-A-ZL31nvUGVDmBQZBUU9MS6h_RH5mDDBAa1mOBrMqK0vS8q_AOxAE5E0fLL7xJQY8nPDQSgkDH8x7IGDc2FRTFTrcrSNN6SUPr1dL-uAeR54l36Z9-ToUa0fYTOyiaQm2h7IE/s960/sergeitokmakov.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="540" data-original-width="960" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsbaJyH1Ek_oOEs8lN-JOuylTd-33Is6ehCGrsDezRZDGrw9zwK9f90-A-ZL31nvUGVDmBQZBUU9MS6h_RH5mDDBAa1mOBrMqK0vS8q_AOxAE5E0fLL7xJQY8nPDQSgkDH8x7IGDc2FRTFTrcrSNN6SUPr1dL-uAeR54l36Z9-ToUa0fYTOyiaQm2h7IE/w640-h360/sergeitokmakov.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Large or small, your budget is important to keep in mind when making all decisions...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div 
style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Budget...&lt;/span&gt;&amp;nbsp;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, here's the thing: many YouTube personalities will say things like, "Just spend the extra $50 - 100 for part X over part Y because either the performance/cost ratio is better or because part X is the better choice for Z reason.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;They often justify this by saying something along the lines of, "If you can afford to spend X amount on a PC, then you should be able to afford to spend X + X/Y on it". However, that's not how the majority of humans manage their money - they decide how much they are willing to spend, and then spend up to that... with sometimes a little discretionary spending above that limit.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Also, these conversations that are essentially moralising over how you should spend your money aren't really useful. If I need a laptop to work/live, etc and my budget is €500 but your argument is that if I can't afford to spend €650, I shouldn't be even buying the €500 models because they're not worth it. What's the alternative? I get a crappy Chromebook for €200?!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's not a solution and that assigned budget is there for a reason. Listen to what your customer/audience member is telling you. You only see a small sliver of their life, you don't know how they came to that decision.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You need to trust them.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, here's my imperfect solution: set your budget based on how quickly you need your PC up and fully running.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The shorter the time you need your PC ready and finished, the more "extreme" your budget needs to be - that works both ways! Either be prepared to pay over the odds, or sacrifice performance and optimisation to go cheaper.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Once you've got those two things settled, let's look at the platform that you will build your PC around:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Workload/Intended Use...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This defines the main components in your system. Is it going to be a gaming-focussed machine, or will it be split between gaming and work? Will it be only work-focussed? 
What sort of work?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For office-type tasks, an integrated GPU (iGPU) on the CPU is an advantage and might suffice for your specific use case - you don't need to spend on a dedicated GPU (dGPU)!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Research your specific workload - does it benefit from lots of CPU cores? Does it utilise a dGPU? Most professional software vendors will have a minimum and recommended specification on their website - just like games!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Only &lt;b&gt;you&lt;/b&gt; know your intended use case and programmes; start by googling those along with "what hardware does it need?" or "system requirements".&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You can always go to forums and ask, but have this info in hand to help those who will help you - otherwise you're just being lazy and making kind, random strangers do the legwork for you.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Once you know your use case, you'll be set to spec out your...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5aSgBe5ff9Y6d9Bm12kd7XiExMgIDeYB06KqULCqoN80L1yfMQRB6SR4bknLwp8OHCnlA92g5DaMK7sJfwC9k2rAMS3efQmAmjxoHkhHBr8GSVZ_3Akx2jz38AmmkTFkf6h03p8qwxhBxNeOcgIK_XmAUEcaJL3n6ejaXWL-dsW-F6NBNpN843lafB1o/s1030/AMD-RX%206600%20XT%20die.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="854" data-original-width="1030" height="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5aSgBe5ff9Y6d9Bm12kd7XiExMgIDeYB06KqULCqoN80L1yfMQRB6SR4bknLwp8OHCnlA92g5DaMK7sJfwC9k2rAMS3efQmAmjxoHkhHBr8GSVZ_3Akx2jz38AmmkTFkf6h03p8qwxhBxNeOcgIK_XmAUEcaJL3n6ejaXWL-dsW-F6NBNpN843lafB1o/w640-h530/AMD-RX%206600%20XT%20die.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Yes, it's a GPU. So, sue me... But, in all honesty, with the way &lt;a href="https://x.com/soft_fox_lad/status/1962531790410670235"&gt;the Intel leaks are panning out&lt;/a&gt;, it seems like CPUs are heading the way of GPUs...&lt;br /&gt;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Processor (CPU)...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, many people will wax lyrical about what is the better generation of processors but, in my experience, this just comes down to recency or brand-name bias. 
What is "best" is not cut and dry and, in my opinion, your processor can be the easiest purchase or the most difficult, depending on how far down the rabbit hole of "optimisation" you want to burrow...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The reason for this is that testing of CPUs is not as straightforward as for any other part: most gaming reviews &lt;i&gt;primarily&lt;/i&gt;&amp;nbsp;review performance once in game. Even then, reviewers need to make the CPU the bottleneck by playing games at low resolutions and/or low graphical settings.&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;blockquote style="text-align: justify;"&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;The problem with this is that things happen with more data being driven around the system that aren't represented in these testing environments. Yes, your CPU that won at low resolution and settings might perform better when you're in a situation where there's a CPU bottleneck at the higher resolutions, but it actually may not because a different part of the CPU or I/O system could be being stretched...&lt;/b&gt;&lt;/span&gt;&lt;/i&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Most application reviews focus on &lt;i&gt;super-niche&lt;/i&gt; professional applications and not on windows and office applications or they are standardised proprietary tests which are essentially black boxes and unavailable for the vast majority of users to even be able to confirm or invalidate.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Additionally, for gaming, the number of cores themselves doesn't matter too much compared with the quality (performance) of each individual core. A 6 core current generation part can outperform an 8 core high end part from several generations ago. The one aspect where more cores are generally better is for game &lt;a href="https://www.techspot.com/review/3073-dram-apocalypse-pc-upgrade/"&gt;shader compilation&lt;/a&gt; - where more cores will allow a faster first-boot process into the game. 
But again, core performance (newer usually equals better) also has an effect on this aspect...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;All of this means that CPU reviewing is very difficult and currently mostly focusses on the theoretical instead of the practical and, while that does have a lot of value, it's not necessarily reflective of the user experience.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We have various commentators, &lt;a href="https://youtu.be/6O5XGVaPDZo?si=L8YGfwOxcG-I_HeV"&gt;like TechYesCity&lt;/a&gt;, who think they can &lt;a href="https://youtu.be/DSZDbSEOqBI?si=R9IOH4We_ra43PFb"&gt;feel or prove&lt;/a&gt; that certain generations of processors are inferior in terms of application latency but, of course, these things are not clear-cut and there are many &lt;a href="https://vi-control.net/community/threads/12th-and-13th-gen-intel-core-processors-have-higher-latency.141761/post-5378586"&gt;counter-arguments&lt;/a&gt; regarding the root cause of any &lt;a href="https://forums.blurbusters.com/viewtopic.php?t=14187"&gt;user-observed issues&lt;/a&gt; - not least of which is the OS itself!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;From what I can understand of the situation, "latency" fixes appear to mostly be in the same arena as audiophiles who use nonsensical* devices that are essentially placebo effect generators**. i.e. people are seeing things which are not there. In fact, I would bet that a large proportion of issues could be traced back to system setup and storage/RAM choices.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*From a physics point of view...&lt;/span&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;**Apologies if this comes across as dismissive but there's a complete lack of methodology and a huge focus on anecdotal experience...&lt;/span&gt;&amp;nbsp;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;From my point of view, here are the important things to consider:&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;More cores are better for productivity applications (think video editing, CAD, software compilation)&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Faster single-core frequency is better (more cycles per second means more work done!) - Games typically want this aspect.&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Local memory (cache) on the CPU is important - generally, more equals better. 
- Games DEFINITELY want this!&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;A newer generation of processor does not always equal better - the architectural design of the CPU makes a difference!&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Paying more won't always give you more performance (and vice versa)&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://hole-in-my-head.blogspot.com/2020/12/in-defence-of-cores-and-future-of-gaming.html"&gt;I've touched on some of this before&lt;/a&gt;&amp;nbsp;but what I am looking out for when buying a PC is the best combination of all of these factors for the price point I want to achieve. Most of my time on Reddit is spent over at &lt;a href="https://www.reddit.com/r/buildapc/"&gt;r/buildapc&lt;/a&gt;&amp;nbsp;which is essentially a forum for troubleshooting new builds and PC upgrades, as well as giving building/buying advice. The sad thing (from my perspective) is that most people just go for the "best". It doesn't take any skill, nuanced understanding or challenge to pick a system comprising "9800X3D + 32 GB DDR5 6000 CL30 + RTX 5080 + 2 TB gen 4 SSD"...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;While outlets like Hardware Unboxed are correct that "in the here and now" &lt;a href="https://youtu.be/AR9V8RTvVcM?si=hhAIhZYXHD79OiKe"&gt;gamers don't need more than 6 CPU cores&lt;/a&gt; to game well, that's also not so forward-thinking. We could have made the same argument back in 2020 with 4-core CPUs, and I would not like to be gaming on a 4-core CPU now, in 2025.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Then there is the question about upgrading. Many people put a lot of emphasis on the fact that on AMD systems, you have an upgrade path. However, you have to think about your cost of ownership. The upgrades available on the AM4 platform (&lt;a href="https://youtu.be/NdpfV5IkUi0?si=1AdOSFgoXCSUTxYl"&gt;famous for its longevity&lt;/a&gt;) only really made sense because the initial three generations of CPUs were not that great: they were great for competition and consumer choice but, performance-wise, they kinda sucked. The Ryzen 5000 (Zen 3) series fixed that and, quite frankly, the move to Zen 4 and Zen 5 hasn't seen so much of an improvement in gaming applications beyond what those CPUs were capable of when paired with high-end system memory and storage SSDs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, what does this mean in modern terms?&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It means that core-wise, you're better off not buying a 6-core CPU in 2025 because frequency and performance are plateauing and applications are only going to get more multithreaded as time goes on. Nowadays, cost- and performance-wise, in-platform upgrades only make sense if you originally purchased a very cheap, low-end CPU near the beginning of the platform's life and upgrade to a more expensive high-end CPU near the middle-to-end of the platform's life.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That means, if you want performance, don't buy a Ryzen 5 7600 in 2025 because you're paying too much for too little. 
Similarly, don't buy a locked-down Intel CPU with a very limited operating frequency - unlocked, more performant parts from a previous generation will perform better. Having said this, because of the e-cores, 6 p-core unlocked CPUs are sufficient for gaming.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Speaking of Intel, their CPUs are complicated by the fact that they moved to a heterogeneous core design - i.e. they have some "performance" cores (p-cores) and some "efficiency" cores (e-cores) and, really, you only want to count the performance cores when it comes to gaming. Conversely, for AMD, you only really want to count the CPU cores which are on the same chiplet*.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*There are some games that benefit from more cores, even spread across the chiplets, but they are not that common at this point in time...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Circling back around to on-processor memory (aka "cache"): more is better - when it's associated with specific CPU cores. You will see a general increase in cache each generation because of the need to improve performance beyond just core architectural design. However, cache is not an easy win - it's expensive to put on the silicon as it takes a lot of area and it requires a lot of transistors dedicated to input/output data management and synchronisation of that data between each level of memory.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;From a user perspective, you also have to take into account where that cache is located. AMD uses chiplets, meaning that any cache not on the chiplet with the cores being used is not really going to provide that boost. For Intel, cache is dedicated to either e-cores or p-cores. That's quite a big deal because, like core count, the bigger number doesn't necessarily mean better.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yeah, it's not easy to pick a CPU for the uninitiated.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;So, here are my buying guides for right now in 2026:&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;br /&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;For a bigger budget:&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;Get a modern platform (i.e. 
DDR5 motherboard)&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;If you plan to only game, an 8 core CPU is sufficient&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;If you plan to use productivity applications, then more cores is generally better (do research)&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Get an unlocked processor with a high operating frequency&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;For a more frugal budget:&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;There's absolutely &lt;i&gt;nothing wrong&lt;/i&gt;&amp;nbsp;with going with older generations&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Try not to go low-end on those older generations as it's not worth the money to upgrade later - get the best you can &lt;i&gt;now&lt;/i&gt;&amp;nbsp;and you will actually probably save money&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;6 core, current generation CPUs are fine but still get unlocked parts, if they fit into your budget&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The one caveat to the frugal advice is price of associated components. DDR4 has recently doubled in price (compared to where it was earlier this year) as it has reached End of Life (along with the current supply shortages). That extra expense will likely kill any savings you could have made by going to an older platform... so maybe bite the bullet and go DDR5.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Speaking of which...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoq-JlDJyLuHxJVtxAodiEJ-pwGvutr1eytwhN6UieAEAMtkQYoSuJGSnizbtglgWwpyyUSuZ20AYHSkISM5mtiLMKvtLgIlUwBbaIZ1tWU742bTowMRsulxEwY4I_VZkP2Oqxg-yEaPt9X6gN0XnPXtESYWGQ-JPAO7whLbTi6bWBAz_5bqwCgDp1MV0/s4128/20220811_190546_alt.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1908" data-original-width="4128" height="296" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoq-JlDJyLuHxJVtxAodiEJ-pwGvutr1eytwhN6UieAEAMtkQYoSuJGSnizbtglgWwpyyUSuZ20AYHSkISM5mtiLMKvtLgIlUwBbaIZ1tWU742bTowMRsulxEwY4I_VZkP2Oqxg-yEaPt9X6gN0XnPXtESYWGQ-JPAO7whLbTi6bWBAz_5bqwCgDp1MV0/w640-h296/20220811_190546_alt.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Your motherboard needs to fit in the case you buy! Don't be the person who buys an ITX case but gets an mATX motherboard...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Motherboard...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The motherboard will be defined by the features you want. For the most part, you'll know if you want an expensive board because you'll be in the market for a high end or professional application. 
Similarly, don't cheap out on a low-end board model, as it should only really be paired with low-end parts that will have a shorter lifespan in terms of usefulness.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Low-end boards will also lack features that may make you regret the purchase. Examples of these are: lack of M.2 SSD slots, PCIe expansion slots, USB ports, processor support, and wifi/bluetooth support.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unfortunately, motherboards have increased in price over the years, meaning that a "low-end" board now runs up to around €110.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I typically advocate for getting a mid-range model. These are typically denoted with a 50 or 60 at the end of the model name (e.g. B650, B660) but things may change, going forward.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If you want to use faster system memory, generally speaking, motherboards with two memory slots will have better compatibility due to fewer signal-integrity issues. However, there's generally no downside to having four slots if you're not trying to break records.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Try and get motherboards with heat spreaders (heatsinks) on the various chips on the board. It's not 100% necessary but is generally a good indicator of both the quality of the motherboard and the longevity of the components. You don't need a lot - some motherboards absolutely overdo it in this department but you'll be able to weed those out by how expensive they are!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Integrated backplates are nice (you won't accidentally lose them) but try and get a motherboard which does not hide the CMOS battery underneath this backplate as it will make removal/replacement more complicated for no reason.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Two last things for purchasing a motherboard - it is VERY nice to have both a BIOS flashback button and troubleshooting error LEDs on the board. The former allows you to fix BIOS issues without even needing a CPU in the system and allows you to recover from otherwise unrecoverable situations. The latter helps you diagnose problems with your system - especially during the initial setup when you're putting all the components together...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Ultimately, the buying advice for motherboards is simple:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;Don't spend too much&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Make sure you have enough expansion slots for m.2 SSDs&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Try and get a motherboard with BIOS flashback and/or error LEDs&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Make sure you get the motherboard size which will fit in your case! 
(You have ATX, mATX and ITX)&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Make sure you get a motherboard which is compatible with your CPU!&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In general, you should always check compatibility of both CPU and system memory (usually referred to as RAM) on the manufacturer's website. These lists are not exhaustive, nor exclusionary - RAM kits not on the list can and will frequently work - but they are a good starting point... If you're ever in doubt - either post on one of the many subreddits that are dedicated to helping people in these situations (far too many, IMO) or use a free service like PCPartPicker (it has various regional versions, too).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOuWaYVmf-PkkAI93YRRNg56ui5RrROI2JtFQo-bZOhkGUpIiPeCc0yL_X3v0IxVL5iu9YapNtkA8WsKw-gzWboDl6Iw50YI0P5kRkcOSbLRzT4HX3WQPB7izv9YL7d6cOLppxfLywwCUyMz5hVbPWMqaM7P16SevXlVIbh8K0a_crLg9UNf449rQV6Xw/s1920/Title%20image%202.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOuWaYVmf-PkkAI93YRRNg56ui5RrROI2JtFQo-bZOhkGUpIiPeCc0yL_X3v0IxVL5iu9YapNtkA8WsKw-gzWboDl6Iw50YI0P5kRkcOSbLRzT4HX3WQPB7izv9YL7d6cOLppxfLywwCUyMz5hVbPWMqaM7P16SevXlVIbh8K0a_crLg9UNf449rQV6Xw/w640-h360/Title%20image%202.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Stick with two and things'll be good for you...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;h4&gt;&lt;span style="color: #274e13;"&gt;System Memory...&lt;/span&gt;&lt;/h4&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Otherwise (partially incorrectly and) informally known as "RAM", system memory is the backbone of your computer. In the past, the quality of the memory was of the utmost importance but, in my opinion, this is much diminished with DDR5 and newer technologies.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;For example, on DDR3 and 4, you needed to work in dual-channel mode (two sticks) because otherwise you could (and frequently would) encounter bottlenecks due to CPU access with one stick. Two sticks allowed the CPU to alternate the channel being utilised, improving latency to memory and, thus, performance - usually by quite a large margin.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;DDR5 technologies don't suffer as badly from this problem. 
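&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;If you want to see why channel count matters at all, the peak theoretical bandwidth sums are simple. This is just a sketch of the standard calculation (64-bit channel width assumed, helper name is my own, and real-world throughput will always be lower):&lt;/div&gt;&lt;pre&gt;
# Peak theoretical DRAM bandwidth sketch: MT/s x 8 bytes per 64-bit channel
def peak_bandwidth_gb_s(transfer_rate_mt_s, channels):
    # each transfer moves 64 bits (8 bytes) per channel
    return transfer_rate_mt_s * 8 * channels / 1000.0

print(peak_bandwidth_gb_s(3200, 2))  # DDR4 3200, dual channel: ~51.2 GB/s
print(peak_bandwidth_gb_s(6000, 1))  # DDR5 6000, single stick: ~48.0 GB/s
print(peak_bandwidth_gb_s(6000, 2))  # DDR5 6000, dual channel: ~96.0 GB/s
&lt;/pre&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;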
&lt;a href="https://youtu.be/hr6p1tqeM3M?si=nTdFvK6oASM6wxS-"&gt;Ancient Gameplays&lt;/a&gt; have shown the difference (or lack thereof) of a single stick of DDR5 but it does seem that newer games and combinations of more powerful CPUs and GPUs are showing that dual channel configurations are, indeed, &lt;a href="https://youtu.be/_nMu1KFkOC4?si=N6rqTmpEZfrD76QY&amp;amp;t=565"&gt;more optimal&lt;/a&gt;. Meanwhile,&amp;nbsp;&lt;a href="https://youtu.be/IstA56IAeVA?si=pgMKiihQn-WdrziM"&gt;Hardware Unboxed&lt;/a&gt; (&lt;a href="https://youtu.be/B5K8Rg-oDwU?si=ByUEM_ub4JyO0Mh9"&gt;and others&lt;/a&gt;) have shown that DDR5 doesn't even benefit that much from increased frequency or lower latency*. This is mostly due to the way newer memory technologies are designed - with DDR5 already operating effectively like dual channel mode on a single stick through increased "ranks". Add to this the error correction on these more modern RAM designs and you get more robust performance, in general.&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*Though both lower latency and higher speed are definitely good, the extent to which they improve are very minor in 99% of applications... and generally not worth the extra cost.&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;To put it simply, though you'll find everyone under the sun proclaiming that DDR5 6000 CL30* is the best overall DRAM to be purchasing, DDR5 speeds at or above 5600 MT/s are fine, combined with a CL (CAS Latency) of 40 or less will also be fine for the majority of applications.&lt;/div&gt;&lt;div&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;*Even I am guilty of this - mostly because it's at a sweet spot for price/performance and also highly compatible with both AMD and Intel CPUs and platforms. But it's mostly risen to fame because it's the exact specification that AMD recommends for its CPUs...&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div&gt;DDR4 also has this issue - people will over-emphasise the importance of both speed and latency but, in my testing, &lt;a href="https://hole-in-my-head.blogspot.com/2023/01/analyse-this-does-ram-speed-and-latency.html"&gt;these are not necessarily the most important factors&lt;/a&gt;. As for DDR5, anything at or above DDR4 3200 MT/s and CL16 or 18 is fine.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;[Addendum December 2025]&lt;/span&gt;&amp;nbsp;&lt;/b&gt;The big problem right now is that memory prices have gone through the roof, meaning that even buying a single 16 GB capacity stick has shot up by more than 100% in just a couple of months as of this post. What this means for actually being able to complete a PC in 2026 remains to be seen but let's just say that, when I started this post, saying that "system memory is the backbone" of the PC was a major understatement!&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;So, in this environment, don't worry about losing 16% of performance by going for a single stick instead of two. 
Try and get through this period as best you can...&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;And that's it:&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;Anything at or above DDR4 3200 with a CAS Latency of 16 - 18 is fine&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Anything at or above DDR5 5600 with a CAS Latency of 40 or below is fine&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;A single stick is okay, even if you potentially lose a little bit of performance&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Don't get less than 16 GB capacity - it's not worth the hassle&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;System memory is actually one of the easier items to choose in the process of building a PC, as is...&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoRTTqbJdtDSnhnbp01jOFSbPvRBy2PcZM5OxZvRfU_mlTW5FlyyOXDv1UeIyoDBKPBmJliCfx5rs5rah8tp1yDFvVnNdnOKSJtIHdXOiikKIG7jOPG3c-2TQHzyFaeidSbM1xlMnNLPVl5jd8Q5aNErjLD8f1WAcbahRoNoqWW46CTJKLQ_2H0-LowMQ/s536/HDD_SSD_Read_rates.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="407" data-original-width="536" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoRTTqbJdtDSnhnbp01jOFSbPvRBy2PcZM5OxZvRfU_mlTW5FlyyOXDv1UeIyoDBKPBmJliCfx5rs5rah8tp1yDFvVnNdnOKSJtIHdXOiikKIG7jOPG3c-2TQHzyFaeidSbM1xlMnNLPVl5jd8Q5aNErjLD8f1WAcbahRoNoqWW46CTJKLQ_2H0-LowMQ/s16000/HDD_SSD_Read_rates.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The humble HDD has been firmly supplanted by any form of SSD for OS installation, general storage and data transfer. The only places left for it, in general, are super cheap builds and cold(er) storage...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;h4&gt;&lt;span style="color: #274e13;"&gt;System Storage...&lt;/span&gt;&lt;/h4&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;YES! Everyone knows the term "Hard Drive" or "Hard Disk" (HDD). It's such a ubiquitous term that people (myself included) use it incorrectly to mean "storage". Much like the humble 3.5" floppy disk lives on as the "save icon", the "hard drive" lives on as shorthand for wherever you store your data.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;To be honest, there isn't even much for me to talk about with this one.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;There are certain things you need to know about SSDs. SLC, MLC, TLC, QLC are the formats of bit storage on the NAND flash memory: Single Level Cell, Multi-, Triple-, and Quad-Level Cell technologies... What these essentially boil down to is &lt;a href="https://memkor.com/insights-%2F-news-1/f/differences-between-ssd-nand-flash-slc-vs-mlc-vs-tlc-vs-qlc-nand"&gt;a trade-off between speed, storage size, longevity&lt;/a&gt;, and reliability.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;You see, SLC is the fastest, longest lived, most reliable but least storage dense technology and QLC is the most storage dense, slowest, shortest lived* and least reliable technology.
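&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The reason for that trade-off is easy to illustrate (a rough sketch of the underlying arithmetic only, nothing manufacturer-specific):&lt;/div&gt;&lt;div&gt;&lt;pre&gt;
# Why more bits per cell means denser but touchier NAND: every extra bit
# doubles the number of distinct charge levels a cell has to hold apart.
for bits, name in ((1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")):
    levels = 2 ** bits
    print(f"{name}: {bits} bit(s) per cell, {levels} charge levels to tell apart")
&lt;/pre&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;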
What this translates to is that SLC is effectively reserved for national defense and military applications, whereas QLC is the means by which SSD manufacturers bring low-cost storage to the masses...&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Now, for many years I was quite concerned about the longevity of SSDs** but it seems to have been a mostly unfounded fear in terms of the typical consumer use-cases. Consumers just aren't writing to their drives &lt;i&gt;often enough or in enough volume&lt;/i&gt;&amp;nbsp;for high error levels to occur or for &lt;a href="https://www.crucial.com/support/articles-faq-ssd/why-does-ssd-seem-to-be-wearing-prematurely#:~:text=Each%20time%20a%20NAND%20cell,year%20in%20an%20unpowered%20state)."&gt;write wearing&lt;/a&gt;&amp;nbsp;to have&amp;nbsp;a big impact on the performance and lifetime of the drive.&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote style="font-style: italic;"&gt;*"Shortest" still meaning years, here.&lt;/blockquote&gt;&lt;p&gt;&lt;i&gt;&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;i&gt;**I'm still concerned about the Xbox and Playstation console SSDs which are soldered to the motherboard, thus cannot be replaced, and which have &lt;/i&gt;constant&lt;i&gt; recording features that are always writing to the drive!&lt;/i&gt;&lt;/blockquote&gt;&lt;i&gt;&lt;/i&gt;&lt;p&gt;&lt;/p&gt;&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;The reason for this is a little complicated but to put it in a relatively short form, SSDs use a few tricks to&amp;nbsp;help&amp;nbsp;a) with performance; and b) with longevity.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;First up, let's start with longevity. All SSDs use algorithms in their onboard controllers &lt;a href="https://en.wikipedia.org/wiki/Solid-state_drive#Wear_leveling"&gt;to manage where and how data is written physically on the NAND chips&lt;/a&gt;. This helps individual NAND cells avoid becoming worn out. Additionally, many SSDs used to use onboard DRAM chips as a buffer in order to reduce the amount of unnecessary writing to the NAND storage (which also had the effect of improving throughput performance). Finally, &lt;a href="https://www.seagate.com/blog/ssd-over-provisioning-and-benefits/"&gt;SSDs can sometimes be&lt;/a&gt;&amp;nbsp;&lt;a href="https://www.kingston.com/en/blog/pc-performance/overprovisioning"&gt;over-provisioned&lt;/a&gt; - meaning that their physical NAND capacity is larger than their advertised capacity*.&lt;/div&gt;&lt;div&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;*Remember that storage sellers don't sell you the accurate number of bytes converted into Megabytes or Terabytes for historical reasons. That means that when you buy a 1 TB drive, you're getting 1,000,000,000,000 bytes - which the operating system reports as roughly 931 Gigabytes (because it divides by 1024^3, i.e. Gibibytes), &lt;/i&gt;not&lt;i&gt;&amp;nbsp;1,000 GB.&amp;nbsp;For the OS to report a full 1 TB (really a Tebibyte), the drive would need around 1.1 TB of decimal-counted capacity. Any extra capacity beyond these figures may indicate that your drive has some amount of over-provisioned NAND**...
&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;**But this isn't a fool-proof way to determine it, as &lt;a href="https://serverfault.com/questions/945522/why-same-disk-model-but-different-capacity"&gt;sector size&lt;/a&gt; can have an impact on how many "gigabytes" are actually recorded on your drive! Also, make sure you use &lt;a href="https://www.computerhope.com/issues/ch002288.htm"&gt;System Information&lt;/a&gt; to check for the actual size of the drive. Otherwise, Windows, especially, can mislead you based on partitioning... The only way I find reliable to understand if there is any over-provisioning is &lt;a href="https://www.techpowerup.com/review/wd-black-sn770m-2-tb/"&gt;from a review&lt;/a&gt;... but then, manufacturers can post-review &lt;a href="https://www.tomshardware.com/news/adata-and-other-ssd-makers-swapping-parts"&gt;change SSD components&lt;/a&gt;....&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Now, with performance, things become a little tricky. As I mentioned above, SSDs are not all created equal. SATA devices will be limited to around 500-600 MB/s maximum sequential transfer speeds and the out-of-order (random) transfer speeds will also possibly be quite a lot lower. PCIe-based NVMe devices have much greater performance potential, partially due to the specialised protocols of the PCIe standard used and partially due to the greater bandwidth of the PCIe bus compared to the SATA bus.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Additionally, SSDs lean on a few other features: on-device DRAM enables faster reads and writes by acting as a data buffer; a portion of the drive can be reserved as an SLC cache to improve I/O performance; and SSDs without a DRAM cache can &lt;a href="https://www.pcworld.com/article/784380/host-memory-buffer-hmb-the-dram-less-nvme-technology-thats-making-ssds-cheaper.html"&gt;utilise the system memory&lt;/a&gt; to improve performance instead. All of these features ensure that PCIe-based SSDs perform well, despite not working with top-of-the-line technologies.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Hell, NAND technology is so mature now that even QLC NAND devices are both &lt;a href="https://www.techpowerup.com/review/wd-blue-sn5100-2-tb/4.html"&gt;super reliable and performant&lt;/a&gt;... The ONLY thing that you really &lt;i&gt;DO &lt;/i&gt;need to be concerned with is how full you allow your drive to become!&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Seriously, all of these measures fail when an SSD is full - to be fair, HDD performance also drops as the drive fills up but, relatively speaking, to a lesser extent!&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;I typically keep my SSDs not more than 80% full and, ideally, not more than 70% full. This enables most of these technologies to function well and also allows the drive to shuffle data around without issue. This provision also technically applies to HDDs but you can get closer to the maximum capacity.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The last thing to note about performance is that it &lt;i&gt;just does not&lt;/i&gt;&amp;nbsp;have much appreciable impact on gaming. Nope. Nothing. It's all about three things in the following order: GPU/CPU/RAM.
Storage speed and performance is a distant fourth and the gap is unlikely to be bridged any time soon!&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Moving on, the last thing to talk about is capacity.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZSVJsyhdxLHnwwA0hXGSQEX3rw12W20MJT6bmzYVOtGmunUx2890PwJbixpO1vcPHfRnx8nTdy-oAcJGUaMRKaqUxLXXa8OzSC8kWqlcWAqv6klQNxVx6lcB0aq3PFq7LUalZSGICCzVfouGYfKyDQcwrvywWDGY1GAsCye2MIae9CeEks3XKYjAQEWU/s943/SSD_variance.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="713" data-original-width="943" height="484" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZSVJsyhdxLHnwwA0hXGSQEX3rw12W20MJT6bmzYVOtGmunUx2890PwJbixpO1vcPHfRnx8nTdy-oAcJGUaMRKaqUxLXXa8OzSC8kWqlcWAqv6klQNxVx6lcB0aq3PFq7LUalZSGICCzVfouGYfKyDQcwrvywWDGY1GAsCye2MIae9CeEks3XKYjAQEWU/w640-h484/SSD_variance.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;As you go down the stack, you will find that things fall apart... don't buy small SSDs!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The thing with SSDs is that capacity is not just about the amount of stuff you can store, it's also related to performance and longevity. You see, NAND not only comes in various types of technological implementations but NAND manufacturing technology has also changed over time, with more "layers" being able to be incorporated into a single chip.&amp;nbsp;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The problem here is that the more layers you have, the fewer NAND chips you need, as each layer contains additional cells and therefore more capacity per chip. This means that you have fewer data channels between the NAND and the SSD controller chip, because each NAND chip only provides a limited number of channels!&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;To summarise, as NAND technology advances, fewer chips are required to satisfy the rated capacity of the SSD and thus larger capacities are required to increase both data throughput and SSD longevity - as you can see in the image above. With more channels, you have faster read/write speeds; you have better random input and output operations; you have better endurance.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The ultimate conclusion of this is that, in the older days, an SSD of a particular capacity performed better as a ratio of the drive's overall maximal performance and had better longevity than a modern drive of the equivalent capacity.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Short version: a 512 GB drive in 2018 gave better performance, relative to the top-of-the-line models of its day, than a modern 512 GB drive does today.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;[Addendum December 2025]&lt;/b&gt;&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;NAND prices are now facing a similar, if less pronounced, fate to that of DRAM. Therefore, you'll have to shop around to find a decent deal on storage.
Normally, I wouldn't recommend less well-known brands (like Orico or Netac) but get what you can from where you can - they're mostly alright and you'll be able to identify the dodgy brands by the reviews and unrealistically "cheap" prices...&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Adding to this, Crucial have decided to discontinue their consumer products, which were historically a great, cheap alternative to Western Digital, Samsung, and Sabrent. This will be a major blow to storage prices as Micron (the owner of Crucial) will divert NAND production to server products, leaving a supply hole in the consumer space.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Thus, my recommendations for buying an SSD in 2026 are:&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;Don't worry about NVMe Gen 3, 4, or 5 - just get what you can at the price you want&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Get at least a 1 TB drive, but preferably 2 TB, for a modern (2024 onwards) build&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Don't be afraid if you're not buying from "the big three" storage brands&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&amp;nbsp;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&amp;nbsp;&lt;/b&gt;&amp;nbsp;&lt;/div&gt;&lt;h4&gt;&lt;span style="color: #274e13;"&gt;The Graphics Card...&lt;/span&gt;&lt;/h4&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Back in the good old days, when you bought a new GPU, it was like buying a window into a new world filled with wonder and boundless performance - before being forced to do the same thing again 6 months to 1 year later. The difference was that you spent $300 or less to get that experience.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Nowadays, it's all incremental. Many people will tell you it's because manufacturing advancements are slowing but I do not believe that this is the root cause of the situation. In my estimation, the real root cause is increased greed on the part of the manufacturers (just go and see their profits): they (and their shareholders) are not happy with 30-50% margins, they want more. Which, you know, is fair enough, but it's arguably both an untenable situation and an unrealistic goal.&lt;/div&gt;&lt;div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;
Unfortunately, the cost to &lt;i&gt;design, make, and ship&lt;/i&gt; a GPU has become much greater. This means that, to keep the same or larger margins, any process or design gains are lost amidst the drive to increase profit instead of keeping the ratio of profit the same. This results in a race to cut product value and, at least in the GPU environment, to spend more on "intangibles" like the software stack (e.g. ray tracing, RTX voice, upscaling, etc.).&lt;/div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;So, taking everything into account, the hardware gains are still present. They're just not being delivered to the same consumers. A consumer who is in the market for $300 is not the same consumer that is spending $700 but the position in the product technical stack can be similar*.&lt;/div&gt;&lt;blockquote&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*i.e. silicon size is similar, and relative performance position in the product stack is similar.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div&gt;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSUav75KV83iTx3wWFYBmPnh90bvaGqiZ5pwbti6lTtdNEMmp3N60WwptsRSSW5ZMH_wqXl7tHS0LnzF8-3w_cZnfwXkx2NdsyLeF3plLwoh-N_IuucLBpru2i0qj4FJjUznuOHSIDnQoBzGoWZuvuoMCxDIRRB049zXO-sxN-kUA9OhZYb4RdE2QOpb0/s1079/TPU_scaling.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="737" data-original-width="1079" height="437" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSUav75KV83iTx3wWFYBmPnh90bvaGqiZ5pwbti6lTtdNEMmp3N60WwptsRSSW5ZMH_wqXl7tHS0LnzF8-3w_cZnfwXkx2NdsyLeF3plLwoh-N_IuucLBpru2i0qj4FJjUznuOHSIDnQoBzGoWZuvuoMCxDIRRB049zXO-sxN-kUA9OhZYb4RdE2QOpb0/w640-h437/TPU_scaling.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RX 6750 XT &amp;gt; RX 9060 XT and RTX 3060 Ti &amp;gt; RTX 5060 Ti difference is pretty pathetic...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;The issue is that, at that sub-$500 price point, we're talking about getting a 10-20% performance improvement (and sometimes a VRAM regression in the same tier!) over a period of 5 years*. That improvement does come at a slightly reduced cost but the real generational gain is essentially non-existent because you could already get very similar performance at the same price in the previous generation.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;a href="https://hole-in-my-head.blogspot.com/2025/12/next-gen-pc-gaming-requirements-2025.html"&gt;We are not seeing gen-on-gen&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2025/05/so-whats-next-rate-of-advancement-in.html"&gt;improvements that matter&lt;/a&gt;...&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;For example, the jump from the RX 6750 XT to the RX 9060 XT is around 0 fps in rasterised performance.
For the RTX 3060 Ti to the RTX 5060 Ti, &lt;a href="https://www.techpowerup.com/review/zotac-geforce-rtx-5070-solid/29.html"&gt;it's 20 fps&lt;/a&gt;.&amp;nbsp;&lt;b&gt;&lt;u&gt;At 1080p&lt;/u&gt;&lt;/b&gt;.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*Not all of the low-end cards released at the same point in their generation but I'm counting from when each generation starts: those consumers had no upgrade until the part was released, and the technology for each generation is set at the start - they don't change the architecture half-way through a generation...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;Coming back to the topic at hand: if you want to game at 1080p High settings or 1440p Medium settings, then it's likely that the same tier of card can manage the performance you want. If you want better quality, ray tracing and higher resolutions/framerates, then you're going to have to move up a GPU performance tier.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Here's the thing with GPUs: &lt;i&gt;every product is good&amp;nbsp;at the right price&lt;/i&gt;.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;So, mostly, when people are saying that GPU &lt;i&gt;X&lt;/i&gt;&amp;nbsp;is bad, what they're really saying is that "&lt;i&gt;it's bad at that price&lt;/i&gt;". If you buy an RTX 5050 for €250 when the RTX 5060 is €280-290 and it's almost 30% more powerful with the same quantity of VRAM, you really should stretch your budget to the 5060... However, drop that 5050 to below €200 and you're looking at a pretty great deal - especially if you're only focusing on 1080p resolution and certain types of games.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Speaking of which, esports titles (Counter-Strike 2, DOTA 2, Valorant, etc.) typically run better on Nvidia GPUs - not always, but often. Meanwhile, certain game engines work better on AMD GPUs (the Call of Duty games spring to mind). So, always look at reviews and try and find reviews that specifically show your game (if it's graphically demanding). Sites like &lt;a href="https://www.techpowerup.com/"&gt;TechPowerUp&lt;/a&gt;, &lt;a href="https://www.techspot.com/"&gt;TechSpot&lt;/a&gt; and YouTube channels such as &lt;a href="https://www.youtube.com/@GamersNexus"&gt;Gamers Nexus&lt;/a&gt;, &lt;a href="https://www.youtube.com/@Hardwareunboxed"&gt;Hardware Unboxed&lt;/a&gt;, &lt;a href="https://www.youtube.com/@KitGuruTech"&gt;Kitguru&lt;/a&gt;, &lt;a href="https://www.youtube.com/@HardwareCanucks"&gt;Hardware Canucks&lt;/a&gt; and &lt;a href="https://www.youtube.com/@eTeknix"&gt;eTeknix&lt;/a&gt; are all good, reliable places for general information. Places like these will give you an idea of both productivity and gaming workloads. If you want more specialised information, then you're going to have to search for it.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Regarding ray tracing, since its introduction to gaming, Nvidia have been on top at each price point but AMD have caught up with the RX 9000 series, so there's not much of a problem going either way. Where the products really begin to separate is in other use cases.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;For example, AMD is the clear choice if you are going to be using a Linux-based Operating System (OS). This includes SteamOS, where Nvidia cards are difficult to get working.
On the other hand, Nvidia cards are typically more performant and compatible with professional programmes, programmes using the CUDA programming framework (created by Nvidia) and also most locally running* AI programmes.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmsCWNCACvinskQTMsGk72UoHGRXtQxYUtKAU2qZXSxRS11rOmEHD3O49cugphlo2WVZluhCbu2WxpYy0mB-MEt6vDRNFMuXXC3hC8AJuMHY9Kf22WYaRcOJaumPZMKDHuNKgCFCPdRglNuyJKOPN8w_UTSQhJp68XIhwpLfPdPp4H-dvWbZP9ZeBCec0/s1920/The%20future.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmsCWNCACvinskQTMsGk72UoHGRXtQxYUtKAU2qZXSxRS11rOmEHD3O49cugphlo2WVZluhCbu2WxpYy0mB-MEt6vDRNFMuXXC3hC8AJuMHY9Kf22WYaRcOJaumPZMKDHuNKgCFCPdRglNuyJKOPN8w_UTSQhJp68XIhwpLfPdPp4H-dvWbZP9ZeBCec0/w640-h360/The%20future.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Yes, we'd all love a huge, performant, cheap GPU with lots of VRAM but we also need to face reality...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Circling back to the case, you need to double-check the GPU size to make sure it fits, and that your PSU has the requisite wattage and cables to support the GPU (more powerful GPUs typically need more power and cable connections - unless you're running a new-style 12VHPWR or 12V-2x6 connector, in which case you'll either need an adapter to connect 2 or 3 8pin PCIe cables or a compatible PSU with the appropriate cable).&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*As in, running on your PC, not in the cloud, on a server...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;Aside from the actual performance of the GPU core itself, the quantity of VRAM is also important, as it allows higher quality textures to be used by the game, along with more concurrently used features, at higher resolutions, and &lt;a href="https://youtu.be/7LhS0_ra9c4?si=TDLDtdWpdLc5dkXw"&gt;also overcomes some&lt;/a&gt; &lt;a href="https://youtu.be/kEsSUPuvHI4?si=1jluJRnagMRD1c4w"&gt;PCIe/memory bottlenecks&lt;/a&gt; as data needs to be shuffled to/from the GPU less frequently, so operation becomes more efficient. You only need to look at &lt;a href="https://youtu.be/AdZoa6Gzl6s?si=QkUgz0FUeWIRNeTz"&gt;some of the testing&lt;/a&gt; &lt;a href="https://youtu.be/IHd95sQ-vWI?si=jYH-MUbjgbLddoGP"&gt;performed by reviewers&lt;/a&gt; to see the effect of too little VRAM versus&amp;nbsp;&lt;i&gt;having enough&lt;/i&gt;...&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;The issue with VRAM is that it's&amp;nbsp;&lt;b&gt;too&lt;/b&gt;&amp;nbsp;easy to get hung up on it and make a poor purchasing decision. For example, I've seen people buying the RTX 3060 12 GB or RX 7600 XT 16 GB instead of an RTX 5060 8 GB or an RX 9060 XT 8 GB - the latter two both handily outperform the former duo.
This is a bad trade because you won't encounter many situations where the added VRAM benefits you more than the decreased performance of the GPU core hurts you.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Similarly, you don't want to pick an RTX 5060 Ti 16 GB over an RTX 5070 12 GB - the latter has more than 25% better performance. If you want the raw fps, you can turn down a couple of settings to bring a game within the 12 GB VRAM capacity and enjoy generally higher quality graphics and a higher framerate.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWpo46X20GBLqyzLYpwXSzTjbF0DpgdsZA2TqjYbIbxyfRYBQTL_QYAtwb9D8ahmtRQM4duUchq37G4XW3_oJWaWzQuul_Bjhyl2QAz8yT-L5DX1J-fH1ciixt31WALyW5m7j9qNyKb_OuWeCF6A0HLmuM5Ou1ui1Pv-vPUTPwIRrGRVK66r5af20Ou1I/s2048/Big%20Navi%20tease.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1152" data-original-width="2048" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWpo46X20GBLqyzLYpwXSzTjbF0DpgdsZA2TqjYbIbxyfRYBQTL_QYAtwb9D8ahmtRQM4duUchq37G4XW3_oJWaWzQuul_Bjhyl2QAz8yT-L5DX1J-fH1ciixt31WALyW5m7j9qNyKb_OuWeCF6A0HLmuM5Ou1ui1Pv-vPUTPwIRrGRVK66r5af20Ou1I/w640-h360/Big%20Navi%20tease.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;It's important to not take the GPU vendors' performance claims at face value and instead wait for independent reviews to come online before making most purchasing decisions...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;It's also good to be aware of how many PCIe lanes the GPU uses to connect to the system (the GPU connects directly to the CPU using the topmost PCIe 16x slot on the motherboard. It will work in lower slots if they're the correct size but will generally lose performance as these are slower and connected through the motherboard chipset) and also the PCIe generation of both the card and motherboard.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;You don't need to match the generation - PCIe is a forward and backward compatible standard. However, lower-end GPUs typically have a cut-down number of lanes which means if you go&amp;nbsp;&lt;i&gt;too far&lt;/i&gt;&amp;nbsp;back in generation on the motherboard side of the connection, you'll throttle the data going to and from the GPU, potentially reducing its performance.&lt;/div&gt;
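&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;To put some rough numbers on that, here's a quick sketch using the headline per-lane transfer rates of each PCIe generation (theoretical maximums, purely for illustration):&lt;/div&gt;&lt;div&gt;&lt;pre&gt;
# Rough per-direction PCIe bandwidth in GB/s (headline rates with
# 128b/130b encoding; theoretical maximums, not measured figures).
GT_PER_S = {3: 8, 4: 16, 5: 32}

def pcie_gbs(gen, lanes):
    return GT_PER_S[gen] * (128 / 130) / 8 * lanes

print(pcie_gbs(4, 16))  # ~31.5 GB/s - a full x16 Gen 4 slot
print(pcie_gbs(4, 8))   # ~15.8 GB/s - an x8 card in a Gen 4 slot
print(pcie_gbs(3, 8))   # ~7.9 GB/s  - the same x8 card dropped to Gen 3
&lt;/pre&gt;&lt;/div&gt;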
&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Lastly, you need to take into account the cooling solution on the GPU. At this point in time, most GPUs are very performant at the power envelope they are working in - i.e., modern GPUs are power efficient. This means that most of them don't actually need huge coolers. However, GPU AIBs (Add-In Board partners - the companies that actually combine the GPU chip, circuit board, cooling solution and VRAM into a finished product) will make these huge coolers with lots of flashy stuff on them in order to be able to charge a premium, because their profit margins per item sold are pretty tight. Not to mention that there's no meaningful performance difference between variants of the same GPU from different vendors (e.g. ASUS versus MSI).&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Therefore, for most of the current generation cards, you're better off buying the cheapest version of the GPU you want. You can also look for reviews that test noise levels of GPUs as a larger cooler will typically (though not always) allow for cooler and thus quieter operation.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Finally, there are intangible/minor features of GPUs that you may want to consider:&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;RGB lighting will add a premium to the GPU but provide no benefit;&amp;nbsp;&lt;/li&gt;&lt;li&gt;A VBIOS switch will allow two VBIOS profiles to be loaded on the GPU. While this is nice to have in case your VBIOS becomes corrupted (I've never had this happen in over 30 years of gaming), the different VBIOSes loaded onto the card by the vendor are usually a nothing burger - not particularly useful despite supposedly being "different" - and the user isn't really able to make their "own" VBIOS, having to rely on the official ones provided by the vendor;&lt;/li&gt;&lt;li&gt;Connectivity is important - make sure you have enough HDMI/DisplayPort connections for your use case.&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;This last point can be important - you can get adapters which convert from HDMI to DP and vice versa but if you only have three video-out ports on your GPU and you require four, you're screwed. Another, sometimes invisible, item is the multi-monitor and multi-programme performance of GPUs.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;While all modern GPUs, including the integrated graphics on a CPU, will easily handle multiple monitors when running the desktop and office/web browsing programmes, GPUs in the low end will begin to crumble once you start running 3D programmes (e.g. games).&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;An example of this is running two high refresh rate monitors on a low-end card and trying to play an esport title. You'll get a big performance loss on cards at the RTX 5060 Ti level and below unless you reduce the refresh rate of the secondary monitor to 60 Hz or less. Leaving it running at 120+ Hz will cost you dearly.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Also, running other programmes that are typically considered "light" on the GPU at the same time can result in big drops in performance. Watching YouTube or Netflix whilst running a game on an RX 9060 XT may result in improper playback of the video content as the GPU struggles to run both the game and the video playback.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;So, if you plan on running multi-monitor setups, keep these things in mind.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Streaming and video editing also have their own demands on the hardware and it's typically advised to go higher-end when doing so, though it's not required if your time is not the utmost concern. Lower end cards will still do a decent job, just more slowly and/or at lower quality.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;In my opinion, this generation, the best value GPUs are the RX 9060 XT 16 GB, RX 9070 XT, RTX 5070 and RTX 5070 Ti.
But, as I mentioned, at the right price, you can justify most GPUs.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Here are my general pieces of advice for choosing a GPU in 2026:&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;Get a card with at least 12 GB of VRAM for 1080p and 1440p resolutions, 16 GB is better for long-term viability&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;16 GB VRAM is the minimum for 4K&amp;nbsp;&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Treat widescreen resolutions (2560x1080 and 3440x1440) as being the resolution&amp;nbsp;&lt;i&gt;above&lt;/i&gt;&amp;nbsp;- so, performance of a GPU at 1440p can be a guide for widescreen 1080, and performance at 4K can be a guide for widescreen 1440&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;If you're only playing (or mostly targeting) esport titles, get Nvidia cards - 8 GB VRAM may be sufficient for these titles&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;If you want to stream, get a stronger video card than required to play the game, as streaming will take some resources away from running the game&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;If you want to do a lot of video editing, you likely want to have a higher end Nvidia card as they include two encoders/decoders for outputting the finished video (RTX 5070 Ti and higher)&amp;nbsp;&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;In general, it's advised to not get the cheapest card, nor the most expensive cards (i.e. the RTX 5050 isn't the best choice unless you know exactly what you're getting into; meanwhile, the RTX 5090 and 5080 are pretty poor value for performance)&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;If you want good ray tracing performance, any card at or above the RTX 5070 level of performance will do&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;You can absolutely go to older generations and second hand cards but you might need to repaste the thermal compound on the GPU core in certain cases&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;If buying second hand, always ask for benchmarks and to see the GPU running&amp;nbsp;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&amp;nbsp;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDHKCdLI7GwcnNjrLpvH_q78QtcQNdanUtgq8XSoFBzvNFkn6iU6RCw97BujrZBaUyZD24qydJjU1K7Ybs3tyugelEC1qNJNF09g4QsGFj8KttINOBLNg5eRueshbuT5Ov6WxjO5OtHpYWY-ARHBGdtNcJbFkX5-LzIPqPFU6YSgwe_ml-g4y23T0DsaE/s1920/joke_RX.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDHKCdLI7GwcnNjrLpvH_q78QtcQNdanUtgq8XSoFBzvNFkn6iU6RCw97BujrZBaUyZD24qydJjU1K7Ybs3tyugelEC1qNJNF09g4QsGFj8KttINOBLNg5eRueshbuT5Ov6WxjO5OtHpYWY-ARHBGdtNcJbFkX5-LzIPqPFU6YSgwe_ml-g4y23T0DsaE/w640-h360/joke_RX.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;A bigger, flashier GPU construction with a meagre factory overclock will never bridge the gap with the crappiest version of a higher tier GPU...
Don't fall for this trick from the vendors!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;And that's how you build a PC in 2026!&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Two topics I haven't touched upon in this post are Monitors and peripherals such as keyboards and mice. If I find the time, I will get onto writing something up for those, as well.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;I
 hope this light guide helped some prospective buyers make easier 
decisions during the construction of their PCs.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Two last pieces of advice:&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;ol&gt;&lt;li&gt;You are not required to buy the latest and greatest technology. If your PC works for your use cases, don't spend money on things you don't need - unless you really want to! Also, older technology works&amp;nbsp;&lt;i&gt;just fine&lt;/i&gt;&amp;nbsp;(up to a point) for modern gaming. If you can find cheaper deals on older platforms and CPUs/GPUs that you think will do you just fine, then take them.&amp;nbsp;&lt;/li&gt;&lt;li&gt;Platform longevity isn't everything and, often, I feel like people focus too much on it. If you're buying a platform towards the end of its life, you don't benefit from the much vaunted "longevity". If you rarely buy computers - let's say, every 5 years - you may not benefit from the longevity. Longevity is best if you buy into the platform&amp;nbsp;&lt;i&gt;&lt;u&gt;early&lt;/u&gt;&lt;/i&gt;.&lt;/li&gt;&lt;/ol&gt;For example, AM4 was a great platform for people who bought in during Ryzen 1000, 2000, and 3000 - with each generation bringing more performance, right up to the 5000 series. However, we need to remember that AMD had crap performance in the beginning of the Ryzen 1000 series. They made huge advances over a short period of time because they started from such a terrible position.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;In contrast, the AM5 platform hasn't been so great, so far, because the 7000 series started off in a great place and the 9000 series couldn't perform that much better for the same price (the exception is the X3D chips). So, if you're buying into AM5 now but don't plan on upgrading for five years, then your best option from a value for money point of view is to get one of the X3D CPUs. In which case, your upgrade options are going to be slim going into the 10000 series and (if it's supported) the 11000 series.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;If you buy the Ryzen 5 7500F, you'll have a better upgrade path but will you actually save money over that five year period by going cheap? I'd argue that the benefits are going to be minimal.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;Of course, this conversation becomes moot when the competition fail to provide a decent alternative choice - Intel's latest platform is a big disappointment for gamers, in general. Though there are those who claim that you can tweak it to perfection, you need to know what you're doing and want to spend the time and energy to do so...&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;And that's it!&amp;nbsp;&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;See you next time and
leave your comments below or find me on Twitter/Bluesky...&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2025/12/pc-building-advice-or-how-to-buy-pc-in.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbXaywVcp-P0ULntiHssvE9meWdMNJHGcAf8doFD6nZl43bo5ne1upLAYFds7aCzppZg2kyKRQhOaBrZ7waaGPrqBnY9lk4qj25rnk4h2r7QlYpfFD5CGlArKSzbjAVLbWJSgnw9J7b_FYcD6QCBUPAXY6YsMTqjAYF_1QPII0zMPRrETADSHEHWBr4Dg/s72-w640-h426-c/Work's%20computer%20Commadore%20PC.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-1778600097080462842</guid><pubDate>Sun, 28 Dec 2025 16:26:00 +0000</pubDate><atom:updated>2026-01-15T10:35:16.038+00:00</atom:updated><title>Looking back at 2025 and predictions for 2026...</title><description>&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF6lF8CTeViYAsXjG8E3iCyds4TgSIlCNwJxx5rKwlDKIHgZhp_B4OKVccTdy9RbwFMqv-nEOlWaldJPZlru5BRC60a_p5_kHWR1ThxnArEZL-fzJplCd7nfomtxaPpvtw_6mnBdI_jMv9q0uRzjxZVb_XdUBWwDbDwm_F7I6ATdVtDtT74-crVoV_znA/s1420/Happy%20birthday!.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1078" data-original-width="1420" height="486" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF6lF8CTeViYAsXjG8E3iCyds4TgSIlCNwJxx5rKwlDKIHgZhp_B4OKVccTdy9RbwFMqv-nEOlWaldJPZlru5BRC60a_p5_kHWR1ThxnArEZL-fzJplCd7nfomtxaPpvtw_6mnBdI_jMv9q0uRzjxZVb_XdUBWwDbDwm_F7I6ATdVtDtT74-crVoV_znA/w640-h486/Happy%20birthday!.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;br /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Once again, the end of the year approaches and it's time to take stock of the follies I've so foolishly written down last year. Perhaps I'll also spend a bit of time attempting to prognosticate the future in this incredibly volatile world we currently live in, too...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;2025 Recap...&lt;/span&gt;&amp;nbsp;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Last year I managed to get an accuracy of 57% but given everything that's happened to derail plans this year (tariffs, shortages, etc) I'd be happy to get 20% correct for this entry. 
Let's get straight to the pain!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&amp;nbsp;&lt;span style="color: #274e13;"&gt;&lt;b&gt;The RTX 5060 will be the first Nvidia GPU to feature 24 Gb (3 GB) modules.&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;&amp;nbsp;The RTX 5060 will be the&amp;nbsp;&lt;i&gt;only&lt;/i&gt;&amp;nbsp;RTX 50 series GPU that utilises the 24 Gb modules.&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;The desktop RTX 5060 will either release with 12 GB VRAM or have a variant with the 12 GB configuration.&amp;nbsp;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;Well, this came a cropper - the memory manufacturers&amp;nbsp;&lt;i&gt;still&lt;/i&gt;&amp;nbsp;failed to produce any 3 GB GDDR7 modules for commercial use (as far as I can tell). We really need these modules because, in the midst of a RAM crisis/shortage, having more memory per integrated circuit module produced will reduce the number of chips that need to be made, thus improving the situation. That further delay in commercial production runs has led to an issue regarding the RTX 50 Super cards as there are no modules available for the increased capacity variants of the main series cards.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;Zero for three, so far...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&amp;nbsp;&lt;/b&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;RTX 50 series and RX 90 series will be unimpressive in the price to performance compared with the current generation (RTX 40/RX 7000). We will not get large performance gains at each price point - with the exception of the RTX 5090.&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The RTX 4060 was around 10% more powerful than the RTX 3060. The RX 7600 was similarly faster than the RX 6600 XT. This generation, the RTX 5060 was 20% faster and the RX 9060 XT was 25% faster. Those are decent single generation gains but essentially status quo considering the small gains of the prior generation.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://hole-in-my-head.blogspot.com/2025/12/next-gen-pc-gaming-requirements-2025.html"&gt;As I pointed out last time&lt;/a&gt;, we didn't even get a performance gain that jumped a single performance tier level. The RX 9060 XT didn't meet the RX 7700 XT's level of performance and the RTX 5060 didn't beat the RTX 4060 Ti - &lt;i&gt;which is&amp;nbsp;just pathetic&lt;/i&gt;! Worse still, the price of these products is maybe only €50 less than those last gen products were at their cheapest and neither provides more VRAM at that price.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Similarly, the RTX 5070 released at the price that the RTX 4070 Super tended to sell at towards the end of its lifecycle.
I bought a 4070S for around €570 and the 5070 released at around €580 - 600.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, zero real-world gains to be had on those products.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, the RX 9070 and 9070 XT "launched" at very reasonable prices but these turned out to be "fake" prices, despite months of delay which allowed AMD and their partners to stock up supply in order to meet demand. At the&amp;nbsp;&lt;i&gt;actual&lt;/i&gt;&amp;nbsp;prices these cards sold for in customers' hands, there was a very slight increase in performance per Euro but nothing impressive.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Since that initial rush for GPUs (since Nvidia failed to supply the market and AMD's larger than usual supply wasn't enough to fill the void [and Intel was MIA]) the 9070 has been seen for as low as around €520 but is currently hovering around the price of the RTX 5070. The RX 9070 XT has similarly been around the price of the RX 7900 XT for a long while, too, while almost meeting the performance of the €900-1000 RX 7900 XTX.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Meanwhile, the RTX 5070 Ti and 5080 slightly beat both of their prior gen equivalents (4070 Ti Super and 4080 Super) while remaining at around the same price points. Approximately 10% for the 5080 and 15% for the 5070 Ti. Unfortunately, these both fail to beat the tier above from the previous generation.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The RTX 5090 gets a decent 30% uplift but that's not impressive considering the accompanying price hike...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, I feel like aside from three GPUs (the RTX 5070, RX 9070 and XT) the rest of the stack were not impressive, and most of the stack from both manufacturers was not an improvement on price (especially near launch). So, you'll have to forgive me if I count this as correct.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;RDNA4 will still not be an impressive uplift in terms of ray tracing ability and will not be as performant as the RTX 4080.&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This was categorically wrong regarding the improvement in performance. Not only was RDNA4 an impressive leap for RT workloads but it was also impressive for raster, too! However, I was still correct that none of the cards would be as performant as the 4080 - the 9070 XT is around 5% below - close, but no banana!&lt;/p&gt;&lt;p&gt;I'm going to count this as 50/50.&amp;nbsp;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;RDNA4 will initially only release as a mid-range product - no low end GPU will be present for the first half of 2025.
The RX 9600/XT may not even see a release until 2026 but this is dependent on whether Nvidia launches the RTX 5060 or not during the year.&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This prediction was &lt;b&gt;*this*&lt;/b&gt; close to being correct. The 9070 and XT were announced at CES in &lt;a href="https://www.techpowerup.com/gpu-specs/radeon-rx-9070-xt.c4229"&gt;January and released in March&lt;/a&gt;. The RX 9060 XTs were announced in May and released in June. Just a few weeks later and it would have been the second half of 2025.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;The RDNA4 top card will release at around €649/$600.&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'm going to count this as correct since my prediction lay right in the middle of the range - the MSRP at launch was between €639 - 689. If you look at the "fake" MSRP in the USA, it was $599 - so, technically right on the money, even if they didn't stay at those prices. The cards are still not really below that €689 value all that much - with most in the low-to-mid-€700s.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&amp;nbsp;&lt;b&gt;&lt;span style="color: #274e13;"&gt;SteamOS will make a return for DIY desktop gaming PCs... (Linux through the back door).&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'm going to cheat with this one - I feel like I deserve it considering all the crap that the RAM shortage and DRAM manufacturer delays relating to 3 GB modules caused. The Steam Machine was announced and I'm going to count that as SteamOS making a return on desktop. Sure, people have been pulling the Steam Deck backup images into service to spin up their own "Steam machine" but I believe that the Steam Machine itself will make this process easier and improve support for hardware and software.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Overall score:&amp;nbsp; 44% correct.
A little below half (with a smidge of cheating!)...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Predictions for 2026...&lt;/span&gt;&amp;nbsp;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Last year, I tried to increase the number of predictions but I realise now that:&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ol&gt;&lt;li&gt;I chose too many predictions that rely on the same "reason" or topic.&lt;/li&gt;&lt;li&gt;I didn't make&amp;nbsp;&lt;i&gt;enough&lt;/i&gt;&amp;nbsp;predictions!!&lt;/li&gt;&lt;/ol&gt;So, the only logical solution is to go bigger, harder and longer...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;*Ahem*&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The only thing I'm a bit worried about is the total and complete chaos that "the industry" could be in during 2026 - it may be impossible to predict anything with anything approaching logic. So, with that comforting thought, let's get to it!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;The RTX 5060 Ti will be the first GPU to feature 3 GB modules for its VRAM...&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Before the rumours of Nvidia scaling back production of the 5060 Ti 16 GB because of the RAMpocalypse, I came to the realisation that it would be prudent for Nvidia to reduce the number of chips on their GPUs whilst still providing a decent amount of VRAM on their models.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For the 5060 Ti 16 GB, Nvidia require 8 of the current 2 GB modules. For a 12 GB variant, they would need only 4 of the 3 GB modules - and they could still charge a premium over the 8 GB variant. However, you can't apply this logic to any other GPU in the stack - the 5070 Ti and 5080 both have a 256 bit bus (meaning 8 modules is the minimum they can apply to the card unless they cut back the bus width).
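&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As a quick sanity check on those module counts, here's the arithmetic as a rough sketch (assuming standard 32-bit-wide GDDR7 modules in today's 2 GB and the still-awaited 3 GB densities):&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;pre&gt;
# Quick sanity check on GDDR module counts: each module sits on a 32-bit
# slice of the memory bus, so bus width fixes the module count (doubled
# if the card mounts modules on both sides of the board, "clamshell" style).
def capacity_options(bus_width_bits):
    modules = bus_width_bits // 32              # one module per 32-bit slice
    for density_gb in (2, 3):
        for count in (modules, modules * 2):    # single-sided vs clamshell
            print(f"{bus_width_bits}-bit: {count} x {density_gb} GB = {count * density_gb} GB")

capacity_options(128)  # 4x2 = 8 GB, 8x2 = 16 GB (today's 5060 Ti), 4x3 = 12 GB...
capacity_options(192)  # the 5070 class
capacity_options(256)  # the 5070 Ti / 5080 class
capacity_options(512)  # the 5090
&lt;/pre&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;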
The same situation exists for the 192 bit RTX 5070, 128 bit RTX 5060/Ti 8 GB, and the 512 bit RTX 5090 - none of them can have the number of GDDR modules reduced without hampering the performance of the memory architecture.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, it makes sense that Nvidia would remove the 16 GB model of the 5060 Ti and instead replace it with a 12 GB model - or cancel it altogether...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This unfortunately means that the RTX 5060 will likely never receive a 12 GB variant because those modules would go to the pricier model in the stack: the 5060 Ti where the margins are higher.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;The RAM shortage will be much better by the end of October&amp;nbsp;2026 with broad availability of certain models improving. However, pricing will still be at least double what it was at the beginning of 2025...&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, this is technically on the same topic as the first prediction but this is something I have believed for a while. There have been many people predicting that the sky will be falling for the next 2 - 3 years because of the AI vacuum sucking up the wafers previously allocated to DRAM, whilst others are much more upbeat about the ability of the industry to recover.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Some of the actual DRAM manufacturers are included in the former group whilst other industry professionals (AIB partners, experienced industry analysts, etc.) are included in the latter group. Personally, I think that both can be correct at the same time - they're just defining what the situation means differently. Having better supply for the consumer sooner, rather than later, is a "good" and "quick" outcome. However, having greatly inflated prices is a "bad" outcome. Those points of view cover both groups and hence my prediction.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Back in February 2025, I bought a 7200 MT/s kit for €117; in January 2024, I bought a 6000 MT/s kit for €125; and in August 2023 I bought a 6400 MT/s kit for €113. That means that I'm expecting similar kits to be available for around €200 - 230 at the end of October 2026. Currently, they're either completely unavailable or priced between €400 - 600!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;No new GPUs (not including the 12 GB variant of the 5060 Ti!) will be released in 2026...&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This really isn't a big stretch - we're less than a year into the current GPU generation. We're not expecting another generation until 2027. 
However, that does lead to my next prediction:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;Some GPU models (not including the 5060 Ti 16GB*) will be either discontinued or production may be announced to be temporarily halted - in order to focus on other models and ensure&amp;nbsp;&lt;i&gt;their&lt;/i&gt;&amp;nbsp;continued supply...&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's face it, if the VRAM situation doesn't improve then it's only logical that AMD and Nvidia will turn their focus to their products with the highest margins. That is most likely to be the RTX 5070 and above in terms of performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;*Correction 2026-01-15: I originally wrote "not including the 5060 Ti 16GB" but I meant to write "not including the 12GB variant of the 5060 Ti". I'd already noted that I expected the 16 GB model to be discontinued as part of the 12GB variant introduction.&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;Motherboards and/or CPUs, and/or power supplies will get cheaper...&lt;/span&gt;&lt;/b&gt;&amp;nbsp;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We're already hearing that &lt;a href="https://www.pcgamer.com/hardware/motherboards/the-dominoes-are-falling-motherboard-sales-down-50-percent-as-pc-enthusiasts-are-put-off-by-stinking-memory-prices/"&gt;sales of motherboards are down&lt;/a&gt; due to the RAM shortage. Not only that, but there are also rumours that AMD are &lt;a href="https://videocardz.com/newz/amd-reportedly-postpones-b650-chipset-discontinuation-amid-ddr5-price-spike"&gt;bringing the B650 series of boards back into production&lt;/a&gt;. I predict that, to stimulate sales and avoid various companies going out of business, motherboards and CPUs will see price reductions. Similarly, power supplies are just not going to be in demand, either. So, I can see a world where prices fall back to historic lows after the slight increases that followed the end of the cryptomining craze...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'm not sure that even those measures will be enough but almost none of these companies can survive on minimal sales numbers for multiple years... 
However, this could have knock-on effects for the next generation of CPU and motherboard compatibility: if motherboard makers&amp;nbsp;&lt;i&gt;need&lt;/i&gt;&amp;nbsp;an injection of cash, then CPU vendors would be hurting them by providing/allowing compatibility with existing motherboard platforms...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;SSDs will not come down in price, remaining elevated, and (second, but linked, prediction) supply will become constrained, with some models becoming frequently out of stock...&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;SSDs have also started going up in price. Currently, they are around €180 - 200 for a 2 TB Gen 4/5 drive whereas they were around €100 just a few months ago. The difference here is that there was no huge unexpected buy-up of NAND for AI like there was for DRAM - just a gradual increase in purchasing for AI and datacentre use, with capacity so maxed-out that &lt;a href="https://battleforgepc.com/article/storage-apocalypse-2025-why-ssds-and-hdds-are-about-to-get-impossibly-expensive/?srsltid=AfmBOoqjdsA4Z1G_C45rks9KSJzOeFvNLGQaKhyZmxv779qgMoMsQgcs"&gt;2026 is fully booked already&lt;/a&gt;! Additionally, NAND production is performed in separate facilities to DRAM, so there isn't as much contention for those particular resources and the vast majority of drives no longer use an integrated DRAM buffer.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Added to this, smaller capacity SATA HDDs are being produced in ever smaller volumes (with the industry focussing on high capacity 10+ TB drives for datacentre workloads) and their capacities haven't increased in around 10 years. Some sources say that SATA SSDs will begin to be phased out and this would likely be in favour of the U.2/U.3 formats of solid state storage used in server environments.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unfortunately, M.2 is proving to have been a mistake in terms of not only capacity but also flexibility&amp;nbsp;(motherboards typically don't manage to cram in more than 2-3 slots in total, and prices of larger drives never came down like they did for HDDs). 
I would really like it if the U.2/3 standards came to consumer boards, so that manufacturers can focus on a single product and also faster speeds for bulk storage can be achieved.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVImpFrdPrBnFNBMq5r-PsqHixEwXCjWFwE15CgGBFwfcuXlAw-1VZN4wACNBUXNmnaICdZwwWY1kz7cvnWQ0zVcwVHxpHO7HNSt-_AG7nObQ-F5JZn78Lf1L7JueAxReJZEOhkTMuK9zJ_pLcumBsqBP7AKnEXhek2aZ2chwnZOJOztIQdcIGKCmzn8o/s792/Corsair.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="611" data-original-width="792" height="494" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVImpFrdPrBnFNBMq5r-PsqHixEwXCjWFwE15CgGBFwfcuXlAw-1VZN4wACNBUXNmnaICdZwwWY1kz7cvnWQ0zVcwVHxpHO7HNSt-_AG7nObQ-F5JZn78Lf1L7JueAxReJZEOhkTMuK9zJ_pLcumBsqBP7AKnEXhek2aZ2chwnZOJOztIQdcIGKCmzn8o/w640-h494/Corsair.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Pick a card, any card...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;Brands like Corsair may temporarily discontinue some of their vast numbers of product lines...&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;While Corsair might not be the best example, even well-known GPU makers like Gigabyte have quite varied portfolios. Given the RAM and storage shortages, I could see one or more manufacturers "refocussing" their efforts on their "core competencies" or something of the sort... It all comes down to what each manufacturer can easily get ahold of and what brings in the greatest profit margins. Could a GPU manufacturer choose to invest more energy on monitors, for example? I could see that happening. Gamers can purchase monitors with relative ease... which brings us to the next prediction:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;Monitors will continue getting cheaper and with more and better features for the price...&lt;/span&gt;&lt;/b&gt;&amp;nbsp;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As I mentioned above, if gamers can't spend their hard-earned, inflation depreciated money on building a&amp;nbsp;&lt;i&gt;new&lt;/i&gt;&amp;nbsp;PC, they can spend it on peripherals. And what are the best quality of life upgrades available that gamers usually avoid buying instead of new CPUs and GPUs?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's right! 
Audiovisual equipment.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Higher quality monitors have already been dropping in price over the last year and there are even &lt;a href="https://www.xda-developers.com/oled-monitors-affordable-but-ips-better-in-many-ways/"&gt;some OLED models in the $500 price range&lt;/a&gt;. While I don't expect significant further decreases in price, I can see better software and display features (such as increased areas of local dimming, greater refresh rates, and better HDR performance) coming to each existing price range. This would put further pressure on the low-end GPUs as they &lt;a href="https://hole-in-my-head.blogspot.com/2025/12/next-gen-pc-gaming-requirements-2025.html"&gt;fail to work well at 1440p and high refresh rates&lt;/a&gt; in modern, graphically demanding titles...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Similarly, as we see revenue decrease for some of these brands, we may see a resurgence of audio equipment and marketing around the quality of such devices...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;The Steam Machine will launch at $550 for the 512 GB version and $700 for the 2 TB version...&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;With the DRAM and NAND problems, I don't think Valve can sell this very cheaply. I think the prices will be higher than they initially planned and have the chance to come back down if supply and pricing of the RAM and SSDs improve.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Personally, I think 512GB of storage is as egregious as 256GB was a few years ago - we shouldn't even be contemplating it in a PC that's meant for gaming but what can we do? We really seem to be sliding backwards, here.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;Valve will not announce any Steam Machine partners this year - so no higher end models...&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'm mostly just thinking this because I'm sure that Valve want to focus on optimising the SteamOS for the specific hardware in the Steam Machine and not for a more general approach. 
So, while I believe that Valve is very open to having third parties make bespoke hardware that's SteamOS "certified", the expense of RAM and Valve focusing on their own efforts probably mean we won't see other devices emerging - like the Lenovo Legion S (Steam version).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I also think third parties will prefer to act cautiously, allowing Valve to test the waters of this new market...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;AMD will also temporarily halt the production of the 16 GB version of the RX 9060 XT resulting in it (or most models) going out of stock.&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Similar to the rumours surrounding the RTX 5060 Ti, I can see AMD doing the same thing for their "RAM module intensive" GPU. All four Radeon 9000 series GPU models sport the same speed of VRAM (20.1 Gbps) so AMD would benefit immensely by reallocating those modules from the 9060 XT 16 GB to the 9070 or 9070 XT.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's a logical business decision.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;&amp;nbsp;Laptops, all-in-one PCs and mini-PCs will also see price rises and go out of stock. The remaining stock will, in the vast majority of cases, be barebones without RAM and storage...&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I've already witnessed this starting to happen. I was looking at a 32GB Minisforum box and it sold out within a day of my interest. The 16 GB model is still in stock but has increased in price. They've since introduced a no-RAM/storage option for the device...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is pretty bad for businesses... We're potentially looking at 8 GB becoming the de facto shipping laptop configuration. Leading us to:&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;Laptop OEMs will begin shipping more high-end models with 8GB RAM configurations in order to maintain inventory in the market...&lt;/b&gt;&amp;nbsp;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is pretty self-explanatory - in order to actually sell laptops to business customers, the OEMs will need to reduce to the minimum possible RAM configuration.&amp;nbsp;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;*Of course, I just watched &lt;a href="https://youtu.be/uvahiVBvn9A?si=hx_cXhAqqTY9t1Fv&amp;amp;t=342"&gt;GamersNexus' video&lt;/a&gt; (28th Dec) where they mention that &lt;a href="https://www.trendforce.com/presscenter/news/20251211-12831.html"&gt;Trendforce also predicts this same thing&lt;/a&gt;... 
I swear I didn't see this report before writing these predictions and this is still also a prediction from them, so I'm keeping it!&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;At this point in time, I see models up to around €400 with 8 GB RAM - which is almost a hate crime given how RAM hungry a lot of applications are (not to mention Windows, itself!). Things are likely to only get worse... Resulting in:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b style="color: #274e13;"&gt;Microsoft will release a "debloated" Windows 11 version which will focus on optimising for 8 GB system memory as a core experience...&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;They actually don't have a choice - Microsoft&amp;nbsp;&lt;i&gt;must&lt;/i&gt;&amp;nbsp;do this in order to retain commercial customers and large OEMs like HP and Lenovo. I guess that's a net positive for existing gamers, too....&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;We will not see the AI bubble burst this year...&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Sorry, everyone. I just can't see it happening until 2027 at the earliest - we haven't reached peak stupidity, yet. They're &lt;i&gt;only just&lt;/i&gt;&amp;nbsp;talking about &lt;a href="https://www.tomshardware.com/tech-industry/startup-proposes-using-retired-navy-nuclear-reactors-from-aircraft-carriers-and-submarines-for-ai-data-centers-firm-asks-u-s-doe-for-a-loan-guarantee-to-start-the-project"&gt;recommissioning the nuclear reactors from retired Navy vessels&lt;/a&gt;. So, plenty more rope available to hang themselves with!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Aaaaand, with that, we close off our predictions for another year. What are your predictions? 
Do you agree with any that I've made?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Wishing you all a Happy New Year and a safe year ahead!&amp;nbsp;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2025/12/looking-back-at-2025-and-predictions.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF6lF8CTeViYAsXjG8E3iCyds4TgSIlCNwJxx5rKwlDKIHgZhp_B4OKVccTdy9RbwFMqv-nEOlWaldJPZlru5BRC60a_p5_kHWR1ThxnArEZL-fzJplCd7nfomtxaPpvtw_6mnBdI_jMv9q0uRzjxZVb_XdUBWwDbDwm_F7I6ATdVtDtT74-crVoV_znA/s72-w640-h486-c/Happy%20birthday!.png" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-6977257231599668020</guid><pubDate>Thu, 25 Dec 2025 22:09:00 +0000</pubDate><atom:updated>2025-12-25T22:09:30.420+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analyse this</category><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">curmudgeon</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Analyse This: The Steam Machine's Specs are bad, no matter what Valve might say...</title><description>&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwgz5YfzbLG5s225ujRYJT1ujukzvelRxuz3X-Y4T-jGiCpnTr63HJHXDBRKDnFiqJ0iVZcy3OqS_rRUKId0Ptqc1rp-0jFHMUMSzIVXYgasquzu9qBJdhApjbw3ybufer89OX2TGjJl532l-xypsf7dMH-mhXduOfTzrvfuuiF7r4GvnMDZ_5fapJSRI/s1920/bandicam%202019-11-04%2012-06-56-229.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwgz5YfzbLG5s225ujRYJT1ujukzvelRxuz3X-Y4T-jGiCpnTr63HJHXDBRKDnFiqJ0iVZcy3OqS_rRUKId0Ptqc1rp-0jFHMUMSzIVXYgasquzu9qBJdhApjbw3ybufer89OX2TGjJl532l-xypsf7dMH-mhXduOfTzrvfuuiF7r4GvnMDZ_5fapJSRI/w640-h360/bandicam%202019-11-04%2012-06-56-229.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Mana from heaven or a gift from the devil?&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Much has been made over &lt;a href="https://youtu.be/OmKrKTwtukE?si=kC6vhj6GkLO8I91u"&gt;Valve's reveal&lt;/a&gt; of the Steam Machine. 
&lt;a href="https://www.thepoint.online/valve-steam-machine-price-prospects-2026/"&gt;With&lt;/a&gt; &lt;a href="https://www.gamesindustry.biz/the-steam-machine-could-be-an-industry-turning-point-opinion"&gt;opinions varying&lt;/a&gt; &lt;a href="https://www.eurogamer.net/400-with-a-controller-this-would-really-send-a-message-one-analyst-on-how-valves-steam-machine-could-make-the-biggest-impact-on-the-games-industry"&gt;from&lt;/a&gt;&lt;a href="https://www.youtube.com/watch?v=XNA6X4sh-mc"&gt;&amp;nbsp;gaming saviour&lt;/a&gt; &lt;a href="https://www.xda-developers.com/bad-news-steam-machine-fans-valve-will-price-it-like-pc-not-console/"&gt;to&lt;/a&gt; &lt;a href="https://youtu.be/T90vEXbHL5g?si=0JOoB5D1s7t9-hxU"&gt;irrelevant&lt;/a&gt; &lt;a href="https://www.reddit.com/r/Steam/comments/1ow7zuc/with_all_the_love_why_are_we_already_treating_the/"&gt;devil&lt;/a&gt;. Okay, maybe things aren't quite that polarising - I'm trying to make the intro a bit spicier! However, it's clear there are two main emerging narratives surrounding the soon-to-be-released Steam Machine - one positive, one negative.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Personally, I lie on the negative side of the equation for a couple of specific reasons with orbit around the fact that &lt;i&gt;&lt;b&gt;it's just not good enough&lt;/b&gt;&lt;/i&gt;.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, let's get into why I feel that way...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&amp;nbsp;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Playing to the Crowd...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Many people (&lt;a href="https://www.tweaktown.com/news/108907/valve-claims-steam-machine-outperforms-70-percent-of-all-pcs-on-steam/index.html"&gt;Valve included&lt;/a&gt;) have pointed out that the Steam Machine has specs which are better than 70% of all PCs on Steam.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Aside from the fact that the Steam Hardware Survey &lt;a href="https://youtu.be/RV847y2Q96c?si=NHNeeiMbYFMHO_0J&amp;amp;t=3081"&gt;is fundamentally flawed in its approach&lt;/a&gt; from a statistical perspective, basing your decisions and arguments on this average is also logically flawed from several points of view:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;The current average is the worst place to put your product.&lt;/li&gt;&lt;li&gt;The markets the Steam Machine will be placed onto at the price point it is at are not going to move the needle in a positive direction.&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's address the first point.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: 
justify;"&gt;While I have my own reasons, I'll defer to &lt;a href="https://x.com/SebAaltonen/status/1989263972353327464?s=20"&gt;a graphics engineer's opinion&lt;/a&gt; to begin with:&lt;/div&gt;
&lt;blockquote class="twitter-tweet"&gt;&lt;p dir="ltr" lang="en"&gt;The majority of devices in the Steam HW Survey are several years old. Their owners have been considering an upgrade for some time: "It's still good for one more year!"&lt;br /&gt;&lt;br /&gt;A new computer that matches the Steam HW average is not a good purchase. It was a good purchase 3+ years ago. &lt;a href="https://t.co/NdwR9leoJ1"&gt;https://t.co/NdwR9leoJ1&lt;/a&gt;&lt;/p&gt;— Sebastian Aaltonen (@SebAaltonen) &lt;a href="https://twitter.com/SebAaltonen/status/1989263972353327464?ref_src=twsrc%5Etfw"&gt;November 14, 2025&lt;/a&gt;&lt;/blockquote&gt; &lt;script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"&gt;&lt;/script&gt; 
  
  
  
  
  
  &lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Here's the issue: there are many more low-end computers in the world than there are high-end. The average will be skewed towards the low-end and those systems will be impacted by the second point:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The markets where the lowest-end hardware will be present will be overly represented by&amp;nbsp;&lt;i&gt;very old&lt;/i&gt;&amp;nbsp;hardware combinations. Not to beat around the bush - there are reasons why people in certain parts of the world are more likely to sport hardware that may be a decade old, or more*.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The issue, then is that the Steam Machine&amp;nbsp;&lt;i&gt;is not going to those countries&lt;/i&gt;. 100% a guarantee. At this point, &lt;a href="https://help.steampowered.com/en/faqs/view/339C-BC5C-3D89-53D9"&gt;Valve sells directly&lt;/a&gt; to 30 countries, with another 4 countries through third party sellers. That doesn't include most of Asia, South America, Africa, the Middle East. Just check out this survey from last year regarding &lt;a href="https://newsletter.gamediscover.co/p/which-countries-played-pc-and-console"&gt;the top 20 countries on Steam&lt;/a&gt; over a two week period...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Sure, some of those countries which are excluded have extremely low-income and maybe could not afford&amp;nbsp; a Steam Machine but the argument could be made that those same countries also potentially lack a PC gaming culture and also internet infrastructure which would support Steam content dissemination...&amp;nbsp;&lt;/div&gt;&lt;blockquote&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;b style="color: #274e13;"&gt;&lt;i&gt;*Sometimes, it's purely economic - low salaries, high tariffs or import taxes on computer technology that is not manufactured in the specific country. However, sometimes you just can't easily get access to the technology. For example, I've wanted an Optane drive for years and even tried to purchase one when I spotted deals. However, each and every time i was stopped by my banks - they wouldn't let me purchase from the USA. I couldn't get around the restriction because the amount was relatively large and a suspicious purchase and by the time I could get to speak to a customer service agent, the deals were either finished or stock was gone, etc.&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b style="color: #274e13;"&gt;&lt;i&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There's a well-known statistical meme where if you have two groups with different averages and you selectively re-classify one or more of them and place them in the other group the average of both groups can increase.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This can come about when the result for the population which is shifted to the other group has values below the average in the group they are part of but above the average of the group they move into. 
This can be (and has been) used by entities* to cook the books on various initiatives for which positive outcomes are "required" or "expected" and may be necessary to justify ongoing funding and/or employment. The thing is, you can do the opposite and lower both averages.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;*Usually political in nature... It's usually one of the reasons why new metrics or ways of measuring metrics are introduced. Cynical, I know.&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Coming back to the second point, the people who are going to be purchasing the Steam Machine are primarily going to be in relatively affluent countries that have ease of access to high-performing computer hardware.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The Steam Machine has a good chance to lower the overall specifications of the "average" system on the &lt;a href="https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam"&gt;Steam Hardware Survey&lt;/a&gt;&amp;nbsp;by working its way into homes which already have a high-end desktop system as an entertainment system addition. It&amp;nbsp;&lt;i&gt;won't&lt;/i&gt;&amp;nbsp;be likely to raise the floor of the low-spec gamers on that survey.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, the comparison is meaningless. It's a pointless statistic...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtVJM6wS3jhYUk75Fe-3H-63cegG2zjuNRLjy_urqyJDjnhZrDUGc6ivicjBe3iJxoG8kiynhN60fg_o4P_C1dlK7hloAv3pi_sYhQAUVso56dW5VDAr3WfLRheX3GR81NaciyEWNkUMAE__hibqwv9s9U9Gfgx4GtuLKLiNU23QAZh2yDxUZ85oYuzoE/s1267/Deck%201.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="641" data-original-width="1267" height="324" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtVJM6wS3jhYUk75Fe-3H-63cegG2zjuNRLjy_urqyJDjnhZrDUGc6ivicjBe3iJxoG8kiynhN60fg_o4P_C1dlK7hloAv3pi_sYhQAUVso56dW5VDAr3WfLRheX3GR81NaciyEWNkUMAE__hibqwv9s9U9Gfgx4GtuLKLiNU23QAZh2yDxUZ85oYuzoE/w640-h324/Deck%201.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;br /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&amp;nbsp;&amp;nbsp;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Misaligned Priorities...&lt;/span&gt;&amp;nbsp;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, we get onto the specifications themselves.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The Zen 4 APU* is limited to 30 W, so will be around the performance of a part like the Ryzen 5 7545U 28 W APU and, tied to this 
relationship is the fact that there will not be 6 "full" cores but instead 2x full cores and 4x compact cores**. So, a laptop CPU with laptop TDP. However, that's mostly fine. We're not going to be being held back by this relatively recent CPU architecture which can perform really well even at 30 W.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;&lt;a href="https://www.digitalfoundry.net/features/hands-on-with-steam-machine-valves-new-pcconsole-hybrid"&gt;*Minus the integrated GPU...&lt;/a&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;**Compact cores, otherwise known as Zen "c" cores, are lower power and also operate at lower frequencies than the full cores but are otherwise feature equivalent.&amp;nbsp;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;The system memory is 16GB - which is fine. &lt;a href="https://hole-in-my-head.blogspot.com/2025/12/next-gen-pc-gaming-requirements-2025.html"&gt;According to my own trending and predictions&lt;/a&gt;, 16GB will be the most required for the next couple of years, and second most required after that. It's likely that the RAMpocalypse will result in developers keeping 16GB as the most required option for a few more years than I was initially predicting...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The Steam Machine also has two models - one with 512 GB storage and the other with 2 TB storage. The former is woefully inadequate. It's bad.... with the sizes of games over the last 10 years, you're looking at being able to install only a few - maybe less than a dozen. If you go back further or play indie games, you're golden.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCY6epz7u1X7dfgCof74izB2kvZY2qxeThqDgBgU4r6H5eWEhJseiKFRHLz5RjzZGHysBB8v3BrmWEFptx1BKMP7AxK0oX3hGtIxByT71MYgWR8s9UC7_x0Ln_CSTZD-lLk1DZvGziribT0raVJwjyeW_nghi9ehanAgIav36ukYWoM38NRKk4bsNjzbo/s842/Deck%202.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="520" data-original-width="842" height="396" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCY6epz7u1X7dfgCof74izB2kvZY2qxeThqDgBgU4r6H5eWEhJseiKFRHLz5RjzZGHysBB8v3BrmWEFptx1BKMP7AxK0oX3hGtIxByT71MYgWR8s9UC7_x0Ln_CSTZD-lLk1DZvGziribT0raVJwjyeW_nghi9ehanAgIav36ukYWoM38NRKk4bsNjzbo/w640-h396/Deck%202.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;512 GB! Really?! Come on, Valve...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The GPU, on the other hand is a disaster.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Not only is the suped-up version of the RX 7600M in the Steam Machine a pretty weak GPU - cut down from the RX 7600 and reduced to 110 W TDP - but the technology is more than two years old, at this point. 
We're already on the much more effective, efficient and "fixed" RDNA4 in current desktop parts and RDNA5 is likely to enter the market at either late 2026 or mid-2027.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;RDNA3 is a stop-gap architecture which &lt;a href="https://youtu.be/eJ6tD7CvrJc?si=-awG8tk9gobbOiem"&gt;failed to deliver&lt;/a&gt; on the promises that AMD themselves made in the reveal. It's not a good architecture for ray tracing and isn't very performant when compared to the Nvidia counterparts and RDNA4. For N33 (the lower-end GPU chip for RDNA3) the performance uplift over N23 (RDNA2) &lt;a href="https://hole-in-my-head.blogspot.com/2024/10/the-performance-uplift-of-rdna-3-part-2.html"&gt;was around 3-5%&lt;/a&gt;&amp;nbsp;for the same core frequency.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Therefore, based on my own estimation, the chosen specs for the Steam Machine GPU will be around the performance of an RX 6600, maybe very slightly faster. (And this was before I was aware of the RX 7600M entry in the TechPowerUp GPU database!)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;blockquote class="twitter-tweet"&gt;&lt;p dir="ltr" lang="en"&gt;This will likely be around RX 6600 levels of performance. That's not an experience I'd want to be using in a new device on a 4K TV in 2026.&lt;/p&gt;— Duoae (@Duoae) &lt;a href="https://twitter.com/Duoae/status/1988689398603661489?ref_src=twsrc%5Etfw"&gt;November 12, 2025&lt;/a&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;This watered-down mobile GPU also has the misfortune to be paired with only 8 GB VRAM. This is unfortunate because Valve themselves are positioning this as a 4K console - something which it will struggle with, even when using upscaling...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You only need to look at the majority of large tech outlets and you will see story after story after video of how 8 GB of VRAM is not enough in 2025 unless you're playing on low settings and at 1080p and I have to agree with them when it comes to the AAA titles that get released every year. For older games and indie games, it's going to be fine. 
However, my trending is predicting that the average recommended VRAM quantity will be greater than 8 GB in 2026.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, just when the Steam Machine is launching, it's already backward-facing in both GPU technology as well as capability.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaXAfnTrieCXYTKjCvMhOSWEl4VfzNCwRWTPkRtht07bSrNWIwm4pL41iHJJFak7FFFmKFKUo8amOKAnWXfBR_U2s8pFsNRSVNunloYqedzFKSKvYLVTx_E3t1XGstd2Y-i2TAgYb0Z1_Bva4G0LfwrRtNQyai_SFwAOYpbiJIDgsc-Xb1-w2UZeyPKP4/s688/Deck%204.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="485" data-original-width="688" height="452" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaXAfnTrieCXYTKjCvMhOSWEl4VfzNCwRWTPkRtht07bSrNWIwm4pL41iHJJFak7FFFmKFKUo8amOKAnWXfBR_U2s8pFsNRSVNunloYqedzFKSKvYLVTx_E3t1XGstd2Y-i2TAgYb0Z1_Bva4G0LfwrRtNQyai_SFwAOYpbiJIDgsc-Xb1-w2UZeyPKP4/w640-h452/Deck%204.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RX 7600M is slightly below the RX 6600, so the increased power and slight up-clock will put it around the latter's performance, maybe 1-2% faster... &lt;a href="https://www.techpowerup.com/gpu-specs/radeon-rx-7600m.c4014"&gt;via TechPowerUp&lt;/a&gt;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;How to Fix the Steam Machine...&amp;nbsp;&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In my opinion, the minimum storage size should be 1 TB. So, the two models should be 1 TB and 2 TB. It's likely that they made the decision they did because of SSD and RAM prices. The 512 GB model is likely to be more reasonably priced whereas the 2 TB model is probably going to be over the cost of a simple storage upgrade as it would have existed over the course of 2024 and the first 7 months of 2025...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, this is likely a forced issue and likely, if the situation improves, Valve will address this in the future. The GPU issue is not so easy to fix.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, I could just wax lyrical and say that Valve should add more VRAM (insert comments about RAM shortage and vast overpricing here) and a more powerful GPU. 
However, the problem here is that AMD doesn't have one!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;They have zero RDNA4 mobile GPUs on the market*, and from the 7000 series lineup, the next powerful GPU is the 7800M which is a chiplet-based design (very bad) with 12 GB of VRAM (good) with 180 W TDP (sort of bad) that just would not fit in the TDP, nor the budget of the Steam Machine.&amp;nbsp;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*And to the best of my knowledge, they have not even announced any!&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Their other options are iGPUs which, unfortunately, top-out at 12 CU** - 43% of the quantity in the RX 7600M or, if you prefer, 16 CU less. These iGPUs are good for very light gaming (mostly 2D or simplistic 3D from many years ago) but not for the types of games that players may wish to experience on their TV.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The only products which have a significant amount of graphics processing power on their APUs are the already existent consoles (PS5, XSS/XSX) - none of which may be utilised by Valve.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;**There does exist the &lt;a href="https://www.techpowerup.com/336010/amd-mobile-rdna-4-lineup-led-by-radeon-rx-9080m"&gt;Ryzen AI 9 HX 370&lt;/a&gt; with 16 CU of RDNA 3.5 but, let's face it, Valve can't afford that chip - it's in high demand and ends up in €960+ devices...&lt;/b&gt;&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There are &lt;a href="https://www.techpowerup.com/336010/amd-mobile-rdna-4-lineup-led-by-radeon-rx-9080m"&gt;the rumoured/inevitable RDNA4&lt;/a&gt; mobile parts which will likely be announced at CES in January 2026 but those are set to release&amp;nbsp;&lt;i&gt;after&lt;/i&gt;&amp;nbsp;Valve's intended launch window. Added to this stumbling block is the rumour that the RX 9060M will feature only 8 GB of VRAM, leaving it without any real advantage at 4K resolution - something which you really don't want to try, even on the desktop GPU with 16 GB. It's just not powerful enough for it.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, in reality, the way to "fix" the Steam Machine is to&amp;nbsp;&lt;i&gt;&lt;u&gt;not&lt;/u&gt;&lt;/i&gt;&amp;nbsp;launch it in 2026...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Not really a very good take away but, as I've tried to point out, here, it's just not a very good system to be releasing&amp;nbsp;&lt;u&gt;&lt;i&gt;now&lt;/i&gt;&lt;/u&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, will I get a Steam Machine? I'm damn tempted to! But I could also just roll my own with the much superior hardware I already have on hand. Will it work as smoothly? 
Probably not, but I'd get a lot better performance when it does work!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Is the Steam Machine bad? Objectively, I think yes.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's likely going to be a poorly-priced and positioned product in the market in 2026. However, consumers are fed-up with Microsoft - both for Windows OS and for Xbox. There is a market for secondary systems which play undemanding games on the TV and also streaming within the home from a more powerful system running Steam on Windows.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Could Valve have made a better system? In this time and place? Most certainly not.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;They are constrained by the lack of advancement in the low-end of the GPU technology stack as much as the rest of us are - and that's a damn shame!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The Steam Machine v1 will, therefore, be a shadow of what it could have been. A good CPU, decent platform but crippled GPU that won't even support the FSR4 feature set.&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2025/12/analyse-this-steam-machines-specs-are.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwgz5YfzbLG5s225ujRYJT1ujukzvelRxuz3X-Y4T-jGiCpnTr63HJHXDBRKDnFiqJ0iVZcy3OqS_rRUKId0Ptqc1rp-0jFHMUMSzIVXYgasquzu9qBJdhApjbw3ybufer89OX2TGjJl532l-xypsf7dMH-mhXduOfTzrvfuuiF7r4GvnMDZ_5fapJSRI/s72-w640-h360-c/bandicam%202019-11-04%2012-06-56-229.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-3568851551772273674</guid><pubDate>Sat, 20 Dec 2025 17:06:00 +0000</pubDate><atom:updated>2025-12-20T17:06:15.605+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">screenestate</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Next Gen PC gaming requirements (2025 update)</title><description>&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGakvCmvGqAVc8XhMHXRlemAvTfvIzMkjQXTimhSLcPmT3Bhd5pBDCt98lRklGv9yMO-P1UP_qo-KAjiXsJosornknUKcw5mY9YVCEAqGXYlowAqf39afWKP36VkJDM_eySRuuQHfY3mzHwRgB8spaB7iEEOfGmOm_lxUcBDSCMZOwTjlujsXDUhszmCw/s1024/Title%202025.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="681" data-original-width="1024" height="426" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGakvCmvGqAVc8XhMHXRlemAvTfvIzMkjQXTimhSLcPmT3Bhd5pBDCt98lRklGv9yMO-P1UP_qo-KAjiXsJosornknUKcw5mY9YVCEAqGXYlowAqf39afWKP36VkJDM_eySRuuQHfY3mzHwRgB8spaB7iEEOfGmOm_lxUcBDSCMZOwTjlujsXDUhszmCw/w640-h426/Title%202025.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;On time, again!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;It's that time of year where "best of" lists start falling like cats and dogs. So, here's my contribution to the year round-ups: trending of the recommended game requirements for games released this year...&lt;/div&gt;&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;In Brief...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Again, I won't rehash the background of this ongoing study. &lt;a href="https://hole-in-my-head.blogspot.com/2020/12/next-gen-pc-gaming-requirements-part-4.html"&gt;The&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2021/12/next-gen-pc-gaming-requirements-2021.html"&gt;prior&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2023/01/next-gen-pc-gaming-requirements-2022.html"&gt;years'&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2024/05/next-gen-pc-gaming-requirements-2023.html"&gt;posts&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2024/12/next-gen-pc-gaming-requirements-2024.html"&gt;can&lt;/a&gt; address those questions... What I will re-state is that this is designed to track the average developer-recommended hardware for demanding and popular games on PC. This data can be used to trend the advancement of technology in the industry and may even be useful for certain types of devs (indies?) who might lack access to any sort of free, proper data-driven service. From a consumer perspective, I find this trending interesting for making PC purchasing decisions - specifically surrounding expected lifetimes of components that a user may wish to buy.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As usual, all data is &lt;a href="https://docs.google.com/spreadsheets/d/1O1_0bsnKrmazhoxtyJzuzXRKGFTIjvnRheF3nlVFDu4/edit?usp=sharing"&gt;available here&lt;/a&gt;... 
So, let's get to the main discussion!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Market Fluctuations...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Considering &lt;a href="https://www.pcgamer.com/hardware/the-ram-crisis-is-just-getting-started-micron-makes-the-difficult-decision-to-abandon-the-consumer-memory-business-to-focus-on-supplying-ai-data-centers/"&gt;The Great RAMpocalypse&lt;/a&gt; that is currently ongoing, we may as well start with memory!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGiZ3IjG36cFeypOrVgpnQKVW7P1zpDu0_I9Rvr0M5_-M9ruQ2vSdd3sgdrc_kIzdzzDe5Ho-DbsLqKQwQhzgFoehyphenhyphen80NkTcWDNu8Rj8k2Lzqpi24MW-kb9vpHHG_QlAYJicNbT2aHkpPv1Sqh0lk0KG1XXmcfFW-j70TQEwC-puSVjQJne9vLlDZLLJc/s753/RAM%20system%20per%20year.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="753" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGiZ3IjG36cFeypOrVgpnQKVW7P1zpDu0_I9Rvr0M5_-M9ruQ2vSdd3sgdrc_kIzdzzDe5Ho-DbsLqKQwQhzgFoehyphenhyphen80NkTcWDNu8Rj8k2Lzqpi24MW-kb9vpHHG_QlAYJicNbT2aHkpPv1Sqh0lk0KG1XXmcfFW-j70TQEwC-puSVjQJne9vLlDZLLJc/w640-h362/RAM%20system%20per%20year.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Climbing higher...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Fortunately, and definitely considering the current situation, the most recommended system requirements are for 16 GB of system memory. DRAM (specifically DDR4 and DDR5) is hyper expensive right now - we're talking about 3x or more for a 32 GB kit,&amp;nbsp;&lt;i&gt;which is bad... &lt;/i&gt;- and my general buying advice is to buy&amp;nbsp;&lt;i&gt;&lt;b&gt;only&lt;/b&gt;&lt;/i&gt;&amp;nbsp;what you need and nothing more.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In previous years, I've been advocating for getting 32 GB system RAM but with the current prices of several hundred dollars/euros/&amp;lt;insert currency here&amp;gt; that can no longer logically stand. The vast majority of games don't need more than 16 GB - so stick with that. If you're on DDR4, get 2x 8 GB kits. If you're on DDR5, while some testing &lt;a href="https://youtu.be/hr6p1tqeM3M?si=Ib5NQOUkXYgsehLN&amp;amp;t=938"&gt;shows a negligible&lt;/a&gt; loss in performance, &lt;a href="https://youtu.be/EVi2OX6uo5Y?si=5C9UsrylyYRXhlCO"&gt;other testing&lt;/a&gt; &lt;a href="https://youtu.be/_nMu1KFkOC4?si=d7RnpSEYwn0Pvcpl&amp;amp;t=565"&gt;shows a bigger&lt;/a&gt; performance loss by going to a single DIMM (stick). 
Though, aside from the difference in CPU speed and performance, the quantity of RAM could be a factor in the difference of these results.&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;I swear I remember more RAM testing after DDR5 was released which showed minimal gains (capacity equivalent 1 stick vs 2 sticks) but I can't for the life of me find the videos...&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;What isn't so good for this strategy is that we're still seeing a general increase in random one-off games (including the first game recommending 64 GB of memory!!), so the general trend is still ticking upwards with a +2.4% increase in games asking for 32 GB.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Overall, though, despite the current dire situation regarding DRAM pricing, gamers are unlikely to be suffering as long as they have 16 GB system memory in their PC since the minimum recommended specs are still typically not more than 16 GB.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFqggawQwcP8Bokx2iR_MLoiliy9ADK3AVxdh74qiRajNwcZpg0l9KsJG_l1el2Gs8tWaBITqMV6xSj7LubJz8RP7gA3UO_md-3AVJ8l1sk6kZ-lUNktvl-k2ZKhi-9-1J64xWMQQb36ksKwVRcbJLvgokgiqrAR8nmsPGLgKTcDSIoDH0FLcjDZly48s/s751/RAM%20video%20per%20year.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="423" data-original-width="751" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFqggawQwcP8Bokx2iR_MLoiliy9ADK3AVxdh74qiRajNwcZpg0l9KsJG_l1el2Gs8tWaBITqMV6xSj7LubJz8RP7gA3UO_md-3AVJ8l1sk6kZ-lUNktvl-k2ZKhi-9-1J64xWMQQb36ksKwVRcbJLvgokgiqrAR8nmsPGLgKTcDSIoDH0FLcjDZly48s/w640-h360/RAM%20video%20per%20year.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Moving onto VRAM and we can see that the upward trend is a bit more pronounced and certain. Developers have been holding back for a long time and they just want more VRAM for their more graphically and technologically advanced games. 
In addition to this, there is &lt;a href="https://x.com/SebAaltonen/status/2001000839574643138?s=20"&gt;definitely a push&lt;/a&gt; &lt;a href="https://x.com/rianflo/status/2001080321802977727?s=20"&gt;towards doing &lt;i&gt;more&lt;/i&gt;&lt;/a&gt;&amp;nbsp;on the GPU itself.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Sure, 8 GB is still the most recommended quantity and the third-most recommended quantity is 12 GB but the all-important second-most recommended quantity has jumped up to 10 GB.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;This really isn't a surprise to anyone who partakes in videos from GamersNexus, HardwareUnboxed or DigitalFoundry but the big takeaway here is the rate of change: +13.2% for 10 GB and +5.1% for 12 GB with a total of almost 24% of the games polled recommending more than 8 GB VRAM.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2Wo_2nPTWYI5pX3_xw2gwUkGtS7-tX2b6h1H6IbPKLFiub_5oE_2KEPq1ybqj_47WldFGmPekx6Esw7aXN7PdDeqA7shxGBomd_XcjM4LMwcI3QUYXnIdkHIGOuxS9-oVcE9LzR0nXl9c0IzaXTjXHrcobIBbg0SqNVzdqlUWUHVKUTNlSbqQLv-q_88/s2243/RAM%20percentages%20per%20year.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="513" data-original-width="2243" height="146" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2Wo_2nPTWYI5pX3_xw2gwUkGtS7-tX2b6h1H6IbPKLFiub_5oE_2KEPq1ybqj_47WldFGmPekx6Esw7aXN7PdDeqA7shxGBomd_XcjM4LMwcI3QUYXnIdkHIGOuxS9-oVcE9LzR0nXl9c0IzaXTjXHrcobIBbg0SqNVzdqlUWUHVKUTNlSbqQLv-q_88/w640-h146/RAM%20percentages%20per%20year.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Looking at the percentages, we're seeing that definitive push for increased VRAM more clearly...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We're also seeing some positive movement in acknowledging the issue from the GPU vendors, too. 
Although not on the cheapest models, more VRAM is making its way into lower-cost models as a standard and production of the 8GB variants &lt;a href="https://www.techpowerup.com/340577/nvidia-allegedly-reducing-supply-of-rtx-5060-5060-ti-8-gb-cards-to-partners"&gt;seems to have been reduced&lt;/a&gt; as a direct consequence.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://x.com/3DCenter_org/status/1912690194043842810?s=20"&gt;Based on&lt;/a&gt; &lt;a href="https://gamersnexus.net/gpus/gpu-prices-crater-inevitable-opportunity-screw-consumers"&gt;the less than inspiring sales of the 8 GB variants&lt;/a&gt;*, it's likely that next generation, the GPU designers will be targeting at least 9-12 GB VRAM** on the $300 - 350 price segment of the market.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*Based on anecdotal evidence the 8GB variants appear less often than even older RTX 3060 12GB cards in etailer best sellers lists and these cards have some of the largest below MSRP price reductions on the market (The RTX 5070 being the exception!)...&lt;/span&gt;&lt;/blockquote&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;**Assuming that they move to 3GB GDDR7 modules for use over a 96 -&amp;nbsp; 128 bit bus. Alternatively, if the DRAM supply improves in time for the design of these products, then they could clamshell 6 or 8 modules of 2 GB capacity and provide 12 - 16 GB VRAM.&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Of course, this is highly dependent on what happens with DRAM availability over the course of 2026.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;ComputAIng Power...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi08LoPkqEiubxXwHJPAcPdrPZnbUs7lRDoo3PuPU2Jce3v91XFopjGko_AbkYSS57YCpTSjD-nfVBQs3B5JsEpzHh9IIAEx0VY0QosdnBOt7Oa2qmOXKoJ49tQOXUofN7IZIJDc9b8X89tgkQ5VPdfrpOlWlyzOUPoiJNKu3_-fnTVcawPRn9eSraIHDk/s751/CPU%20average%20performance%20per%20year.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="751" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi08LoPkqEiubxXwHJPAcPdrPZnbUs7lRDoo3PuPU2Jce3v91XFopjGko_AbkYSS57YCpTSjD-nfVBQs3B5JsEpzHh9IIAEx0VY0QosdnBOt7Oa2qmOXKoJ49tQOXUofN7IZIJDc9b8X89tgkQ5VPdfrpOlWlyzOUPoiJNKu3_-fnTVcawPRn9eSraIHDk/w640-h362/CPU%20average%20performance%20per%20year.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Over the last year, we've seen a small 
increase in single core performance recommendations - which isn't surprising because the PS5 Pro isn't really any more powerful than the PS5/Xbox Series X from a single core standpoint. There's some frequency increase that brings it on par with the Series X but in terms of the microarchitecture, there's no new technology that has pushed the consoles beyond what developers would already be targeting for current generation performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On the other hand, multicore performance has increased by a good percentage (approximately 16%). This performance increase repeats what we've observed in previous generations because although desktop single core performance isn't increasing that much (X3D chips aside), multicore performance &lt;i&gt;does&lt;/i&gt;&amp;nbsp;see larger generational improvements and, as a result, as developers are recommending newer CPU generations, the multicore performance is getting increasingly stronger for the same number of cores.&lt;br /&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2qRJltX2cP05-EFKI7XAcIMejYw9LXhB1vf7ux3l0kDdx9-nyKuJbfmy5-6dTu92F5PijVLJKddOlxtLsk8orNm8ZTkW2LkgGdTJwmll73gUJQ_8PNyGwyCfARD7oAlxXS06AOWJ5n1Rowks0ApnT-srWmiTu7gj9yOLOlkxAU6VxsvgtBJAtSGgAH5M/s809/CPU%20rate%20of%20increase%20per%20hardware%20cycle.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="809" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2qRJltX2cP05-EFKI7XAcIMejYw9LXhB1vf7ux3l0kDdx9-nyKuJbfmy5-6dTu92F5PijVLJKddOlxtLsk8orNm8ZTkW2LkgGdTJwmll73gUJQ_8PNyGwyCfARD7oAlxXS06AOWJ5n1Rowks0ApnT-srWmiTu7gj9yOLOlkxAU6VxsvgtBJAtSGgAH5M/w640-h336/CPU%20rate%20of%20increase%20per%20hardware%20cycle.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;If we move across to looking at that improvement over a console hardware generation, we see the reduced "demands" from developers for consumers to realise the intended experience. In my understanding of this data, this shows the effect of an increased cross-generational period and the relative CPU performance of the console CPUs compared to the available "average" consumer desktop parts.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, the increase in single core requirements between the release of the PS5/XSX and the release of the Pro is 25% smaller than the one we had between the PS4/XBO and the Pro/One X releases. The multicore increase in requirements is closer but still 11% smaller.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, we're looking at potentially approaching a performance plateau in CPU requirements until we get stronger console hardware... At the end of the last generation, we had 2.3x the single core and 3.5x the multicore performance from the start of the generation. So far, we're 5 years into the current generation.
At this point in the last generation, we had a 1.8x and 2.2x increase for single and multicore, respectively. This generation, we have values of 1.4x and 1.8x.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's a real slow-down... and points to the fact that there are a LOT of CPUs out there that will play modern games without issue and we'll see this in the predictions section, later on.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Stable Confusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The average recommended GPU performance requirements for games have been increasing at an almost linear rate over the last couple of years and this continues in 2025 with another 11% performance increase.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-81uzLknBNJX1vWwueKtVeGf0b8vSo50Z3TfNNQddWMkegV-KYgH4taWwCXa1csAAPEKO-PcoZZiTFOXoT1HTdARa7dvuAhGNPuw5cdyTYdwM_b2b4hsF-eVqda_5S06hklgq6tufY05iiyFLzddnFedNAEEnnx8HMyI92v72CqlqYGcVljrnLvJMGvg/s751/GPU%20average%20performance%20per%20year%20per%20generation.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="751" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-81uzLknBNJX1vWwueKtVeGf0b8vSo50Z3TfNNQddWMkegV-KYgH4taWwCXa1csAAPEKO-PcoZZiTFOXoT1HTdARa7dvuAhGNPuw5cdyTYdwM_b2b4hsF-eVqda_5S06hklgq6tufY05iiyFLzddnFedNAEEnnx8HMyI92v72CqlqYGcVljrnLvJMGvg/w640-h362/GPU%20average%20performance%20per%20year%20per%20generation.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;This outlines the fact that we're nowhere near maxing-out GPU performance based on the graphical features developers are targeting.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, looking at the per console hardware generation increases, we see a similar slowdown to that observed for CPU performance. On one hand, this bodes well for consumers - your GPU will last you longer.
On the other hand, what does this mean for the availability of new consumer GPU products?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What incentive do AMD, Intel or Nvidia have to put out &lt;i&gt;better&lt;/i&gt;&amp;nbsp;low-end hardware if consumers really don't require much better technology - only a higher quantity of VRAM for all of these new features to be enabled?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, I'd argue that most of the lower end GPUs don't perform that well and we need higher performing products at lower prices to be able to push the industry and gaming landscape forward. But what we may risk here is one or more of the three GPU manufacturers dropping out of the low-end entirely for at least one generation as the profit drops out of the market due to all these shortage shenanigans.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6zA6wLxRkTVh6JelIsFkprqfGVQgmmLdyQje8KRmVJQdZqMUVxuZ6ptfESrKmWGVymLUQgmU9PDip22Cze9rYKlogP6KEmLzODwoAz_9X7i8zpHPYsT2iT3R0hStIPQ-4QfWRmi9Fqlzr72BCRfHVSNyLPI87Zewmvma6YUketDArsdBRxh0q0o3Om04/s749/GPU%20rate%20of%20increase%20per%20hardware%20cycle.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="429" data-original-width="749" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6zA6wLxRkTVh6JelIsFkprqfGVQgmmLdyQje8KRmVJQdZqMUVxuZ6ptfESrKmWGVymLUQgmU9PDip22Cze9rYKlogP6KEmLzODwoAz_9X7i8zpHPYsT2iT3R0hStIPQ-4QfWRmi9Fqlzr72BCRfHVSNyLPI87Zewmvma6YUketDArsdBRxh0q0o3Om04/w640-h366/GPU%20rate%20of%20increase%20per%20hardware%20cycle.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Developers just aren't demanding a lot from the consumer for GPU performance, year-on-year...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;The issue here, as I've mentioned in prior posts, is that we can't just keep considering 1080p as the de facto resolution for the rest of time. Monitor technology has advanced and continues to get cheaper and cheaper. Decent 1440p, medium refresh rate monitors can be had below $250 and there's even talk of OLED monitors &lt;a href="https://www.flatpanelshd.com/news.php?subaction=showfull&amp;amp;id=1759317900"&gt;on the horizon&lt;/a&gt; that are &lt;a href="https://www.xda-developers.com/2026-will-make-oled-monitors-affordable/"&gt;below $500&lt;/a&gt;&amp;nbsp;and NONE of those are going to be 1080p.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;No one should be looking at a GPU in 2025, let alone 2026, with 1080p gaming performance as the deciding factor.
We need to drop this expectation as it's letting the hardware manufacturers get away with giving consumers worse products that underperform.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You only have to look at VRAM at 1440p to see that &lt;a href="https://www.techpowerup.com/review/asrock-radeon-rx-9070-xt-monster-hunter-wilds-edition/33.html"&gt;you can lose a significant portion&lt;/a&gt; of the potential of the GPU once that quantity is exceeded but you don't even need to look at VRAM or higher resolutions: you can see on the lower-end cards that &lt;a href="https://hole-in-my-head.blogspot.com/2025/08/the-performance-uplift-of-rdna-4-and.html"&gt;they fall apart even in high refresh rate gaming&lt;/a&gt;&amp;nbsp;at 1080p...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWyX3zTB6N477G8U9CGPfgssbL2ANYbHTtIjdnecOida2okTXWlF5xYABBnDCQ_8RtntsOjOOsd3DN__gzfEoZhMEMfKUWkL-b_Lh79wvWHygKSCquurZRXTVAq0FHX4R4gl7qr7aQjPCxiXcpA3I3KedkDkJS8oqXs7i7cjZ8yvdH5qDKAJ-aNT-90h8/s521/Yearly%2060%20class%20performance%20per%20resolution.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="481" data-original-width="521" height="369" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWyX3zTB6N477G8U9CGPfgssbL2ANYbHTtIjdnecOida2okTXWlF5xYABBnDCQ_8RtntsOjOOsd3DN__gzfEoZhMEMfKUWkL-b_Lh79wvWHygKSCquurZRXTVAq0FHX4R4gl7qr7aQjPCxiXcpA3I3KedkDkJS8oqXs7i7cjZ8yvdH5qDKAJ-aNT-90h8/w400-h369/Yearly%2060%20class%20performance%20per%20resolution.PNG" width="400" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;We're once again in a period where our 60 class GPUs don't cut it in demanding games at the current mainstream resolution for new displays...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Right now, consumers are trapped in a cycle of "performance debt". If you can afford mid-to-high-end equipment, you're going to be alright. At the low-to-mid range, you're treading water and every time things start to nudge in the consumer's favour, &lt;a href="https://overclock3d.net/news/gpu-displays/nvidia-plans-heavy-cuts-to-gpu-supply-in-early-2026/"&gt;along come the companies to stamp that out&lt;/a&gt;...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I recently made a GPU tier comparison list to see the relative banding of the GPUs that you are most likely to upgrade from and to. I took it back to the RX 5000 and RTX 20 series (though I know there are older cards still in gamers' PCs, I figured these were the relevant comparisons, covering around 7 years of products). I made a cut-off for each band of&amp;nbsp;±5%, using two overlapping sets of bands offset from each other by 5%*.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;*I had a tough time figuring out how to explain this.
For example, cards are roughly banded to 95 - 105 % and 90 - 100 %&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What you can see from the chart is that cards where there is no performance overlap (i.e. a decent performance difference to adjacent products) generally sit higher in the stack (the lines are closer together). Cards which are within the largest bands (i.e. have a negligible or very small, not humanly noticeable performance difference in raw fps numbers) sit lower in the stack.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhqhhhH9Q2gO683x9n9LVujB5EcqBDZjzTKRCQk-TEqsjvuhHbcqy-cOS0iBKZTI2DbT-izcTvmk6tr-_w1dsT1xueFPf57Xb6J4F0ZZbGAZhE8MFo9iDATARHI3cz119-9C2DU9R1yIvDgLBiOOpQPrXKbCC6B4MgTT-r2dywiGp189RIfp05akAuKsw/s1069/GPU%20tiers.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1069" data-original-width="523" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhqhhhH9Q2gO683x9n9LVujB5EcqBDZjzTKRCQk-TEqsjvuhHbcqy-cOS0iBKZTI2DbT-izcTvmk6tr-_w1dsT1xueFPf57Xb6J4F0ZZbGAZhE8MFo9iDATARHI3cz119-9C2DU9R1yIvDgLBiOOpQPrXKbCC6B4MgTT-r2dywiGp189RIfp05akAuKsw/w314-h640/GPU%20tiers.PNG" width="314" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;GPU performance tier upgrade chart, banding similarly performing GPUs together...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;Now, this isn't a particularly unusual or unexpected result - you would expect older, high-performing cards to match the performance of newer, lower-performing cards. However, what is disappointing is the &lt;i&gt;level&lt;/i&gt;&amp;nbsp;of overlap and lack of performance differentiation on display.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The area which I dislike the most is from the RX 6750 XT to the RTX 5060 Ti 16GB. That's 13 cards (I forgot to add the 4060 Ti 16 GB to the same line as the 8 GB) and, if we ignore the RTX 2080 Ti - since that's a flagship card and anyone upgrading from that wouldn't likely be in the market for a low/mid-range product* - we're looking at three generations of product which may have been bought second-hand and have nothing to upgrade to within a reasonable price or performance range... Though at least one could upgrade and obtain more VRAM to apply the performance to!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*Unless they realised they got burnt and vowed never to again pay that much for a GPU!&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But the area I'm despairing at is the top third of the chart. There's no trickle-down performance. The RTX 3080 Ti was matched by the 4070 Super but the RTX 5070 is essentially the same card.
If we look at this chart, the RTX 6060 will have the performance of an RTX 5060 Ti 16 GB (or slightly above) which will be a good 25% below the RTX 5070! It wouldn't have even &lt;i style="font-weight: bold;"&gt;moved&lt;/i&gt;&amp;nbsp;out of that band I just spoke about. An RTX 6070 might not even reach the RTX 5070 Ti, given the difference between that card and the base 5070.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I made the point above that developers aren't demanding performance (or are demanding less performance) uplifts from gamers but I may have had that backwards - there aren't performance gains to be had, so developers are building that into their hardware expectations. And YET people still freak out about Indiana Jones and &lt;a href="https://youtu.be/HCqS_uZ0G4Q?si=x-CDJc4DJWHL8JIO"&gt;DOOM: The Dark Ages&lt;/a&gt; requiring ray tracing compatible cards. I've seen a lot of &lt;a href="https://x.com/AntonHand/status/2001470448547303733?s=20"&gt;incidental posts&lt;/a&gt; &lt;a href="https://x.com/ChiseHatoriBan/status/2000290093169607103?s=20"&gt;from people &lt;/a&gt;&lt;a href="https://x.com/A_Smol_T/status/2001569435224780983?s=20"&gt;claiming developers&lt;/a&gt; &lt;a href="https://x.com/RlCHVRDS/status/2001391387367850419?s=20"&gt;need to target&lt;/a&gt; &lt;a href="https://x.com/cybercpu/status/2001470081344364560?s=20"&gt;older hardware&lt;/a&gt; because of the RAM crisis but developers have been doing this from the mid-2010s and even more so since GPU hardware progress began to stall a few years ago. In addition, I've been mentioning this as a conclusion of this trending each year.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, I don't think we're in any worry of developers not doing that. However, we also can't keep considering zero advancement. 
Expecting hardware RT feature support in the second half of the 2020s should not be controversial...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Of course, we come back to the main problem: Instead of offering &lt;i&gt;good&lt;/i&gt;&amp;nbsp;products at a price point, GPU manufacturers are playing a game of hardware chicken and, unfortunately, the gaming market just isn't worth &lt;i&gt;enough&lt;/i&gt;&amp;nbsp;money for them to really care.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Of course, &lt;a href="https://hole-in-my-head.blogspot.com/2022/01/how-to-fix-graphics-card-market-imo.html"&gt;I've already called for the divestment and separation of the gaming parts&lt;/a&gt; of the GPU companies - that's the only way this market can be fixed.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Console Comparisons...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF1CT12NDvHjm88L246dl2STKqN0frhCoKppF4HpzFjVkXMNaqcxez34UMXijj0UsuxqU-3gqnhd5yIgY1s_9-kjCQb1tsgMyFXOWUaCrbLwodu_ASh5-37jiSy2QhxL4BsCh1yogMt1Ceeuk3nrmscbBckqYMNHHbCxC4m9uNhz2LXs0WKDW267Ws0ps/s783/Performance%20relative%20to%20current%20console%20hardware.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="783" height="348" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF1CT12NDvHjm88L246dl2STKqN0frhCoKppF4HpzFjVkXMNaqcxez34UMXijj0UsuxqU-3gqnhd5yIgY1s_9-kjCQb1tsgMyFXOWUaCrbLwodu_ASh5-37jiSy2QhxL4BsCh1yogMt1Ceeuk3nrmscbBckqYMNHHbCxC4m9uNhz2LXs0WKDW267Ws0ps/w640-h348/Performance%20relative%20to%20current%20console%20hardware.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Nothing has really changed over the last year and the general trend holds for this console generation - CPU single core performance is holding relatively flat, and still around or slightly below the CPU power of the consoles.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Multicore performance (as noted in the section above) is increasing, most likely as a result of increased multicore efficiency of AMD CPUs and increased numbers of physical cores in Intel CPUs. This isn't really important for gaming except in very particular situations - shader compilation, double-use cases such as streaming, etc.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The GPU performance is increasing slowly - likely as more developers take advantage of the extra power and abilities available on PC hardware compared to the consoles. 
This is also likely a reason why CPU requirements aren't increasing - we have plenty of CPU performance and, as more features are pushed onto the GPU, CPU performance is less relevant. Of course, this ignores the continued lack of optimisation in many games which mismanage some of these features and also overload the CPU.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It seems a lot of people are still really blind to stutters and frame drops...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I would have liked to begin including ray tracing recommended requirements but so few games are doing this that the data is too sparse to put into any meaningful discussion.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Predictions, Predictions...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, we're onto the real reason this series was begun - looking into the "crystal ball" of what we should be targeting in a few years' time. 2025 was the "future" I had originally predicted up to, and I had never extended that because, for one, it felt like moving the goalposts a little&amp;nbsp;&lt;i&gt;too&lt;/i&gt;&amp;nbsp;much. Secondly, it seemed unnecessary. However, looking back, I probably should have extended each year to provide some ongoing discussion and visibility. It's not like the data wasn't there - I just didn't feel like updating the graphs all the time as some of them have overlaid elements which have to be manually adjusted.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But enough navel-gazing... I'm pushing this thing out to 2030.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBmsnuybc2mLMzudyFOmrg0D6Qtfh_Qzq-_sS7ACnmj4Kp_BarUJIpELoljfNrZMdg9ZiBvXMC-I4ngvmzXK-Z2_J9GFZIMKzK-x2mTpyMsUUNCHrJo4H60DLDETgB3kaVRxSSgd6l3YcGkfIb0qivdbJiE2VeHZTTo95GAet72W6KLLSBy5hO6N94lAU/s747/CPU%20single%20core%20predicted%20per%20year.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="423" data-original-width="747" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBmsnuybc2mLMzudyFOmrg0D6Qtfh_Qzq-_sS7ACnmj4Kp_BarUJIpELoljfNrZMdg9ZiBvXMC-I4ngvmzXK-Z2_J9GFZIMKzK-x2mTpyMsUUNCHrJo4H60DLDETgB3kaVRxSSgd6l3YcGkfIb0qivdbJiE2VeHZTTo95GAet72W6KLLSBy5hO6N94lAU/w640-h362/CPU%20single%20core%20predicted%20per%20year.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: left;"&gt;From the extrapolated curve fit, we can see that single core performance is expected to continue to slowly grow over the next five years.
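&lt;/div&gt;&lt;div style="text-align: left;"&gt;Purely to illustrate the sort of extrapolation behind these prediction charts (and not the actual fit or data used for them), here's a minimal sketch in Python with made-up, Passmark-style scores: fit a straight line to the yearly averages and project it forward to 2030.&lt;/div&gt;&lt;pre&gt;
# A minimal, illustrative sketch of extrapolating a requirements trend.
# The score values are placeholders, NOT the real data from the spreadsheet.
import numpy as np

years = np.array([2020, 2021, 2022, 2023, 2024, 2025])
avg_single_core = np.array([2500, 2600, 2700, 2780, 2840, 2900])  # hypothetical scores

slope, intercept = np.polyfit(years, avg_single_core, 1)  # simple linear fit

for year in range(2026, 2031):
    projected = slope * year + intercept
    print(year, round(projected))
&lt;/pre&gt;&lt;div style="text-align: left;"&gt;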
To account for this new performance region, I've adjusted the comparison products on the charts to provide some relevant reference points.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unless we're about to see some extreme new, highly demanding games on the horizon (which seems unlikely given current technology price trends) a CPU with the equivalent performance of a Ryzen 7 7700X or i5-14600K should see users through until 2030 without issue.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You might note the weird-looking Ryzen 5 9600X result but this is likely an reflection of &lt;a href="https://chipsandcheese.com/p/zen-5-variants-and-more-clock-for-clock?utm_source=publication-search"&gt;some of&lt;/a&gt; &lt;a href="https://medium.com/@jason890418123/exploring-zen-5-and-zen-4-microarchitectures-dive-into-op-cache-branch-prediction-and-more-f9da2469fb5e"&gt;the improvements&lt;/a&gt; AMD have made to the Zen 5 architecture which provide a decent performance boost over Zen 4 in many productivity and scientific workloads but &lt;a href="https://youtu.be/emB-eyFwbJg?si=y-uePzl9rcpkN9Od&amp;amp;t=786"&gt;are almost invisible in gaming workloads&lt;/a&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidz-NsZW3MkCAnVSNj5jYHXYNlMKxj8-d0ZLrrFebWGjKEIEhhLywuF3mEB1Mbx7f4VrKnaVYhLlYRmwcv_kADYC29BPN2O8GUGVSk5Q-33zJtmH6wLGQnJPDgKMeKPjh1CTBhqLwqDSDQ-Zw4cyrece2sQJSWFu7U-DUdpdv38rFfVfiapGMYePt2yb8/s747/CPU%20multicore%20predicted%20per%20year.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="747" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidz-NsZW3MkCAnVSNj5jYHXYNlMKxj8-d0ZLrrFebWGjKEIEhhLywuF3mEB1Mbx7f4VrKnaVYhLlYRmwcv_kADYC29BPN2O8GUGVSk5Q-33zJtmH6wLGQnJPDgKMeKPjh1CTBhqLwqDSDQ-Zw4cyrece2sQJSWFu7U-DUdpdv38rFfVfiapGMYePt2yb8/w640-h364/CPU%20multicore%20predicted%20per%20year.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;The trend for CPU multicore performance is looking to end up somewhere around that of the Ryzen 7 7700X. This metric will likely be less important (as we've historically observed) due to it being affected by the core counts of CPUs available on the market at all price points. However, if you're on a Ryzen 5 Zen 4 or above CPU, you're likely to be fine until 2030.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Contradicting this slightly, my personal predictions for the number of cores/threads are set at 8 and 16 for the next five years. 
We can see that, this year, we are a bit below that for the mode cores/threads but I expect this to increase either next year or in 2027 and remain there into the 2030s.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQl1cuXpVU4iS5dpOupOfaIFIi5S28UUZp8HmBK0kYD1qMSxsIPS7xQDXAiQQG84MhLrY763hDaND4y-Z7bsgZgGTbSMFbj3wUICXIGSeIezoW2x7EOqZGbGiV_eIwtrIGNvYflHQX0Udlij7NPxTaVhZmfnVrHppkUDTvBQURtr4f-lnHbJ4RxAn1ydQ/s751/CPU%20cores%20threads%20predicted.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="427" data-original-width="751" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQl1cuXpVU4iS5dpOupOfaIFIi5S28UUZp8HmBK0kYD1qMSxsIPS7xQDXAiQQG84MhLrY763hDaND4y-Z7bsgZgGTbSMFbj3wUICXIGSeIezoW2x7EOqZGbGiV_eIwtrIGNvYflHQX0Udlij7NPxTaVhZmfnVrHppkUDTvBQURtr4f-lnHbJ4RxAn1ydQ/w640-h364/CPU%20cores%20threads%20predicted.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;The actual average cores and thread values are closer to what I am predicting. We're already seeing an average of 8 cores being recommended and we're currently sitting at an average of 14 threads being recommended due to Intel's CPUs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The reason I see this still increasing is because, even if consumers are not able to upgrade their systems due to expensive RAM/GPUs, developers are likely to push CPU requirements to make up for the lack of these other resources and, even going back to the AM4 platform, an 8-core/16-thread processor is basically the "best" gaming CPU you could recommend. As we move out of the era of recommending Intel CPUs that don't include hyperthreading, this will push both the average and mode cores and threads up.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Moving onto the highly contentious memory side of the equation, my prognosticating powers are severely hampered by my inability to understand the complex picture of what's currently happening and what will happen. There are too many factors and inflection points on the near horizon to be really accurate, here.
So, I am going to assume a return to normality within a reasonable timeframe (1-2 years) and, thus, developer recommended requirements are unlikely to be truly affected.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWrRY7v9992JhD6VYMwGtRxaWscD5ItbcsyAtYOONQDFhKmBMt8F2IE8SFgY9oRMGb1eT71fYBI_dt9dBZtjWnKeuvoI-VhQInkdM8glUbhiznZqoiHpIAos5Op0fywP5enCgafoN06L1-rqGx7aidGMAYr2VmuP1lxy9Krve13mX2EuzjbYwnEVDuuT4/s749/RAM%20system%20predictions.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="749" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWrRY7v9992JhD6VYMwGtRxaWscD5ItbcsyAtYOONQDFhKmBMt8F2IE8SFgY9oRMGb1eT71fYBI_dt9dBZtjWnKeuvoI-VhQInkdM8glUbhiznZqoiHpIAos5Op0fywP5enCgafoN06L1-rqGx7aidGMAYr2VmuP1lxy9Krve13mX2EuzjbYwnEVDuuT4/w640-h364/RAM%20system%20predictions.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;I'm still expecting the most required quantity of system memory to be 16 GB going into 2026 but for that to increase by 2027 to 32 GB. I'm currently listing 24 GB as the half-measure third most required quantity and that's really only because those module configurations exist and you're going to see some games test against them.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, after 2027, I don't expect this to change any time soon. 
32 GB is such a large value that I believe games would struggle to utilise it in most scenarios (certain genres being an exception) and the next step up is so huge, I don't imagine 48 GB or 64 GB being required any time sooner than 2035 - if ever, within the next 15 years.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We'd need to experience a complete paradigm shift in how computing is performed and programs are designed (or for web and browser designers to infiltrate gaming dev houses ;) ).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;VRAM, on the other hand...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRvxX5AKYdgXHeM8El_k0EE188iQC2Ul2QFk1uoXIhKgObwGn-m_oCW1dFte87es8M58FMYT6NbU8awX-o-a2AKvoMFwXAzGQIkX7fKz11Eo4E-Fbe4WrIIIlb7jv_Vt9Ab4qIUznDDZM9kTDagnnYhLCNlq354xK8tNFkapg6zlXEmidlLenPYbGCyOE/s749/RAM%20video%20predictions.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="749" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRvxX5AKYdgXHeM8El_k0EE188iQC2Ul2QFk1uoXIhKgObwGn-m_oCW1dFte87es8M58FMYT6NbU8awX-o-a2AKvoMFwXAzGQIkX7fKz11Eo4E-Fbe4WrIIIlb7jv_Vt9Ab4qIUznDDZM9kTDagnnYhLCNlq354xK8tNFkapg6zlXEmidlLenPYbGCyOE/w640-h364/RAM%20video%20predictions.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;I'm mostly predicting the continued slow growth of VRAM and that's primarily because of AMD and Nvidia's stingy behaviour on the lower-end cards and consumers'&amp;nbsp;&lt;i&gt;absolute refusal&lt;/i&gt;&amp;nbsp;(and also sometimes inability!) to get cards with more VRAM.&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, that sentiment is changing and, over the last year, as I noted above, we've seen a broader push-back against and refusal of the 8 GB VRAM cards on offer. But they're still there, and manufacturers are going to want to keep trying to push them - especially with the current DRAM availability crisis.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;3 GB GDDR7 modules are still missing in action, despite being 2-3 years late from their intended start of manufacture, and I really think the application of those onto existing memory controller widths is going to address the situation. I also think that they would address the DRAM shortage, too. Stop making 2 GB modules and you get "more" memory per wafer of GDDR6...
Problem solved?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Well, yes and no.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You see, the current rumours &lt;a href="https://youtu.be/GE7XTAhzjoo?si=A-twPX6bxdicLxRq&amp;amp;t=1115"&gt;from MLID&lt;/a&gt; and &lt;a href="https://videocardz.com/newz/amd-rdna5-rumors-point-to-at0-flagship-gpu-with-512-bit-memory-bus-96-compute-units"&gt;KeplerL2&lt;/a&gt;&amp;nbsp;point to AMD's low-end GPU cores supporting LPDDR5 - which, to the best of my knowledge, doesn't support higher than 2 GB modules in the spec and has no roadmap to do so. This is pretty bad for AMD's side of the equation because it means the only route to providing higher memory capacities is to double the number of memory modules on their bus. That would add cost and also cause issues with the pure number of modules required to be procured in the current environment.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The higher end GPU cores are rumoured to support GDDR7, so they can benefit from increased capacity without requiring more modules to be purchased - and are thus more wafer efficient.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Nvidia's lower end chips support GDDR7 (with the exception of the RTX 5050 (GB207)) and it's likely that this will continue next generation, or for GDDR7 use to be extended to the lowest GPU design, as well. This would allow Nvidia to increase VRAM capacity without having to utilise more memory modules.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Of course, Nvidia could further reduce the memory bus width, still managing to increase or maintain bandwidth while slightly increasing VRAM capacity from 8 GB to 9 GB.
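&lt;/div&gt;&lt;div style="text-align: justify;"&gt;To make that module arithmetic concrete, here's a minimal sketch (in Python) of how capacity and bandwidth fall out of bus width, module density and per-pin data rate. The specific widths and data rates below are my own illustrative assumptions, not confirmed specifications for any upcoming card:&lt;/div&gt;&lt;pre&gt;
# Rough sketch of the GDDR capacity/bandwidth arithmetic discussed above.
# All figures are illustrative assumptions, not leaked or confirmed specs.
def vram_gb(bus_width_bits, module_gb, clamshell=False):
    # Each GDDR module occupies a 32-bit slice of the bus;
    # clamshell mode doubles the module count on the same bus.
    modules = bus_width_bits // 32
    if clamshell:
        modules = modules * 2
    return modules * module_gb

def bandwidth_gb_per_s(bus_width_bits, data_rate_gbps):
    # bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte
    return bus_width_bits * data_rate_gbps / 8

print(vram_gb(96, 3))                   # 9 GB:  3 GB modules on a 96-bit bus
print(vram_gb(128, 3))                  # 12 GB: 3 GB modules on a 128-bit bus
print(vram_gb(128, 2, clamshell=True))  # 16 GB: clamshell 2 GB modules, 128-bit bus
print(bandwidth_gb_per_s(96, 36))       # 432.0 GB/s: narrower bus, faster GDDR7
print(bandwidth_gb_per_s(128, 20))      # 320.0 GB/s: wider bus, slower GDDR6
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;Coming back to that narrower-bus option: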
I have a feeling that this might be their course of action in the low end...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;These choices will&amp;nbsp;&lt;i&gt;all&lt;/i&gt;&amp;nbsp;result in further performance stagnation at the lower end of the product stack, further lengthening those bands I addressed previously in this blogpost.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, we're looking at an uncertain time ahead for the GPU market and, as a result, gaming.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh238v1yl36IKKWYYr7whk1HKaWFcaby0F0x4bI98QNCtL8-AKor9gv-OC0aX3za0CFVmAElIkh8VCItmuQEbKNeBsLNPWw8Aur5GksXJQRur1h4vgJ6gU9eCUTlmwfMw3ahyphenhyphen5WPp0HYW4INzfVNGVEib57TmwTJYVkdcyfuJLcp6CBj6trU9SW_NQJ0Sc/s747/GPU%20predicted%20per%20year.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="423" data-original-width="747" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh238v1yl36IKKWYYr7whk1HKaWFcaby0F0x4bI98QNCtL8-AKor9gv-OC0aX3za0CFVmAElIkh8VCItmuQEbKNeBsLNPWw8Aur5GksXJQRur1h4vgJ6gU9eCUTlmwfMw3ahyphenhyphen5WPp0HYW4INzfVNGVEib57TmwTJYVkdcyfuJLcp6CBj6trU9SW_NQJ0Sc/w640-h362/GPU%20predicted%20per%20year.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Passmark does not adequately rate the RDNA4 architecture at this point in time. I am hoping the situation will improve, otherwise I will be forced to find an alternative... Maybe you can suggest one?&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: left;"&gt;Shifting over to the compute performance of the GPU, I'm expecting&amp;nbsp; for the current trend to continue. Games aren't going to get lighter to run and engines and graphics APIs are all moving towards heavier features. This will result in the "recommended" experience targeting more performant GPUs.&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If I look at the trend line, anyone with an RTX 4070 Super and above level of performance will be in good stead for the next five years - potentially ignoring VRAM considerations. On AMD's side, that's an RX 9070 or RX 7900 GRE but taking into account increased RT demands, more appropriately the RX 7900 XTX.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Summing Up...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I spend a decent amount of time on Reddit, helping those with PC builds and troubleshooting build problems. 
From what I see, a lot of gamers who have updated their PCs to new(ish) hardware within the last two years will be fine for the vast majority of games over the next five years, at 1440p. Those who didn't maximise the GPU upgrade will likely have an easier path to do so at a later point in time when, and if, they need it.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Aside from the current bumps in the road for RAM and GPU performance (and potentially soon to be pricing - again!) I don't think consumers of PC games are in that much of a terrible position going into 2030. Sure, they won't necessarily be playing games at maximum settings but then that doesn't make a bad game good, does it?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Players will adapt but more importantly, devs are likely to keep lower-end hardware in mind for their new releases.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Of course, hardware manufacturers are here to help force developers hands - whether they like it or not!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEinVRShaqtfYVDsLH9z4tHvlyPnHv34YnlWy0K0-RE2tbcXLcMIPfUL6wygojmsLZJT_q1LHtt2s0TdkYYB-B4CtPQUPLJJ4JAApBty19QrME_JXTdPY0AKfNVD2LaAzFB5ALETGct6ZmPhI2PUyd4VDrrHngtOTtulX0htPTesDG_2CW5R52oF4MZvr7Q" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="383" data-original-width="825" height="298" src="https://blogger.googleusercontent.com/img/a/AVvXsEinVRShaqtfYVDsLH9z4tHvlyPnHv34YnlWy0K0-RE2tbcXLcMIPfUL6wygojmsLZJT_q1LHtt2s0TdkYYB-B4CtPQUPLJJ4JAApBty19QrME_JXTdPY0AKfNVD2LaAzFB5ALETGct6ZmPhI2PUyd4VDrrHngtOTtulX0htPTesDG_2CW5R52oF4MZvr7Q=w640-h298" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Summary of the Steam Machine specs, &lt;a href="https://www.digitalfoundry.net/features/hands-on-with-steam-machine-valves-new-pcconsole-hybrid"&gt;via DigitalFoundry&lt;/a&gt;...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The fact of the matter is, aside from both AMD and Nvidia giving lower-end GPUs worse performance and VRAM uplifts, there is another wave of downward pressure on game developers from implementing more demanding games: consoles and customised handheld hardware.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;First up we have the handhelds: Steam Deck - 8 CUs (RDNA2), Switch 2 &lt;a href="https://www.digitalfoundry.net/articles/digitalfoundry-2025-switch-2-the-tech-specs-that-nintendo-and-nvidia-are-not-sharing"&gt;equivalent to an RTX 2050&lt;/a&gt;, ROG Ally variants between 12 - 16 CUs (RDNA3) then we have the Xbox Series S - which is forever holding back the platform - 20 CUs (RDNA2), and finally, the latest in a long line of disappointments, the Steam Machine - 28 CUs (RDNA3).&lt;/div&gt;&lt;div 
style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;From my own estimations, the &lt;i&gt;best&lt;/i&gt;&amp;nbsp;of these (for compute) will be &lt;a href="https://x.com/Duoae/status/1988689398603661489?s=20"&gt;around the performance of the RX 6600&lt;/a&gt; - a GPU from 2021 and the worst an RX 6500 (with more RAM). That's pretty dire...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As the market gets flooded with more and more low-spec, expensive hardware, it pulls the average hardware performance lower, meaning that developers need to aim lower with the expected performance envelopes of consumers to reach the broadest market for their expensive to produce games.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Sure, we can wax lyrical about the scalability of graphics in modern engines but there are limits and these products have the potential to limit the types of games that could be made. The majority of the handhelds won't have an impact on this but Switch 2 will - it will reach a large enough market that it can pull real weight. Similarly, the Steam Machine has the potential to pull down the average performance of consumer hardware depending on how many units get sold into which market.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But that's a blogpost for another time...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOmoRce-4vjawNWpExFXmI7oSogL1RnMMpzH26q46pDPUuXpHxs8jA5q17AuhM_WusbX3Z2x8h_vPZolZKuaEprS8UX0j8k1FG39QRuFlP6niE1Brgnjt4DX7e_U3CbDzBsw1sFc7TTduTaDb9FwmoOg_zfzQIzCQVNZvYs_qFwbWcGnxctW72GYSVl_c/s749/Polling%20data.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="499" data-original-width="749" height="426" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOmoRce-4vjawNWpExFXmI7oSogL1RnMMpzH26q46pDPUuXpHxs8jA5q17AuhM_WusbX3Z2x8h_vPZolZKuaEprS8UX0j8k1FG39QRuFlP6niE1Brgnjt4DX7e_U3CbDzBsw1sFc7TTduTaDb9FwmoOg_zfzQIzCQVNZvYs_qFwbWcGnxctW72GYSVl_c/w640-h426/Polling%20data.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Coming back to the current data I figured it would be interesting to see how my polling is doing over time. Due to the way I decide which games qualify (i.e. I keep track of major releases and popular games - mostly with strong graphical qualities) it can skew the results somewhat - though I try to avoid this as much as possible. In fact, you can see that I've generally increased the number of titles polled over time (2020 was pretty dire because of all the delayed titles which were then pushed to 2021). 
But the overall trend is that I'm polling more games - whether that's due to more games being released which "qualify" or whether it's because I've become more vigilant since I am tracking these things throughout the year instead of just before compiling this post.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's interesting to see how the number of CPU and GPU SKUs that are listed per year changes.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We had a lull in the GPU requirements during the PS4 Pro/Xbox One X period which began climbing again once we entered the PS5/Xbox Series generation. We saw this same thing during the end of the PS3/Xbox 360 generation and the beginning of the PS4/XBO generation, despite having fewer games polled per year. The reason for this is that the performance of the PC hardware in this period was much greater than that of the consoles and so a broader range of GPU hardware was able to run the games of the time. Additionally, we had a lot of hardware releases throughout the preceding years. Then, once we reached the PS4 Pro and One X release, game requirements started shooting up and older hardware just didn't meet the requirements.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, I don't think that the reason for increasing GPU SKUs is the same now. Sure, we do have some of that performance overlap - as I mentioned above in the banding discussion - but what I believe is a stronger force is that games are requiring less powerful hardware, overall.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's not necessarily a bad thing but it does address the fairly common refrains of "developers need to optimise more" (they're already doing so!) and "they're going to have to start optimising for older hardware" (again, they're already and have increasingly been doing so!)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXRM131jDyAeCiknD7DFR15_j-bS7tV7LQMg-6fTrsFXnv_idWvjtUEflYrSQr-ucfvhMpb_rB0pD-yvIhKLeJi1jNBhtQ6-VRcgSfR3dmoiAEHy3gEmk_qE_Q3_A016gDnhYAukGxZiPhs_87ZrmgqvL3JEEsRdYGTYdH9WAHPu9eGpkrHe6kur-EmI8/s1184/Class_summary%20perf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1011" data-original-width="1184" height="546" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXRM131jDyAeCiknD7DFR15_j-bS7tV7LQMg-6fTrsFXnv_idWvjtUEflYrSQr-ucfvhMpb_rB0pD-yvIhKLeJi1jNBhtQ6-VRcgSfR3dmoiAEHy3gEmk_qE_Q3_A016gDnhYAukGxZiPhs_87ZrmgqvL3JEEsRdYGTYdH9WAHPu9eGpkrHe6kur-EmI8/w640-h546/Class_summary%20perf.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Remember this beautiful spreadsheet? Imagine applying this to the RTX 50 and RX 9000 series... 
Would an RTX 5060 even be equivalent to a 50 series?&amp;nbsp;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As noted in the CPU section, the gains in single-core performance are really small and game CPU requirements are really modest, too - meaning that many more CPUs can manage to play modern titles. That's great! It also explains the increase we're seeing in CPU SKUs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Though the increase is smaller in percentage terms than for GPUs, this can easily be explained through AMD's lack of single-core performance compared to Intel's various generations over the late-2010s. So, when Zen started making inroads into gaming rigs, the CPUs could only match (and in some cases fail to match) generations-old Intel architectures when playing games. That sort of ended around Ryzen 5000 and Zen 3, which is where we've started seeing increased numbers of SKUs reaching the same historic highs as at the end of the PS3/360 generation of consoles.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The thing to note about where we are now, aside from AI causing issues with the ability to obtain parts and new systems, is that manufacturing process nodes are not only slowing down in terms of performance gains but also increasing in expense. At the end of the PS3/360 era, we had a HUGE node shrink from 45/40 nm to 28/22 nm to 16/14 nm in the space of 5 years from 2011 to 2016 across CPU and GPU.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If there's one positive to potentially come from all of this, it's that we may see a renewed focus on hardware design optimisation over a node-shrink-gain mentality. We've seen this with the evolution of Zen (especially 3 and 5) and also RDNA (most notably 4!). However, we've seen less of this from both Intel and Nvidia, with the former relying on trying to cram more into the same area and the latter focussing on software to bring increased performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The last time we had a focus on hardware design improvements from Nvidia was the GTX 9 and 10 series (with the RTX 30 series coming in a decent third place in terms of generational performance impact). For Intel? I'm not sure, but I felt both the 12th and 14th generations were pretty big in terms of performance-core design wins (even if the latter ended up being a big risk that ultimately backfired due to the degradation problems).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If we can see the manufacturers squeeze optimisation wins out of their architectures, then I think we could be on the cusp of another 2016 - 2017 period where developers are able to stretch their legs and we'll see reduced numbers of SKUs in the recommendations. 
If we don't see that, then I expect the numbers of SKUs to continue increasing.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Ultimately, I am pretty optimistic for PC gaming over the next five years - as long as RAM prices and GPU prices stay sane... A pretty big ask, I know!&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2025/12/next-gen-pc-gaming-requirements-2025.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGakvCmvGqAVc8XhMHXRlemAvTfvIzMkjQXTimhSLcPmT3Bhd5pBDCt98lRklGv9yMO-P1UP_qo-KAjiXsJosornknUKcw5mY9YVCEAqGXYlowAqf39afWKP36VkJDM_eySRuuQHfY3mzHwRgB8spaB7iEEOfGmOm_lxUcBDSCMZOwTjlujsXDUhszmCw/s72-w640-h426-c/Title%202025.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-9179710256963613980</guid><pubDate>Sat, 02 Aug 2025 08:19:00 +0000</pubDate><atom:updated>2025-08-29T11:53:00.065+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>The Performance Uplift of RDNA 4 and performance review of the RX 9060 XT...</title><description>&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWZyN1zYV2lTWZhqgjGQ8OeVpbWMOHtfEYYbDdGlQ7dPVZpdaBMQGUveZCFFeUj1-8M5TN1WySgibi1Tb1x-sTnoH2CKe5EJnyE9jOCUOzAn3_VFOINags3_dbnEKhB1TlXS2dJl1ya3LEdJ-Khf73AvrE74DatsfSYPVAWQNzrvwqB20SvXFbWpgfIzY/s1920/Title.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWZyN1zYV2lTWZhqgjGQ8OeVpbWMOHtfEYYbDdGlQ7dPVZpdaBMQGUveZCFFeUj1-8M5TN1WySgibi1Tb1x-sTnoH2CKe5EJnyE9jOCUOzAn3_VFOINags3_dbnEKhB1TlXS2dJl1ya3LEdJ-Khf73AvrE74DatsfSYPVAWQNzrvwqB20SvXFbWpgfIzY/w640-h360/Title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Previously, I've looked at the performance uplift of RDNA 3 over RDNA 2 - &lt;a href="https://hole-in-my-head.blogspot.com/2023/11/the-performance-uplift-of-rdna-3-over.html"&gt;both mid-range&lt;/a&gt; and &lt;a href="https://hole-in-my-head.blogspot.com/2024/10/the-performance-uplift-of-rdna-3-part-2.html"&gt;low-end variants&lt;/a&gt; of the architeture - by performing testing at iso-clock frequencies. I love testing these things so have picked up the RX 9060 XT to pair against the prior gen RX 7600 XT. 
Considering that RDNA 4 has been shown to be a pretty big upgrade over RDNA 3, I'm expecting some interesting results.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, let's dig into the numbers!&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Setting up...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As per previous instalments in this series, let's get started with setting up the premise and background of this testing. For starters, I'm using the following system for testing:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;OS Windows 10&lt;/li&gt;&lt;li&gt;Intel i5-14600KF&lt;/li&gt;&lt;li&gt;Gigabyte B760 Gaming A AX&lt;/li&gt;&lt;li&gt;Patriot Viper DDR5 7200 MT/s&lt;/li&gt;&lt;li&gt;Sapphire Pulse RX 7600 XT&lt;/li&gt;&lt;li&gt;Sapphire Pulse RX 7800 XT&lt;/li&gt;&lt;li&gt;Sapphire Pulse RX 9060 XT&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, you may notice a trend in the manufacturer of the GPUs but that's purely a price-conscious thing. These cards have always been at (or closest to) the MSRP in my region and, because I'm spending my own money, these are the cards I've picked up. I've also found that Sapphire cards are generally well-built and designed.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In the previous instalments, I noted that you shouldn't be testing GPUs near their power limits or at stock if you really want to see the generational improvement: you need to be in a non-power-limited region of the voltage/frequency curve on each architecture. However, the RX 9060 XT presents a bit of an issue in this regard. As &lt;a href="https://www.computerbase.de/artikel/grafikkarten/blackwell-lovelace-rdna-4-rdna-3-performance-vergleich.93228/#abschnitt_amdbenchmarks_rdna_4_gegen_rdna_3"&gt;ComputerBase mentioned in their testing&lt;/a&gt;, you can't downclock the RDNA 4 cards low enough, nor clock the RDNA 3 cards high enough, to reach an overlapping zone.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Actually, this isn't entirely true. RDNA 4 effectively &lt;i&gt;has no controls&lt;/i&gt;. Seriously, this is the &lt;i&gt;worst&lt;/i&gt;&amp;nbsp;software and most locked-down GPU generation I've ever experienced. It's very poor for a consumer to not be able to tweak the settings on the GPU, and we've seen a regression in this aspect each generation from AMD since RDNA 1. RDNA 4 culminates in the almost complete loss of control of the product you've purchased. Want to set a negative core frequency offset? The RDNA 4 card will ignore it. Or, sometimes it won't... Oh, and what frequency is it offsetting from? Who knows!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Want to increase the core frequency offset? The card will crash. &lt;i&gt;Hard&lt;/i&gt;. So, AMD, what exactly is your card doing? 
It will ignore user settings when it's convenient for you and then apply them unstably when it's not convenient for the user?&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is just poor design and very poor user control options. Give me granularity. Hell, I'd even settle for RDNA 3-level of control...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj35iXPjcHU4nqrNbyxGw2Zh-8o3G_gSJW5Y3njIdJIzF0RXvR8vZZv12hO1lIgu1twglrr8QqWw9l7G_ELathG9fuCYA-j4sum5kovlSlBpGh_gFXsybjhgipIY62wTieAxXA3R_GFm02qY8ZI2nx-FHr99HNIbcjq_hwBFiXz9turBD2pGKZNiWHwo84/s1917/Adrenaline%20non-controls.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="925" data-original-width="1917" height="309" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj35iXPjcHU4nqrNbyxGw2Zh-8o3G_gSJW5Y3njIdJIzF0RXvR8vZZv12hO1lIgu1twglrr8QqWw9l7G_ELathG9fuCYA-j4sum5kovlSlBpGh_gFXsybjhgipIY62wTieAxXA3R_GFm02qY8ZI2nx-FHr99HNIbcjq_hwBFiXz9turBD2pGKZNiWHwo84/w640-h309/Adrenaline%20non-controls.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;What even is this..? Let me set a&lt;/b&gt;&lt;b&gt;specific&amp;nbsp;&lt;/b&gt;&lt;b&gt;core frequency, please, AMD!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;*Ahem*&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Anyway, as I was saying, this inability to set the same core and memory frequencies is not entirely true. It can be done - I'm guessing that I just happen to have better silicon quality compared to the card ComputerBase was using for my RX 7600 XT. So, for most (not all) of the testing, the difference in operating is a few megahertz plus or minus around 2880 MHz on the core - which is a negligible difference.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There are some titles (CounterStrike 2 and Spider-man: Remastered) for which the RX 9060 XT just ignores the user-set controls and boosts up to the mid-2900s. There's nothing I can do about it and, as we will see, setting a negative power limit is a &lt;u&gt;&lt;i&gt;bad idea&lt;/i&gt;&lt;/u&gt;&amp;nbsp;on RDNA 4.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This does now introduce a caveat to the testing. I &lt;i&gt;know&lt;/i&gt;&amp;nbsp;that the RX 7600 XT, even with it's maximum +15% power limit is likely power-limited in some or all of the testing scenarios we are about to see. So, keep this in mind when looking at the results.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Similarly, the RX 7800 XT is struggling here. It's a MASSIVELY power-limited product and even with the +15% power limit enabled, and appropriate frequency controls set, it just cannot boost to the required 2880 MHz target that I've set. 
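One way to keep yourself honest about all of this is to log the core clock during each run and summarise it afterwards, rather than trusting what the sliders claim to be set to. A minimal sketch of that kind of sanity check - the CSV file name and column name are just placeholder assumptions, and whatever your monitoring tool exports will differ:&lt;pre style="text-align: left;"&gt;
# Rough sanity check of what core clock a card actually held during a run.
# Assumes a CSV exported from a hardware monitor with a "core_clock_mhz"
# column - the file name and column name are placeholders for illustration.
import csv
import statistics

TARGET_MHZ = 2880
TOLERANCE_MHZ = 30   # a few megahertz of drift either side is acceptable

def summarise_run(path):
    with open(path, newline="") as f:
        clocks = [float(row["core_clock_mhz"]) for row in csv.DictReader(f)]
    med = statistics.median(clocks)
    print(f"{path}: min {min(clocks):.0f} / median {med:.0f} / max {max(clocks):.0f} MHz")
    if abs(med - TARGET_MHZ) &gt; TOLERANCE_MHZ:
        print("  warning: this run drifted away from the iso-clock target")

summarise_run("rx9060xt_avatar_run1.csv")
&lt;/pre&gt;If the median sits well away from the target, the run can't really be called iso-clock.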
In most titles, it's hovering around 2450 - 2600 MHz but it does come close in those CS2 and Spider-man benchmarks I noted above.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Moving onto the memory frequency - my RX 7600 XT can easily manage 2500 MHz - very close to the minimum 2518 MHz we have on the RX 9060 XT. The RX 7800 XT can also go much higher (due to its superior quality GDDR6 modules) but I kept it at 2518 MHz to match the 9060 XT.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5YQatarCaj2GMkhjCaUW8duqMxx_GRQ2rrCNbsGjqrhurGDPrn0uukGGRNw-lIh3EbfJI5BqOYiHQo5Qc4SJiDmM3w-N_cjr6hMZ2S49LczXbzs9Pr_Kb8DhpD_7MetstEUAsuzjguqKoqo2kRaKavVwO1SnTCK6y5n1pJ-aykv0T2DwOhCbfwHz_Hlw/s469/Hardware%20comparison.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="265" data-original-width="469" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5YQatarCaj2GMkhjCaUW8duqMxx_GRQ2rrCNbsGjqrhurGDPrn0uukGGRNw-lIh3EbfJI5BqOYiHQo5Qc4SJiDmM3w-N_cjr6hMZ2S49LczXbzs9Pr_Kb8DhpD_7MetstEUAsuzjguqKoqo2kRaKavVwO1SnTCK6y5n1pJ-aykv0T2DwOhCbfwHz_Hlw/s16000/Hardware%20comparison.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Obviously, the main comparison is going to be between the N33 and N44 parts, but the N32 (RX 7800 XT) can be an interesting partner for this testing...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In terms of the hardware, the RX 9060 XT is the successor to the RX 7600 XT and this will be our main comparison. The two big differences we have to assess are the architectural improvements within the compute unit (CU) which are mainly focussed on ray tracing and I/O management and the larger L2 cache.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://chipsandcheese.com/p/rdna-4s-raytracing-improvements"&gt;RT has been vastly improved&lt;/a&gt;&amp;nbsp;and we're essentially looking at doubling of base performance (&lt;a href="https://chipsandcheese.com/p/dynamic-register-allocation-on-amds?utm_source=publication-search"&gt;ignoring other optimisations that have been made&lt;/a&gt;). 
For I/O considerations things are more complicated: the L1 cache &lt;a href="https://chipsandcheese.com/p/amds-rdna4-architecture-video"&gt;has been turned into&lt;/a&gt;&amp;nbsp;&lt;a href="https://en.wikipedia.org/wiki/Data_buffer"&gt;a buffer&lt;/a&gt; and, if I &lt;a href="https://x.com/GawroskiT/status/1893987575657816126"&gt;understand the leaks correctly&lt;/a&gt;&amp;nbsp;(along with the &lt;a href="https://www.hwcooling.net/en/better-more-capable-than-expected-rdna-4-architecture-deep-dive/"&gt;actual reporting&lt;/a&gt;) this, combined with the out-of-order data request management in the CU itself, allows for the simplification of instructions, and improved parallelisation of workloads on the dual FP32 units per CU that RDNA 3 failed to fully utilise in practice.*&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*I'm not an expert, so don't take my word for it!&lt;/span&gt;&lt;/blockquote&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In practice, this appears to have required a larger L2 cache (256 bytes per WGP - Work Group Processor [or Dual Compute Unit/Compute Unit pair]*) which would make sense as more data now needs to be replicated between the L2 and L3 for each WGP to work on without increasing latency through immediate accesses to L3. Again, if my understanding is correct, &lt;a href="https://news-mynavi-jp.translate.goog/techplus/article/architecture-353/?_x_tr_sl=auto&amp;amp;_x_tr_tl=en&amp;amp;_x_tr_hl=pl&amp;amp;_x_tr_pto=wapp"&gt;this actually brings RDNA memory hierarchy management closer to that of Nvidia's&lt;/a&gt; where primary cache flushes at the WGP/SM level happen automatically instead of through a necessary instruction.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://chipsandcheese.com/p/amds-rdna4-architecture-video"&gt;Chips and Cheese&lt;/a&gt; refer to it as a &lt;a href="https://ditec.um.es/gacop/tools/rsim-x86/manuals/html/node110.html#:~:text=The%20coalescing%20write%20buffer%20is,write%20buffer%20has%20zero%20delay."&gt;coalescing buffer&lt;/a&gt;. This optimisation potentially saves a lot** of energy and may improve latency and system management overhead (as I alluded to above).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*And you thought that Nvidia's pronunciation of Ti was an issue within the company! AMD said, "Hold my beer..."&lt;/span&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;**Relatively speaking!&lt;/span&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;And that's essentially the large changes, covered. There's also the process node improvement (TSMC N6 to N4P) which also increases energy efficiency and maximum clockspeed - and we'll see this difference on display later on in this blogpost. 
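If the term is unfamiliar, the general idea of a coalescing buffer is easy to show in isolation: adjacent small writes get merged into fewer, larger transactions before being handed to the next level of the hierarchy. The toy sketch below is purely conceptual - it says nothing about how RDNA 4 actually implements its buffer - but it shows why such a structure cuts down on traffic (and therefore energy):&lt;pre style="text-align: left;"&gt;
# Toy illustration of the general idea behind a coalescing (write) buffer:
# adjacent small writes are merged into fewer, larger transactions before
# being sent down to the next cache level. Purely conceptual - this is not
# a model of RDNA 4's actual hardware.

def coalesce(writes, line_bytes=128):
    """writes: list of (address, size) pairs; returns merged transactions."""
    merged = []
    for addr, size in sorted(writes):
        if merged and addr - (merged[-1][0] + merged[-1][1]) == 0:
            # contiguous with the previous write: extend it
            merged[-1] = (merged[-1][0], merged[-1][1] + size)
        else:
            merged.append((addr, size))
    # split back into cache-line sized transactions for the next level
    transactions = sum(-(-size // line_bytes) for _, size in merged)
    return merged, transactions

writes = [(0, 32), (32, 32), (64, 32), (96, 32), (4096, 32)]
merged, tx = coalesce(writes)
print(merged)                                   # [(0, 128), (4096, 32)]
print(tx, "transactions instead of", len(writes))
&lt;/pre&gt;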
But, for now, let's move onto testing out the RDNA 4 part to see what it can (or can't) do...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Winding down...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As I pointed out before, in order to accurately assess the architectural uplifts we shouldn't be testing GPUs at or near their power limit. However, RDNA 4's lack of controls have forced us into a position of doing exactly that. Previously, I showed that some RDNA 2 and 3 products were operating in that region but we need to determine what is happening for the RX 9060 XT at stock settings.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjf4gILBHL9Helwt7ihcCprMHEK49Zsr1LqHeTbMnAqH4yykCNkUTrmwoANcxjfNLMfxoIrI80KR6QEjjq9qPehR6u6ZoFjLskwl5FRHRirVJDkZgEy1mZSt5cWHvtB6j7GSC35DkgGS2nLhUHc5EXPcOV1zFBlr51fc58MCLTGQnHTuI6Zlzy5WVv7-OQ/s751/Avatar_power_memory%20scaling.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="751" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjf4gILBHL9Helwt7ihcCprMHEK49Zsr1LqHeTbMnAqH4yykCNkUTrmwoANcxjfNLMfxoIrI80KR6QEjjq9qPehR6u6ZoFjLskwl5FRHRirVJDkZgEy1mZSt5cWHvtB6j7GSC35DkgGS2nLhUHc5EXPcOV1zFBlr51fc58MCLTGQnHTuI6Zlzy5WVv7-OQ/w640-h362/Avatar_power_memory%20scaling.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RX 9060 XT is, much like it's RDNA 2 and 3 counterparts, power and memory bandwidth-limited...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;First off, I tried to use my stalwart testing platform - Metro Exodus: Enhanced Edition to see the power and memory scaling of the hardware. Unfortunately, this game is pretty broken for RDNA 4. So, I had to drop that and, instead turn to a more modern, but still complete, hardware workout: Avatar: Frontiers of Pandora. This title also includes a standardised built-in benchmark and provides a full-workout for all the hardware on the chip.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Using this new standard, we are able to see that there is some headroom available to the RX 9060 XT but not a lot. So, as with previous testing, we should opt to reduce the target core frequency to have a power headpace. 
This would also reduce memory bandwidth requirements, limiting the impact of the memory bottleneck that could occur - however small.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgX2minNRY69HmklYyLivw2F56rwMYk9Yr0ZlIXsGjIDH-KqjlbAl5IUbMjbjyr3piLlxdV8bdVAJSERJiRSyu_C8GHZ3dnL0dl3w63LabFSWTg7aGDf_3djG-a9T5kZWh2KjExmaoEgV2jWR1PxBTCbkTumIYcliLVQpZOHx9_GyEvMWfrT9Dn00FMKD8/s753/Avatar_power_memory%20scaling_undervolting.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="753" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgX2minNRY69HmklYyLivw2F56rwMYk9Yr0ZlIXsGjIDH-KqjlbAl5IUbMjbjyr3piLlxdV8bdVAJSERJiRSyu_C8GHZ3dnL0dl3w63LabFSWTg7aGDf_3djG-a9T5kZWh2KjExmaoEgV2jWR1PxBTCbkTumIYcliLVQpZOHx9_GyEvMWfrT9Dn00FMKD8/w640-h362/Avatar_power_memory%20scaling_undervolting.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Further tweaking through undervolting and increasing the available power resulted in around a 2% uplift - but these settings were not reliably stable...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;One thing that I have now added to this pre-testing assessment is a low hardware block utilisation, high throughput workload. For that, I could have chosen any e-sport title but I'm most familiar with CounterStrike 2.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For this type of workload, there is essentially no power limitation (due to low utilisation of the hardware resources!) but we do see a 4% uplift from memory scaling. 
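To put that in perspective, it's worth translating the memory clock into effective bandwidth. Assuming the usual GDDR6 relationship (effective data rate being eight times the reported memory clock) and the 128-bit bus on this card, the jump from 2518 MHz to the 2900 MHz test works out at roughly 15% more bandwidth for that 4% fps gain - a quick back-of-the-envelope sketch:&lt;pre style="text-align: left;"&gt;
# Back-of-the-envelope memory bandwidth for the RX 9060 XT's 128-bit bus.
# Assumes the usual GDDR6 relationship: effective data rate equals the
# reported memory clock times 8 (so 2518 MHz comes out at about 20.1 Gbps).

BUS_WIDTH_BITS = 128

def bandwidth_gb_s(mem_clock_mhz):
    data_rate_gbps = mem_clock_mhz * 8 / 1000      # per pin
    return data_rate_gbps * BUS_WIDTH_BITS / 8     # whole bus, GB/s

stock = bandwidth_gb_s(2518)   # roughly 322 GB/s
tuned = bandwidth_gb_s(2900)   # roughly 371 GB/s

print(f"stock: {stock:.0f} GB/s")
print(f"tuned: {tuned:.0f} GB/s ({(tuned / stock - 1) * 100:.0f}% more bandwidth)")
&lt;/pre&gt;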
With a -50 mV adjustment on the core, the 2900 MHz memory test was unstable and actually resulted in a large regression to 259 fps, below the stock performance - so, user beware!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What I like about this test is that it shows us the performance of the part at high refresh rate - a completely different paradigm to that experienced in Avatar.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVt2dBYMfIBBvKwZeRBoFnpfoevGWYI8j8CidDh6XQMlWtv5MHSjcQ3LRXpAspDR_PBTZ0Cw8bQDNw_GDh7zKHRR9-AO1qSHet9vtIONFXIA35OnUZm_hdYKbugLn7XuSVZlfD-oTS5i4UaBnhBHpkCp4ipGgk67vq981x4e__qib8Jk3q_W_Ln0xuBdQ/s751/CS2_power_memory%20scaling.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="751" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVt2dBYMfIBBvKwZeRBoFnpfoevGWYI8j8CidDh6XQMlWtv5MHSjcQ3LRXpAspDR_PBTZ0Cw8bQDNw_GDh7zKHRR9-AO1qSHet9vtIONFXIA35OnUZm_hdYKbugLn7XuSVZlfD-oTS5i4UaBnhBHpkCp4ipGgk67vq981x4e__qib8Jk3q_W_Ln0xuBdQ/w640-h362/CS2_power_memory%20scaling.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;A 4% uplift from memory scaling in CS2 is not bad...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The main problem, from my testing, is that the RX 9060 XT &lt;i&gt;cannot&lt;/i&gt;&amp;nbsp;overclock the core frequency - or at least mine cannot*. Even a +100 or +200 MHz frequency limit (note: it's a boost limit, not a hard setting!) will result in instability and random graphics driver crashes. I feel like this is unintended behaviour since the user is &lt;u style="font-weight: bold;"&gt;not&lt;/u&gt;&amp;nbsp;forcing the graphics card to boost higher, it should be boosting based on available thermal, voltage and power headroom. So, this seems like a bug at the time of testing - I will have to see if it gets fixed at some point in the future...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, the main take-away from all of this is that, unfortunately, all cards I will be testing will not be in the optimal configuration for this analysis - we would expect higher numbers for the RX 7800 XT, slightly higher numbers for the RX 7600 XT and, finally, very slightly higher numbers for the RX 9060 XT, despite the uncontrolled downclock, due to its memory bandwidth bottleneck...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*I'm sure people will tell you that you can - but how thoroughly have they tested? 
On what types of games and engines?&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Iso-Clock Benchmarking...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I guess we have to start somewhere and it seems I've gone through my tests in alphabetical order this time, so hang onto your hats for this wild ride of iso-clock* testing!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;span style="color: #274e13; font-style: italic; font-weight: bold;"&gt;&lt;/span&gt;&lt;blockquote style="font-style: italic; font-weight: bold;"&gt;&lt;span style="color: #274e13;"&gt;*As much as I was able!&lt;/span&gt;&lt;/blockquote&gt;&lt;div&gt;What's important to note about the test graphs is that I'm charting the "average moving fps", the "differential frametime" (aka dF) - which is an indicator of the relative smoothness of the experience - and the "maximum frametime".&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The way you can interpret these numbers: the average fps tells you the user experience across the entire test - it is not purely an average of the frametimes in the test (&lt;a href="https://hole-in-my-head.blogspot.com/2024/02/we-need-to-talk-about-fps-metrics.html"&gt;which would be wrong&lt;/a&gt;). Then we have the maximum frametime, which corresponds to the worst "stutter" in the test period (i.e. higher is worse!). Finally, we have the differential frametime, which tells you the number of times that the performance of the card deviated from the average beyond a certain limit. This limit is currently set at 3 standard deviations from the mean (for the tests where I wasn't using an in-game benchmark and was instead capturing the results directly - the exceptions being Avatar and Returnal). What the dF shows you is how many times a frame produced a positive, sequential frametime deviation large enough that the user might notice and feel it.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The only confounding factor here is that a higher framerate will generally result in a higher dF number if the same "process performance" is achieved compared with a lower-framerate GPU result. So, if you have a higher fps and a higher dF, it doesn't mean that your result is worse, it just means that it's less stable - but not terribly so! There's a rough sketch of how these metrics can be computed a little further down, for anyone who wants to try this on their own captures.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Give me feedback in the comments!&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;One last thing. I've performed the majority of these tests at 1080p and that's mostly due to the quality settings I'm testing against (I really want to challenge the RT performance of the 9060 XT) and also due to the general capability of the GPUs on show. 
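Here's that sketch: a simplified illustration of how metrics along these lines can be computed from a list of frametimes - average fps taken as frames over total time, and dF as a count of positive frame-to-frame jumps beyond 3 standard deviations. It's illustrative rather than the exact analysis script behind the charts:&lt;pre style="text-align: left;"&gt;
# Simplified illustration of the three metrics, computed from a list of
# frametimes in milliseconds. Not the exact analysis used for the charts -
# just the general idea.
import statistics

def summarise(frametimes_ms):
    total_s = sum(frametimes_ms) / 1000.0
    avg_fps = len(frametimes_ms) / total_s          # frames over total time
    max_ft = max(frametimes_ms)                     # worst single "stutter"

    # dF: count positive frame-to-frame jumps larger than 3 standard
    # deviations above the mean of the frame-to-frame differences.
    diffs = [b - a for a, b in zip(frametimes_ms, frametimes_ms[1:])]
    limit = statistics.mean(diffs) + 3 * statistics.stdev(diffs)
    dF = sum(1 for d in diffs if d &gt; limit)

    return avg_fps, max_ft, dF

fps, worst, df = summarise([16.6, 16.8, 16.5, 40.2, 16.7, 16.4, 17.0, 16.6])
print(f"avg fps: {fps:.1f}, max frametime: {worst:.1f} ms, dF: {df}")
&lt;/pre&gt;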
While the 7800 XT is clearly a 1440p card, the other two are struggling even at 1080p in many cases...&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgH4s1vaRM900AnNlmOPUUXyy1kFkb8NBBqg78o5Dwev41c0Uc0E8eN7HNtw0KSo_ajR5CwRKFXT9Bped716xRR8KkPthV06p_Uoj60YiyTYRqPJd85ATBLBZI-QNfVa8fQHgacdMxfwMe7-Myc961TwNNikFI93aqSEF-CXO5JCxISjmZIsJSP4PZE44E/s1481/ISO_AW2_Avatar.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="401" data-original-width="1481" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgH4s1vaRM900AnNlmOPUUXyy1kFkb8NBBqg78o5Dwev41c0Uc0E8eN7HNtw0KSo_ajR5CwRKFXT9Bped716xRR8KkPthV06p_Uoj60YiyTYRqPJd85ATBLBZI-QNfVa8fQHgacdMxfwMe7-Myc961TwNNikFI93aqSEF-CXO5JCxISjmZIsJSP4PZE44E/w640-h174/ISO_AW2_Avatar.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;*RX 7800 XT only achieved 2450 MHz in AW2...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;First off, let's look at Alan Wake 2. We can see here that the 9060 XT is performing well above its weight class at around 31 fps compared to 22 fps for the 7600 XT at essentially the same core and memory frequency. That's a pretty decent uplift, gen-on-gen. What that uplift doesn't do, is beat the -400 MHz core frequency that the RX 7800 XT is operating at. It clocks in at 37 fps showing that the wider compute really helps here. Additionally, I am pretty sure that the N32 part (7800 XT) has a greater bandwidth to the L3 cache as well as the GDDR6 - almost double, in fact!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Avatar shared a similar but even more impressive story - the gap between the 7600 XT and the 9060 XT reduced somewhat while it increased between the 7800 XT and the 9060 XT at Ultra settings. 
Meanwhile, at Low settings, the distance between the 7800 XT and 9060 XT widened to 33% but decreased from the 9060 XT to the 7600 XT to just 18% and I feel pretty confident that this behaviour is down to the memory bandwidth available as the processing power drops off at lower settings.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIScWx9aZtnFFwblXfk2sxWa4qLbwcQ-O_qGQmb8BNa3rtlajvh3umBoepVt0dJ3hhSkAp6Eof5sW6g7kfZc0hTwfxtltVgu2jTa1Bb7ZiBpvED2rRHIt2bSvSGL8pFGqjpSneB2-KUVk-b4PZ0Za7oiE2aHs7ZKQnEiYWZ21zEX3eKzzdVF33uJMfuBI/s1483/ISO_CS2_Hogwarts.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="401" data-original-width="1483" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIScWx9aZtnFFwblXfk2sxWa4qLbwcQ-O_qGQmb8BNa3rtlajvh3umBoepVt0dJ3hhSkAp6Eof5sW6g7kfZc0hTwfxtltVgu2jTa1Bb7ZiBpvED2rRHIt2bSvSGL8pFGqjpSneB2-KUVk-b4PZ0Za7oiE2aHs7ZKQnEiYWZ21zEX3eKzzdVF33uJMfuBI/w640-h174/ISO_CS2_Hogwarts.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;*RX 7800 XT achieved 2855 MHz in CS2 and 2456 MHz in Hogwarts...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;CounterStrike 2 is the first, and only, game where there's a table flipped: not only does the 7600 XT beat the 9060 XT in average fps but it also has a lower max frametime and dF value - indicating that when tested at iso-frequency, the RDNA3 part provides a better &lt;i&gt;and smoother&lt;/i&gt; experience!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Obviously, the 7800 XT blasts the other two GPUs out of the water. It doesn't even need to be said. The very large dF value might give you pause. However, the MUCH lower max frametime tells you that these deviations are smaller, if more frequent than on the other two GPUs. i.e. The RX 7800 XT is a beast at e-sports titles... 
and in this test, it is able to reach and maintain 2855 MHz throughout the test.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is really the first sign of weakness for the 9060 XT and, in my opinion, points to another weakness in that card - the lack of dedicated (or perhaps more accurately, managed) L1 cache.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I believe that this performance deficit can only be explained through added latency caused by having to travel out to L2 or L3 when on the 7600 XT, the game can work more closely to the CU in L1...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What's interesting is that no one else I've seen reporting on the RDNA4 architecture has covered anything similar to this*...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;&lt;a href="https://youtu.be/Zprtr_xOI30?si=q8GYtDAZijgLNOzC&amp;amp;t=602"&gt;*Of course, last night, eTeknix did so just to spite me!&lt;/a&gt;&amp;nbsp;(though not iso-clock testing)&lt;/span&gt;&lt;/blockquote&gt;&lt;a href="https://youtu.be/Zprtr_xOI30?si=q8GYtDAZijgLNOzC&amp;amp;t=602"&gt;&lt;/a&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We'll come back to this later.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Hogwarts shows an impressive lead over the other two cards for the RX 9060 XT at 47.8 fps, &lt;i&gt;just&lt;/i&gt;&amp;nbsp;inching past the 7800 XT at 46.9 fps. 
However, it's not a happy win for the new contender because we can see that the smoothness of the result is pretty poor compared to the RX 7800 XT with more sequential frametime deviations despite a slightly lower max frametime...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWDS5-xlNZGP8j_B6KCfpbvqxYJUepwLtSRF_oJulaRvw7Lsw8kCz_mKfe6ceIpGBIE25WlhAcTF5cAc4hs6vSvkouFFS58obSBMHqKMR78iTlnxFGO6crb9ofiIZ4Uzjlbnk6jY1b-XSRj6alxHmMFrKTCqLOdDX6DHalCYwndLBgpvhzQJJsVIJ2SQI/s1481/ISO_Indy_Ratchet.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="403" data-original-width="1481" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWDS5-xlNZGP8j_B6KCfpbvqxYJUepwLtSRF_oJulaRvw7Lsw8kCz_mKfe6ceIpGBIE25WlhAcTF5cAc4hs6vSvkouFFS58obSBMHqKMR78iTlnxFGO6crb9ofiIZ4Uzjlbnk6jY1b-XSRj6alxHmMFrKTCqLOdDX6DHalCYwndLBgpvhzQJJsVIJ2SQI/w640-h174/ISO_Indy_Ratchet.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;*RX 7800 XT achieved 2567 MHz in Indy and 2696 MHz in R&amp;amp;C...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Indiana Jones brings us back to the status quo, with the 7600 XT really struggling to play the game in a smooth fashion, even if the average is north of 60. We can see that as for Hogwarts, the ray tracing chops of the 9060 XT are gnawing at the workload but the experience is a little uneven in comparison to the 7800 XT. This may be a driver thing but it may also be a memory bandwidth issue.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If we take a look at the frametime plots for each configuration on the 9060 XT and 7800 XT we can see spikes throughout the test run. 
The spikes actually reduce on the 7800 XT when the memory is running at stock, instead of 2518 Mhz - which may indicate that the overclocked configuration is unstable in this title - it's able to push a faster frame but then instantly falls behind.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This game probably needs to work with an fps cap to be running more smoothly - at least on this selection of cards.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzx0XK_tq2XkwVYn2nN2vjnTbOty_cBbxpgc6k7bRhnGGR3JnP7cUsqhybftRMY0P2qWlg269inGRY4AhAjpycomLBpwibT6pB81VPYABlFnXla6_upHN3tBSOJh1IaffzbhAe2Q8YgQLbFKDWSpXvG3BXCGHPXpw5k5jJy7J0yVK0budnLcpew7zMK4M/s1124/Indy_Frametimes.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="680" data-original-width="1124" height="388" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzx0XK_tq2XkwVYn2nN2vjnTbOty_cBbxpgc6k7bRhnGGR3JnP7cUsqhybftRMY0P2qWlg269inGRY4AhAjpycomLBpwibT6pB81VPYABlFnXla6_upHN3tBSOJh1IaffzbhAe2Q8YgQLbFKDWSpXvG3BXCGHPXpw5k5jJy7J0yVK0budnLcpew7zMK4M/w640-h388/Indy_Frametimes.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Frame-pacing is quite bad in this title and on these cards. On a VRR display, I mostly don't feel it but some of the big drops below 16.6 ms are felt even with that...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Ratchet and Clamk, meanwhile, is another upset - this time in favour of the 9060 XT. It's essentially almost matching the 7800 XT, if not quite in terms of smoothness... but it's &lt;i&gt;very &lt;/i&gt;close. 
Insomniac's engine really likes RDNA architectures but the improvements on display here show that the way RT and data structures are managed in the console have probably reflected well on RDNA4 and probably 5 -&amp;nbsp; resulting in performance which is beyond other titles in this analysis.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;These titles are therefore statistical outliers and should be considered so...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1dKH0TXCCGbP4AwKdp2WBnPSV0yXJZku2xiesSj6HaliWTc8B3kyJSjGRTfbMJZ-inMud5t1tbdLlBYAqKuCL5I7hUqgD4S60hEmPG5mbwESnW-uGFKDNDmScvWCQmdBx7mXeM_lJEl7jaCl-KU8wJwSH_Kdf43bB9adwapy4iYIaq8icY6LKS8wNXxY/s1483/ISO_Returnal_Spiderman.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="401" data-original-width="1483" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1dKH0TXCCGbP4AwKdp2WBnPSV0yXJZku2xiesSj6HaliWTc8B3kyJSjGRTfbMJZ-inMud5t1tbdLlBYAqKuCL5I7hUqgD4S60hEmPG5mbwESnW-uGFKDNDmScvWCQmdBx7mXeM_lJEl7jaCl-KU8wJwSH_Kdf43bB9adwapy4iYIaq8icY6LKS8wNXxY/w640-h174/ISO_Returnal_Spiderman.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;*RX 7800 XT achieved 2813 MHz in Spider-man...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Returnal brings us back to normality.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The 9060 XT is better than the 7600 XT but not the 7800 XT. What's interesting is that the maximum frametime is better on the 9060 XT than the other GPUs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For Spider-man: Remastered, we're reflecting the results of Insomniac's engine: the 9060 XT is beating the 7800 XT in average fps and maximum frametime but matching in smoothness.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, overall, the RX 9060 XT is around 10% better than the RX 7600 XT in an iso-clock configuration if we ignore the outliers of the Insomniac engine games. 
If we take those biased games into account, it's a 16% gen-on-gen architectural increase.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In comparison, the RX 7800 XT is 36 % better architectural increase (ignoring the Insomniac outliers) and 30% better with the outliers included.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is bad...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Architectural Measurements...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Once again, I'll be taking advantage of the wonderful tests &lt;a href="https://nemez.net/projects/gpuperftests/"&gt;authored by Nemez&lt;/a&gt; to perform these microbenchmarks. Maybe one day I will have the time to actually be able to build the evolutionary descendant version of these tests from Chips and Cheese...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMEOf38EPbMcH_OrBR4cTf1bdvNc8J8cJc64dQns1yL2QCORiMn-BhAFHw6ylvbmqxpyUk4CTn1nZaf5fBG6QfQ2bmqi9R1z2mhx1YGtRynkgaJ93itmvPrZKYN7A7jxXxlwe_OY26gX04ISHezdvN7jWOfh9zeAa4stV-NN4hd7fpib_tnN_vHxli0ks/s1145/Architectural_1.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="687" data-original-width="1145" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMEOf38EPbMcH_OrBR4cTf1bdvNc8J8cJc64dQns1yL2QCORiMn-BhAFHw6ylvbmqxpyUk4CTn1nZaf5fBG6QfQ2bmqi9R1z2mhx1YGtRynkgaJ93itmvPrZKYN7A7jxXxlwe_OY26gX04ISHezdvN7jWOfh9zeAa4stV-NN4hd7fpib_tnN_vHxli0ks/w640-h384/Architectural_1.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;These tests measure the competencies of the core architecture, so we'd expect similar results gen on gen per resource unit...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;In an iso-frequency test, the RX 9060 XT has a slight archiectural improvement over the RX 7600 XT in all tests. 
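A quick footnote on how headline figures like the 10% and 16% above get produced: the per-title fps ratios are rolled up into a single number - the geometric mean is the usual choice for averaging ratios - with the outlier titles simply dropped from the list where noted. A minimal sketch, using placeholder numbers rather than my measured results:&lt;pre style="text-align: left;"&gt;
# Rolling per-title fps ratios up into a single uplift figure. The geometric
# mean is the usual way to average ratios; the values below are placeholders,
# not measured results.
import math

ratios = {          # fps of new card divided by fps of old card, per title
    "title_a": 1.12,
    "title_b": 1.10,
    "title_c": 0.97,
    "title_d": 1.22,
    "title_e": 1.35,   # outlier engine
    "title_f": 1.33,   # outlier engine
}

def geomean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

outliers = {"title_e", "title_f"}
without = [r for title, r in ratios.items() if title not in outliers]

print(f"all titles:       +{(geomean(list(ratios.values())) - 1) * 100:.0f}%")
print(f"without outliers: +{(geomean(without) - 1) * 100:.0f}%")
&lt;/pre&gt;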
Of course, the RX 7800 XT, having 1.875x more compute resources is around 1.8x the RX 7600 XT and 1.7x the RX 9060 XT - which is completely in line with expectations because these things do not scale linearly...(and we also know that the RX 7800 XT is power limited and may not even been able to maintain the required core frequency during these tests).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuoF3uRRovW8styLcSHmOFJEiKLiZ5iwtjD4cEBJT2yQ5e-izYbbOgAkPSjC8quu9OSSzpHxK7iiEw3xk3wGEYlU_sdZVmNE2xLfOM3Kmpyech1i6cemZgbPfD_p2oG5mA3v9u1YLVu7XoPb8O1kO0S7SnVk5fRhjtRZeyt1Ooq2XuWnGHbPtCR-otdKw/s1145/Architectural_2.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="687" data-original-width="1145" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuoF3uRRovW8styLcSHmOFJEiKLiZ5iwtjD4cEBJT2yQ5e-izYbbOgAkPSjC8quu9OSSzpHxK7iiEw3xk3wGEYlU_sdZVmNE2xLfOM3Kmpyech1i6cemZgbPfD_p2oG5mA3v9u1YLVu7XoPb8O1kO0S7SnVk5fRhjtRZeyt1Ooq2XuWnGHbPtCR-otdKw/w640-h384/Architectural_2.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Moving onto more complicated calculations, we had &lt;a href="https://hole-in-my-head.blogspot.com/2024/10/the-performance-uplift-of-rdna-3-part-2.html"&gt;previously observed&lt;/a&gt;&amp;nbsp;a big difference between RDNA 2 and RDNA 3 when comparing the RX 7600 XT to the RX 6650 XT. Here, though, the architectures are pretty much of a muchness! We do see the 7600 XT eke out a victory over the 9060 XT in FP16 inverse square root result.... 
which I'm sure is really useful.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, after the pretty boring calculation results, the cache and memory bandwidth results are quite the interesting story!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlSoR5iWf51NbjBq5FPyi4VgI-hbQssp3POGe58B-KpdQjXC4F5uh4R33Zg2KgFlXqulF3VkmnZqUdjYPesQpHi1jLB-SVLuKXkKLXTig7uQsCGtq3cy6b3dXd-ZXsNYxnBirLez5imDAMjFcodcszLpAQ4VRbNCi1o0RWjm9LrCWTBDyfe0v2KrrJIPg/s1009/Architectural_3.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="519" data-original-width="1009" height="330" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlSoR5iWf51NbjBq5FPyi4VgI-hbQssp3POGe58B-KpdQjXC4F5uh4R33Zg2KgFlXqulF3VkmnZqUdjYPesQpHi1jLB-SVLuKXkKLXTig7uQsCGtq3cy6b3dXd-ZXsNYxnBirLez5imDAMjFcodcszLpAQ4VRbNCi1o0RWjm9LrCWTBDyfe0v2KrrJIPg/w640-h330/Architectural_3.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The larger L2 cache in the 9060 XT makes a big difference...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The 7800 XT still massively wins in small data transfers but the 9060 XT matches and beats it in two areas in the graph - around 80 - 128 KiB and 1 - 8 MiB. For the former, I am wondering if the previously covered buffer between the compute unit and L2, paired with that wider and faster L2 cache, is the reason. The subsequent drop-off after 128 KiB makes sense given that (I believe) the buffer is still 128 KiB in size (as was the L1 cache in RDNA 3) but what &lt;i&gt;really&lt;/i&gt;&amp;nbsp;confuses me is the plateau between 1 - 8 MiB.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I don't really have a good mental model as to what's going on there. This result really blasts away the prior-gen equivalent 7600 XT and the much stronger 7800 XT. I could understand if this plateau occurred up to 4 MiB (the size of the L2 cache) but continuing up to 8 MiB is a little weird. Or, perhaps it's not... The 7600 XT also has a plateau from 2 MiB (L2 cache size) to 4 MiB (the 7800 XT has no such plateau until double the L2 cache size).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What is quite shocking is the abrupt drop-off &lt;i&gt;after&lt;/i&gt;&amp;nbsp;that plateau.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The other RDNA cards I've tested show a more gradual decline in bandwidth but once the 9060 XT is done, &lt;i&gt;it's done&lt;/i&gt;! 
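For anyone unfamiliar with how these bandwidth curves are generated: the microbenchmark sweeps a working set across a range of sizes and measures the achieved bandwidth at each one - wherever the working set spills out of a cache level, the curve steps down. Nemez's tests do this properly on the GPU; purely to illustrate the shape of the idea, here's a CPU-side toy version of the same sweep:&lt;pre style="text-align: left;"&gt;
# Toy, CPU-side illustration of a working-set sweep: measure achieved
# bandwidth while repeatedly streaming through buffers of increasing size.
# The real GPU microbenchmarks do this on the GPU with far more care - this
# only shows why the curves step down as each cache level is exceeded.
import time
import numpy as np

def sweep(size_kib, repeats=50):
    data = np.ones(size_kib * 1024 // 8, dtype=np.float64)
    start = time.perf_counter()
    for _ in range(repeats):
        data.sum()                      # stream through the working set
    elapsed = time.perf_counter() - start
    return data.nbytes * repeats / elapsed / 1e9   # GB/s

for size in [64, 256, 1024, 4096, 16384, 65536]:
    print(f"{size:6d} KiB working set: {sweep(size):6.1f} GB/s")
&lt;/pre&gt;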
And this will perhaps explain the CounterStrike 2 performance we're getting to, next...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRZnvuYouIHPaejYkdnhR86nu6YlwFeZN3JlCOlUWOpDyEwS4eOyCUUDbKKEkW8OdecivBJquOejd0y2_dALU1CpVRVOUefUtt4vc2JJbydcfuk3L3UldMHaro7jMdTgOWiyKgcOU3d_IvnEpF59SPkutETl7uaM3NzEN-dC4ayuNNcAsw6G7DZMVnKEE/s1009/Architectural_4.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="517" data-original-width="1009" height="328" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRZnvuYouIHPaejYkdnhR86nu6YlwFeZN3JlCOlUWOpDyEwS4eOyCUUDbKKEkW8OdecivBJquOejd0y2_dALU1CpVRVOUefUtt4vc2JJbydcfuk3L3UldMHaro7jMdTgOWiyKgcOU3d_IvnEpF59SPkutETl7uaM3NzEN-dC4ayuNNcAsw6G7DZMVnKEE/w640-h328/Architectural_4.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Looking at the vector and scalar cache latencies, we see that the RDNA 4 architecture suffers a latency hit WAY earlier than encountered for RDNA 3. What's happening here is that once the caches on the comupte unit are exceeded, the (former L1) buffer does nothing for latency and instead everything is predicated on the performance of the L2 - which is why we see it flat until the L2 is exceeded. Latency then takes another relatively large hit (compared to RDNA 3) when heading out to L3 cache, and once again when fetching to the VRAM.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This also ties in with the poor performance we will discuss for CounterStrike 2...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWtncKEAdK-ImiooU5kol679Loj8wmo189gyd0VKAQyp2BbHVOHQeIySB4a_8sdxIe5kCzsbQEn4yMDXKYREKCFVVnHJAv683wdsTK-qlSwJdFFczjsXGf-iEfn81ojbis1HDqnCslOzKldsovwt_guibNVltAI3OwNhVdSnQQ89Hv0nwS6kZFZATO2EQ/s1009/Architectural_5.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="515" data-original-width="1009" height="326" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWtncKEAdK-ImiooU5kol679Loj8wmo189gyd0VKAQyp2BbHVOHQeIySB4a_8sdxIe5kCzsbQEn4yMDXKYREKCFVVnHJAv683wdsTK-qlSwJdFFczjsXGf-iEfn81ojbis1HDqnCslOzKldsovwt_guibNVltAI3OwNhVdSnQQ89Hv0nwS6kZFZATO2EQ/w640-h326/Architectural_5.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Yes, this is a little broken for the RDNA4 part past 16 MiB... 
Ignore it.&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;E-sports failure...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's circle back and take a look at that CounterStrike 2 result where the RX 7600 XT beats the RX 9060 XT under iso-clock conditions...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEht89xzWb-ah2QIC7CqS2qwDwQGptM73Y_nhs5jMIiLW_3OIe8tK9KBv0IOEJfTKXMrmMmhb2YgebouDTVPgKQU3T5FVd2-u7HETWBzajzJtwmoQmLf6NDsVKBApopth5_z_5YAJrpkoRWlt4YmAM1sNwxKVAxk0_l1_8mhJZ2N6z_rMOmP8SABEZpqDNI/s1292/Scaling.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="364" data-original-width="1292" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEht89xzWb-ah2QIC7CqS2qwDwQGptM73Y_nhs5jMIiLW_3OIe8tK9KBv0IOEJfTKXMrmMmhb2YgebouDTVPgKQU3T5FVd2-u7HETWBzajzJtwmoQmLf6NDsVKBApopth5_z_5YAJrpkoRWlt4YmAM1sNwxKVAxk0_l1_8mhJZ2N6z_rMOmP8SABEZpqDNI/w640-h180/Scaling.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Now back to stock settings, we investigate the scaling in CS2...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unfortunately, as I mentioned previously, you can't reliably push the upper clock limit higher in the Radeon Adrenalin software - it just crashes the driver. I did manage to capture a couple of minor increments (between crashes!) and you can see the results of those in the above blue chart. What we can observe is that we get a 1% (5 fps) increase in average fps for a 200 MHz core increase. Additionally, we see a regression in performance if we switch the VRAM to the "tighter" timings in the Adrenalin software*.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*I recommend not using this on RDNA 4, the looser timings appear to be better for the memory system, in general from anecdotal testing...&lt;/blockquote&gt;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I already showed memory scaling earlier in this post. 
A 4% gain is nothing to write home about for 400 MHz of extra memory speed so, most likely, there's a bottleneck elsewhere in the architecture and it is more related to data management than core speeds and memory timings.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What is interesting is the scaling between settings on the three cards:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnzoyiWYpqAYn9vWDyal1oiVgBY0fcS5Wwiy24j40RIjKuq7GqMFXpAt3wYEwj5LqsEUuxCS0FG1oFd_xWOCQXjFQRxbuP2zzHIllyEIQmCfKakqdmfjBR1sjOVJms3WYYwPGxvq_YM3eebRo6OSuyHxNWSr8EkBz4jULzBqPfIoktvxnZ0t2kM3cK-UU/s737/Ratio_frequency%20scaling.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="147" data-original-width="737" height="128" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnzoyiWYpqAYn9vWDyal1oiVgBY0fcS5Wwiy24j40RIjKuq7GqMFXpAt3wYEwj5LqsEUuxCS0FG1oFd_xWOCQXjFQRxbuP2zzHIllyEIQmCfKakqdmfjBR1sjOVJms3WYYwPGxvq_YM3eebRo6OSuyHxNWSr8EkBz4jULzBqPfIoktvxnZ0t2kM3cK-UU/w640-h128/Ratio_frequency%20scaling.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Performance ratio and GPU utilisation... (Stock hardware)&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;At low settings, we can assume that we're at a CPU limit as the performance of all three cards is pretty much identical. Moving up to medium settings, the 9060 XT is crushing it - no effect from the settings increase! The other two cards do scale, with the 7600 XT (which has less cache and compute) suffering a bit more.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, we move up to the high settings and the 9060 XT just crashes and burns. Now, I'd like to explain this via the %GPU utilisation - both the 7600 XT and 9060 XT have reached their limit - but the 7800 XT is very close in utilisation as well and does not suffer this drop-off in performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I've said before that &lt;a href="https://hole-in-my-head.blogspot.com/2024/11/how-cpu-limited-is-modern-mid-range-pc.html"&gt;GPU utilisation isn't a number that you can blindly trust&lt;/a&gt;* and, yes, this is another of those occasions. I know for a fact that different workloads will present as 100% utilisation, despite not "utilising" the entire GPU. 
Hence why I test different types of games in my initial scaling tests when investigating an architecture and "where" I should be testing in terms of frequency, power and memory limits.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*In fact, &lt;a href="https://hole-in-my-head.blogspot.com/2023/09/the-problem-with-dumb-metrics-argument.html"&gt;I'm at the point where we can't just point to any simplistic metric as a real and trusted measure&lt;/a&gt; ...&lt;/span&gt;&lt;/blockquote&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You see, the 7800 XT climbs 15% utilisation from medium to high whereas the 9060 XT climbs 10% and the 7600 XT climbs just 4%. This indicates to me that the cards with the smaller L2/L3 cache are spending more time "utilised" in data management. It's the same VRAM capacity conversation that everyone's having, only applied to memory subsystems in the GPU. The 9060 XT's larger 4MB L2 saves it until it doesn't once the bandwidth to L3 has to be saturated.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4L7G-L8iliC2J2IDnBIvrba7ZjrxfpxqDUTJWOw4m9jvgOuUnGKXKhdmw9s8KXPzvQ-sUYrbTw8-t-FtlTUN1fNnhv8C8WOJtkuuRPVVJAKlT3y1JrHlfEWbfY12xQswB2dF_Em0CBOdjuJcmMw7d0XXYMDrAMEbmC1No8Jw96F-sfdbHUrXDc-kdFRs/s693/Settings%20comparison.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="263" data-original-width="693" height="243" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4L7G-L8iliC2J2IDnBIvrba7ZjrxfpxqDUTJWOw4m9jvgOuUnGKXKhdmw9s8KXPzvQ-sUYrbTw8-t-FtlTUN1fNnhv8C8WOJtkuuRPVVJAKlT3y1JrHlfEWbfY12xQswB2dF_Em0CBOdjuJcmMw7d0XXYMDrAMEbmC1No8Jw96F-sfdbHUrXDc-kdFRs/w640-h243/Settings%20comparison.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;CounterStrike 2 settings comparison...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Looking at the differences in quality settings, we can see that moving from medium to high has a large quality difference. If we take a look at the reason for the drop by manually changing each individual setting, we can see that the biggest offenders are the move from 2x to 4x MSAA, and turing off upscaling.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, I'm not the most knowledgeable person when it comes to rendering technologies. Upscaling I understand - we're talking about rendering at a lower resolution and scaling up to the native resolution (typically*) of the monitor you're outputting to. MSAA (&lt;a href="https://en.wikipedia.org/wiki/Multisample_anti-aliasing"&gt;MultiSample Anti-Aliasing&lt;/a&gt;) I'm less confident on, but if I understand correctly, it's a way of sub-pixel sampling which optimises for the quantity of triangles present within the sampled locations of the pixel. 
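&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As a rough sanity check on how heavy that gets, here's a back-of-the-envelope footprint calculation for the colour and depth targets at 1440p. The byte-per-sample figures are simplifying assumptions on my part (4 bytes per colour sample and 4 bytes per depth sample - real engines, formats and compression will differ), so treat it as an order-of-magnitude sketch only...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Back-of-the-envelope render target footprint at 2560x1440.
# Assumes 4 bytes per colour sample and 4 bytes per depth sample
# (a simplification - real formats and compression will differ).
width, height = 2560, 1440
pixels = width * height

def footprint_mib(samples_per_pixel):
    colour_bytes = pixels * samples_per_pixel * 4
    depth_bytes = pixels * samples_per_pixel * 4
    return (colour_bytes + depth_bytes) / (1024 ** 2)

for msaa in (1, 2, 4):
    print(f"{msaa}x samples: {footprint_mib(msaa):6.1f} MiB of colour + depth")
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Even the 1x case is several times larger than the 1 - 8 MiB region where the 9060 XT's bandwidth holds up, and 4x multi-sampling quadruples the amount of data being touched.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;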
From what the wiki article says, it seems that this is a very intensive process in terms of bandwidth and fillrate.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*We could render to a higher resolution and then allow a downscale to the native resolution. I do sometimes do this - for example, on 1080p screens...&lt;/blockquote&gt;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We know the 9060 XT has a higher latency for memory operations and if that 8 MiB limit is breached (seen above) then bandwidth falls off a cliff. In my understanding, that's what's happening - the higher resolution requires more data and more pixels to be assessed and the 4x MSAA is essentially doubling that work from the lower setting. Individually, each of these is pushing up against those data and bandwidth limits but combined destroy the RDNA 4 part.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlzqJn8XR1eh7zUc7be6jt6ThSdi8KF_dr8GZwMQf3G4gX30XT_7Sv3ln1jmhsXwk9WLjsQsBDc7i3RMRNNOUoO-5AaQ3z3EFcAQY7zbmXZciW0VLNTOJyACMkcn7ZY-qHI0hp5RHzPza4wCo0xFK-JmsDayBsnkp3Dd3LXpNqtGCHDoQ_94RHSYC5NA4/s717/Scaling%202.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="423" data-original-width="717" height="378" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlzqJn8XR1eh7zUc7be6jt6ThSdi8KF_dr8GZwMQf3G4gX30XT_7Sv3ln1jmhsXwk9WLjsQsBDc7i3RMRNNOUoO-5AaQ3z3EFcAQY7zbmXZciW0VLNTOJyACMkcn7ZY-qHI0hp5RHzPza4wCo0xFK-JmsDayBsnkp3Dd3LXpNqtGCHDoQ_94RHSYC5NA4/w640-h378/Scaling%202.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Trying to see what the cause of the precipitous performance penalty is... (Stock hardware)&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, here's what all this data is pointing towards: RDNA 4 doesn't have a massive Compute Unit architectural uplift over RDNA 3 - with the huge caveat that ray tracing improvements are quite large. We can see from the micro benchmarks that the 9060 XT is only minimally faster than the 7600 XT. 
Where the big change is coming is the cache hierarchy and it seems that this is both a positive and a negative - depending on the situation.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Where the RDNA 4 part is partially rescued is the fact that, like RDNA 2, RDNA 4 has another clockspeed bump - we're typically in the range of 400 - 600 MHz higher core frequency compared to the 7600 XT...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Stock Benchmarking...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Taking that into consideration, let's re-look at those iso-clock tests, now at the stock hardware configurations.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1IltsOt66oD8soYWPWcp9hfiNVi0js4oWZeyl125v5Fkpd4a12x6xBt9mfsTQTOdYYB-o3ZbpjuQuRCpHrFfuBMUxIVKFLb_Gbk76kos8qqIlxNBuJvKyiEN9ox6C1dRik1l3jqIP0svQtd98olkJPv8HAAlXXfH0N5z4tX5oDSqeVQbO6JMrR0vg1jA/s1477/Stock_AW2_Avatar.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="403" data-original-width="1477" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1IltsOt66oD8soYWPWcp9hfiNVi0js4oWZeyl125v5Fkpd4a12x6xBt9mfsTQTOdYYB-o3ZbpjuQuRCpHrFfuBMUxIVKFLb_Gbk76kos8qqIlxNBuJvKyiEN9ox6C1dRik1l3jqIP0svQtd98olkJPv8HAAlXXfH0N5z4tX5oDSqeVQbO6JMrR0vg1jA/w640-h174/Stock_AW2_Avatar.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;In Alan Wake 2, both RDNA 3 cards regress slightly due to the lower clocks but the RDNA 4 card doesn't improve much as the core frequency only raises by around 250 MHz - this game is pretty hard on RDNA cards, in general!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Avatar is pretty much the same story. The RDNA 4 part just doesn't perform better. 
In both of these tests, I think we're bumping up against a bandwidth or power limitation - though I have tested these configurations with the power raised as high as it will go and the RDNA4 part &lt;i&gt;does not&lt;/i&gt;&amp;nbsp;perform better.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlQo1CdBYp3EnXz1FFuc-7TJfd-6Ht8Nrd7BZ6QB9KMxXAFXTGNpGQ-LVf-hx75yGal5vug1VMVWuqCPPAhC4IjdSu4vaJiiV_Xwpyc0-ueM1XOiPFz5PrNfNQZ6dmHTRpo8Z_uC8sQjm1IjCsdYnZawOD8kxwVRyfQHiPKKj3WVLxBbUUdlJxPFQRD9Q/s1475/Stock_CS2_Hogwarts.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="401" data-original-width="1475" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlQo1CdBYp3EnXz1FFuc-7TJfd-6Ht8Nrd7BZ6QB9KMxXAFXTGNpGQ-LVf-hx75yGal5vug1VMVWuqCPPAhC4IjdSu4vaJiiV_Xwpyc0-ueM1XOiPFz5PrNfNQZ6dmHTRpo8Z_uC8sQjm1IjCsdYnZawOD8kxwVRyfQHiPKKj3WVLxBbUUdlJxPFQRD9Q/w640-h174/Stock_CS2_Hogwarts.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Counterstrike sees a small ~25 fps uplift for around a 470 MHz core clock increase. Hogwarts does see a slight increase for both 9060 XT and 7800 XT, with a small regression for the 7600 XT. It's possible that the 7800 XT was a little unstable with the increased memory speed and so that could explain the worse result in the "iso-clock" configuration.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJZrgnYelMa_d36QIhGzKM4Dlkn-puMxzumo9u1tl4w4NqRxUXhS62AXO4_69yPUYxGKw73nQ6pDLKzh8yVuJaMvzMTzdCuHGrwe6yvqqYNVUBYLaYHHex6sQNDMudWOqB2KRNeNieO6Y3c85-cyWm5ZLo_NmpY-26Sjn_qcy7trg7Qj1GS1gf_TzR4YM/s1477/Stock_Indy_Ratchet.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="403" data-original-width="1477" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJZrgnYelMa_d36QIhGzKM4Dlkn-puMxzumo9u1tl4w4NqRxUXhS62AXO4_69yPUYxGKw73nQ6pDLKzh8yVuJaMvzMTzdCuHGrwe6yvqqYNVUBYLaYHHex6sQNDMudWOqB2KRNeNieO6Y3c85-cyWm5ZLo_NmpY-26Sjn_qcy7trg7Qj1GS1gf_TzR4YM/w640-h174/Stock_Indy_Ratchet.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Indiana Jones has a slight improvement for the 9060 XT and an associated decrease for the 7600 XT - due to the changes in core clock speed on both cards: +340 MHz and -155 MHz respectively.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Ratchet and Clank has a pathetic 5 fps increase for 400 MHz core increase on the 9060 XT, while the other two cards drop 1-2 fps for their -180 MHz and -80 MHz core clock 
regressions.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtAlVJX32jSyiL0zwuKXj8U4OvLdixz2ZwfCGLbnGwGB5Qm-0X27j0H7cggDBnbcehbWAvFh1Lx6xlObL0iCPkDsYA4ozCRBF-Hv2qz2OUVcNeArL83k5fAxXVe1ZUw4fOv9UyN57WBY4O-22Tm6qda5b0CUQdwg_DwVHhkZtehd7h0Tp2XSvz5rxd_Cg/s1477/Stock_Returnal_Spiderman.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="405" data-original-width="1477" height="176" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtAlVJX32jSyiL0zwuKXj8U4OvLdixz2ZwfCGLbnGwGB5Qm-0X27j0H7cggDBnbcehbWAvFh1Lx6xlObL0iCPkDsYA4ozCRBF-Hv2qz2OUVcNeArL83k5fAxXVe1ZUw4fOv9UyN57WBY4O-22Tm6qda5b0CUQdwg_DwVHhkZtehd7h0Tp2XSvz5rxd_Cg/w640-h176/Stock_Returnal_Spiderman.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Returnal doesn't see much change at all. Similarly, we get a 4 fps boost for the 9060 XT and effectively the same performance for the other two cards.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;And, finally, Spider-man showed some very strange results - which I can only assume to be a driver issue on the latest Adrenalin drivers because both the 9060 XT and the 7600 XT performed worse than when I was manually setting clock limits. So, let's just leave that title aside, for the time being...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's very clear that RDNA 4 has a heavy focus on ray tracing (finally?) but isn't that impressive in other aspects compared to the prior generation. A lack of memory and cache bandwidth can't overcome the limitations when a workload overwhelms them.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;When I first started testing the RX 9060 XT, &lt;a href="https://x.com/Duoae/status/1941772032372334813"&gt;it felt weird&lt;/a&gt;. There was something off about it. Performance is &lt;i&gt;not bad&lt;/i&gt;&amp;nbsp;but also &lt;i&gt;not good&lt;/i&gt;. Performing slightly worse than the 54 Compute Unit RX 7700 XT is impressive in its own way but terrible when you consider that we didn't really get much of a generational uplift from 2020 to 2023 (from both Nvidia and AMD!) and the fact that the RX 7800 XT either matches the 9060 XT, or blasts it out of the water is an indictment on the over-segmentation of the GPU tiers within each generation. 
AMD need to be offering more CU per price point, not less.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Sure, the RX 9070 XT seems like a more impressive part (unfortunately, no similar comparison can be made there!) but it's not priced like a mid-range product. It's priced like a high-end part.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I don't dislike the 9060 XT but I also think it's priced too high. I bought mine for essentially MSRP, at €375*. It's merely &lt;i&gt;okay&lt;/i&gt;&amp;nbsp;at that price because the RX 7700 XT was available at €400 at points over the last year and a half. Sure, there's a decent uplift compared to the RX 7600 XT - but I bought that card for €320. When you factor in the price increase, you are only getting ~25% more performance in RT workloads and +10% more performance in rasterised workloads for your money.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's not great.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*You can now get them for around €360 - 370...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Here are my final thoughts: If you want to play the latest RT triple-A games at higher settings, you should be buying a more expensive GPU - the RX 9060 XT just doesn't have the grunt to pull that off. If you want to play rasterised games, then the 9060 XT is the cheapest 16 GB graphics card you can buy, and in that sense, it's worth it.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, if you want to play high fps e-sports titles, don't upgrade from any GPU within the last generation or two. You won't see a real benefit from RDNA 4 unless you're looking to spend above €600...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;[Update 29/08/2025]&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I've just realised that I forgot to add the uplift summary as I have previously done. 
So, here it is!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBsFjxc6IAS3vb8m1BIxVBBac54DhRCgwkaY8yDEIeKfmL1FuUlQvB5pGkChI-m7fcKXphgCefNMqlukYc7pLBQliEujYqt-jvJv0lZP8h_HOf53ym4oOEKfHDKsB9guf8wdQL_QCdGLS35X5iTOgXVd1aIXaJPthKNcnwaaicM9URo1ANExrAKchpnls/s519/Uplift%20summary.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="519" data-original-width="459" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBsFjxc6IAS3vb8m1BIxVBBac54DhRCgwkaY8yDEIeKfmL1FuUlQvB5pGkChI-m7fcKXphgCefNMqlukYc7pLBQliEujYqt-jvJv0lZP8h_HOf53ym4oOEKfHDKsB9guf8wdQL_QCdGLS35X5iTOgXVd1aIXaJPthKNcnwaaicM9URo1ANExrAKchpnls/s16000/Uplift%20summary.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Compared to the &lt;a href="https://hole-in-my-head.blogspot.com/2024/10/the-performance-uplift-of-rdna-3-part-2.html"&gt;uplift of RDNA3&lt;/a&gt;, RDNA4 fixes a lot of things that were holding the architecture back... (Performed at iso-clock settings)&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;I actually performed some extra non-RT tests which I've included above and, as you can see, the doubling of the RT throughput in each DCU is where the majority of the gen-on-gen performance uplift comes from. The averaged non-RT performance uplift comes to 1.15x, whereas the averaged RT performance uplift comes to 1.34x. Looking at the micro benchmark results, we see an average of 1.05x uplift over RDNA3.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;All of these combined results imply that the majority of the remaining performance uplift lies with the improvements to the data management in the cache hierarchy.&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2025/08/the-performance-uplift-of-rdna-4-and.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWZyN1zYV2lTWZhqgjGQ8OeVpbWMOHtfEYYbDdGlQ7dPVZpdaBMQGUveZCFFeUj1-8M5TN1WySgibi1Tb1x-sTnoH2CKe5EJnyE9jOCUOzAn3_VFOINags3_dbnEKhB1TlXS2dJl1ya3LEdJ-Khf73AvrE74DatsfSYPVAWQNzrvwqB20SvXFbWpgfIzY/s72-w640-h360-c/Title.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-5624261965138619584</guid><pubDate>Sat, 24 May 2025 07:37:00 +0000</pubDate><atom:updated>2025-05-24T08:37:28.675+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">curmudgeon</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">screenestate</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>So, What's Next...? The rate of advancement in gaming performance... 
(Part 2) Is PC gaming dying?!</title><description>&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNQluatMHUF9_SxItA7G2dZNGVqLF5fmXfnbAr1-r5a7pszEWj-uJ-v_l059rZWI6OLhIedqIMzl8pT_gw8jsVbpPqAzilNMPByNr73l1Ev1PdkPqzP_p4dPLLRFHmdXLaURK2QuWCiJRRvTzpFXdZ946rBPCl6H9Ea4jcO7Zv_J0puTaqsgNuvnnvz8A/s1920/Header.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNQluatMHUF9_SxItA7G2dZNGVqLF5fmXfnbAr1-r5a7pszEWj-uJ-v_l059rZWI6OLhIedqIMzl8pT_gw8jsVbpPqAzilNMPByNr73l1Ev1PdkPqzP_p4dPLLRFHmdXLaURK2QuWCiJRRvTzpFXdZ946rBPCl6H9Ea4jcO7Zv_J0puTaqsgNuvnnvz8A/w640-h360/Header.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;A couple of years ago, I looked at the &lt;a href="https://hole-in-my-head.blogspot.com/2022/02/the-rate-of-advancement-in-gaming.html"&gt;rate of advancement of GPUs over time&lt;/a&gt;. It's the only way of tracking how the user experiences technological advancement. Recently, we've seen a lot of movement in &lt;a href="https://youtu.be/3E82_nElkkw?si=crYryOjc5iohSNA4"&gt;the industry&lt;/a&gt; &lt;a href="https://youtu.be/wxNHZZDfEI0?si=grjXPufzZDTV5wId"&gt;of various&lt;/a&gt; &lt;a href="https://youtu.be/rsJINWKTKOc?si=Fqbwj_7paQD8A3oF"&gt;commentators&lt;/a&gt; &lt;a href="https://www.youtube.com/watch?v=JR1A-4MBlW4&amp;amp;ab_channel=Vex"&gt;reaching&lt;/a&gt; &lt;a href="https://youtu.be/3LQOToy6e4c?si=3CjSGiKrCRKYoPQY"&gt;the same&lt;/a&gt; &lt;a href="https://youtu.be/cvL-Mplhog8?si=iUXOuIaIIRq_fqoo"&gt;conclusions&lt;/a&gt; I did, way back when... 
Or, maybe I'm a particularly pessimistic person and these thoughts are the logical conclusion that people will come to, given enough time and data.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But given that &lt;a href="https://hole-in-my-head.blogspot.com/2021/07/the-uncertain-future-of-pc-gaming-plus.html"&gt;I was&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2021/08/the-discrete-gpu-market-and-where-were.html"&gt;ahead of&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2021/09/does-amds-dominance-even-matter.html"&gt;the curve&lt;/a&gt;, I feel I have a little more to give than even the current trends in discussion...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;&lt;span style="color: #274e13;"&gt;Recappping the past...&lt;/span&gt;&lt;/h3&gt;&lt;br /&gt;To briefly summarise where we are, let's look at how things have progressed since the last time I covered this topic and I can do so in just a single sentence: &lt;u&gt;we're shipping fewer dedicated GPUs (dGPUs) than even in 2021.&lt;/u&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEjxeTgQElKF0cTOGWuBEygUCW4dvkuVNOSaLlrj00h2HMyr5JGkNuC4JZ0nDY3qVIAgRldvcLWgWA7PzFDqmiyrqZO1_7CUrTkyuu9C5mAJJlYr8XwestKW3FAkBdBHWdjDqN9esYQhgfgfpbBMxJ48HruoSig_7UF3RfBD8cDG53EMGUyBNLP4uCaThuY" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="535" data-original-width="1002" height="342" src="https://blogger.googleusercontent.com/img/a/AVvXsEjxeTgQElKF0cTOGWuBEygUCW4dvkuVNOSaLlrj00h2HMyr5JGkNuC4JZ0nDY3qVIAgRldvcLWgWA7PzFDqmiyrqZO1_7CUrTkyuu9C5mAJJlYr8XwestKW3FAkBdBHWdjDqN9esYQhgfgfpbBMxJ48HruoSig_7UF3RfBD8cDG53EMGUyBNLP4uCaThuY=w640-h342" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;AMD's marketshare on new GPUs has dropped, and that's not news... [&lt;a href="https://www.tomshardware.com/tech-industry/amd-grabs-a-share-of-the-gpu-market-from-nvidia-as-gpu-shipments-rise-slightly-in-q4"&gt;Tom's Hardware&lt;/a&gt;]&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;Now, these charts are not a 100% fair comparison. Back in 2006-2010, iGPUs were not the predominant force they are in the modern world &lt;i&gt;and&lt;/i&gt;&amp;nbsp;laptops constitute a larger proportion of the market than was even dreamed about in that time period.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;According to Canalys, &lt;a href="https://canalys.com/newsroom/global-pc-shipments-q4-2024"&gt;laptop shipments comprise of 80% of the market&lt;/a&gt;. So, we should expect some level of drop to have occurred over this time period as laptops became the predominant force. The problem with this accounting is that it completely ignores direct-to-consumer data, meaning that the number of desktops made by consumers is missing from this dataset. 
The other aspect to consider is that dGPUs can be attached to *&lt;i&gt;existing&lt;/i&gt;* desktops and so, this explains the difference in yearly dGPU shipments and "PC desktop" shipments from system integrators.&lt;br /&gt;&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjocHlJY0CZanoOKRaZ02WwmbBT2wH2phXzyRfC31uQMNJHFJCQ1HJ3nHH5hkIo0h7i7CZxzwUmcOAs7CXw5xWHWBUq7fr08kSbAwsmln5Jo-RDxUujWXyv5cC6rekD_8Xcmfmy7Wov-MRiZHi2Y_-5nvqbUuyJmZDVklnfOrwDRj6gkNKkAeAF1RnseMQ/s900/Canalys%20global%20PC%20shipments.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="506" data-original-width="900" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjocHlJY0CZanoOKRaZ02WwmbBT2wH2phXzyRfC31uQMNJHFJCQ1HJ3nHH5hkIo0h7i7CZxzwUmcOAs7CXw5xWHWBUq7fr08kSbAwsmln5Jo-RDxUujWXyv5cC6rekD_8Xcmfmy7Wov-MRiZHi2Y_-5nvqbUuyJmZDVklnfOrwDRj6gkNKkAeAF1RnseMQ/w640-h360/Canalys%20global%20PC%20shipments.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The total market in 2024 was around 68 million units, with desktops making 13.6 million of that number... [&lt;a href="https://canalys.com/newsroom/global-pc-shipments-q4-2024"&gt;Canalys&lt;/a&gt;]&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;In case the graphs don't really tell the story well-enough, the problem is that &lt;a href="https://www.pcgamer.com/gaming-industry/new-report-says-pc-games-are-outselling-console-games-calling-pc-gaming-a-bright-spot-in-a-troubled-industry/"&gt;PC gaming is stronger&lt;/a&gt; and more popular than ever. We're talking &lt;a href="https://explodingtopics.com/blog/pc-gaming-stats"&gt;a ~40% increase in PC gamers since 2008&lt;/a&gt;. The existing PC gaming install base is huge and &lt;a href="https://hole-in-my-head.blogspot.com/2022/02/the-rate-of-advancement-in-gaming.html"&gt;the stagnation we've seen in hardware&lt;/a&gt; is only adding to this increase as older systems remain relevant and do not require as many upgrades to be able to play a majority of games. This means that GPUs are not only being sold to new builds, they're sold to generations-old system owners, which means that the Total Addressable Market (TAM) is actually very large.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We've seen this disparity between supply and demand play out in real-life over the last half a year as both AMD and Nvidia stopped producing their last gen higher end GPUs, to &lt;a href="https://www.pcgamer.com/hardware/graphics-cards/where-the-af-are-all-the-graphics-cards-its-not-just-the-new-rtx-50-series-thats-impossible-to-buy-finding-any-decent-gpu-in-stock-at-the-major-us-retailers-right-now-is-like-staring-into-an-abyss-of-nothing/"&gt;scenes of completely empty online and physical stores for these products&lt;/a&gt;. 
It's not a country-specific phenomenon, it's worldwide.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;It's simple: &lt;b&gt;&lt;u&gt;&lt;span style="color: #274e13; font-size: medium;"&gt;There are not enough GPUs produced to service the market.&lt;/span&gt;&lt;/u&gt;&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEhxPsmZ8xpKhsIERpwfmYvfJOSuTn3tqXJ6cGnjd0ym0N_fXAFobfzwQ9KF36uLSOVXSNFxwSBkHdZhY8RGkXsFYyoDqrOV3apXDlwppY1lp_7xe0BQaj3WUg44KBE4TYh4pPyb4ABdDeDaQTlvfpORI4rpe-aLVMc4asq2QuK_xtlyw293KCfitfgYWSY" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="535" data-original-width="1002" height="342" src="https://blogger.googleusercontent.com/img/a/AVvXsEhxPsmZ8xpKhsIERpwfmYvfJOSuTn3tqXJ6cGnjd0ym0N_fXAFobfzwQ9KF36uLSOVXSNFxwSBkHdZhY8RGkXsFYyoDqrOV3apXDlwppY1lp_7xe0BQaj3WUg44KBE4TYh4pPyb4ABdDeDaQTlvfpORI4rpe-aLVMc4asq2QuK_xtlyw293KCfitfgYWSY=w640-h342" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Overall, we're now shipping around 40% of dGPUs per year compared to 2006...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;And that tells us that, despite laptop dominance of the full system (complete solution) market, the reduction in dGPU shipments is outpacing the increase in marketshare of all-in-one solutions like NUCs and laptops in the consumer space.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If GPUs are almost &lt;a href="https://www.pcgamesn.com/nvidia/rtx-4070-4090-stock-issues"&gt;instantly out of stock at inflated prices&lt;/a&gt; after a couple of months stoppage in supply, it means that the market was not anywhere close to being serviced.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If new GPUs &lt;a href="https://videocardz.com/newz/amd-radeon-rx-9070-pricing-affected-by-supply-greed-and-selective-rebate-system"&gt;can launch&lt;/a&gt; and &lt;a href="https://youtu.be/yyufZd7W5Kc?si=9Gzg3VLn-IaK2vV1"&gt;instantly have price hikes&lt;/a&gt; to &lt;a href="https://www.techpowerup.com/332164/asus-msi-us-official-stores-raise-geforce-rtx-5090-5080-msrps?cp=2"&gt;multiple tens of percent above MSRP&lt;/a&gt;, it means that the market is not being serviced.&lt;br /&gt;&lt;br /&gt;If GPU makers can &lt;a href="https://hole-in-my-head.blogspot.com/2023/10/rtx-40-series-aka-did-nvidia-jump-shark.html"&gt;consistently market&lt;/a&gt; &lt;a href="https://youtu.be/O0srjKOOR4g?si=9w8XlRZj5WVkShWM"&gt;a lower-end product&lt;/a&gt; &lt;a href="https://youtu.be/J72Gfh5mfTk?si=YY_Nq1T1oqINWfYu"&gt;as a higher-end one&lt;/a&gt;, then it means that the market is not being serviced.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If GPUs already on the market do not lose their value over their lifespan, then it means that the market is not being serviced.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;br /&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td 
style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjutBKOpB_o9SKqeOUE8whylk-bfXD3rwGUX1weeWRdRwcYjMHD71-GTTjKH5YDChYGh9GoG-Q6En7EgPz76MR3oRHhXUDmf_J3ZORg1RwoVYiP3r3Bu9WtWw5R9ZPuDE_ke3o1AwslDBULrdUfms_m2G9XcOjHxsVWEBQEn7uIdVnIwRpYqEoROxWZvs/s1184/Class_summary%20perf.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1011" data-original-width="1184" height="546" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjutBKOpB_o9SKqeOUE8whylk-bfXD3rwGUX1weeWRdRwcYjMHD71-GTTjKH5YDChYGh9GoG-Q6En7EgPz76MR3oRHhXUDmf_J3ZORg1RwoVYiP3r3Bu9WtWw5R9ZPuDE_ke3o1AwslDBULrdUfms_m2G9XcOjHxsVWEBQEn7uIdVnIwRpYqEoROxWZvs/w640-h546/Class_summary%20perf.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;&lt;a href="https://hole-in-my-head.blogspot.com/2023/10/rtx-40-series-aka-did-nvidia-jump-shark.html"&gt;As I said back then&lt;/a&gt;, the performance uplift of the 40 series was the smallest since I started trending (i.e. the GTX 9 series)...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;As I mentioned earlier, there is stagnation in GPU hardware but the issue with fewer dGPUs being manufactured year-over-year in a market that is larger than ever is that we have attrition. The rate of release of new GPU generations has slowed to a historical crawl. That's expected based on the slowing of technological advancement in the lithography sector, meaning that new manufacturing process nodes are taking longer and are more expensive to reach maturity. Additionally, low-hanging fruit in terms of algorithms and hardware architecture design have already been plucked and, outside of a complete redesign in how computation is performed and data is transmitted, there are very few changes to make for marked improvements*.&lt;/div&gt;&lt;div&gt;&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;*I say this but it seems RDNA4 has done &lt;/i&gt;just this&lt;i&gt; and fixed a load of bugs which have resulted in some of the largest performance increases per unit of compute that I've ever seen...&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/span&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;All of this means that as GPUs in consumers hands age-out, die of malfunctions &lt;a href="https://news.ycombinator.com/item?id=27203530"&gt;due to heat and silicon degradation&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=n3p14E4EhUQ&amp;amp;ab_channel=TechYESCity"&gt;or accidents&lt;/a&gt;, &lt;i&gt;and&lt;/i&gt;&amp;nbsp;because they no longer meet the &lt;a href="https://www.reddit.com/r/indianajones/comments/1h5y6e0/indiana_jones_and_the_great_circle_pc/"&gt;performance requirements for modern games&lt;/a&gt; (or the particular user's expectations/preference), the pool of "viable" dGPUs already in the market shrinks over time.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Given the size of the PC desktop gaming market, and the decreasing quantity of dGPUs shipped per year, it is not unreasonable to realise that, at some point, there will be an intersection as the demand outstrips the supply. 
This would cause a tipping point whereby runaway price inflation would not only occur at the point of sale but also in pricing structures from the manufacturers.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;I believe we have already passed this point.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The high prices of GPUs are not due to increased prices in manufacturing*, they are not due to competition with cryptocurrency miners, nor AI farmers**, &lt;a href="https://www.pcworld.com/article/2637566/pc-builder-says-theyre-getting-scalped-for-gpus.html"&gt;they are not solely due to scalping&lt;/a&gt;***. The RTX 40 series and RX 7000 series did not see price drops as the generation progressed. That leaves one reason - &lt;b&gt;&lt;a href="https://hole-in-my-head.blogspot.com/2021/08/the-discrete-gpu-market-and-where-were.html"&gt;chronic undersupply...&lt;/a&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;*They have increased but not by any proportional amount close to what we see at point of sale.&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;**There are different products which cater to those markets.&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;***&lt;a href="https://videocardz.com/newz/amd-radeon-rx-9070-pricing-affected-by-supply-greed-and-selective-rebate-system"&gt;The base MSRP price is increased per tier of performance before we even get to the point of scalping&lt;/a&gt;.&lt;/span&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;div style="text-align: left;"&gt;&lt;span style="text-align: justify;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;span style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEgCYM6Uju-5rVzOdtGonvDWs-fc2k5dpxYkAui-OBvQNKPgUeCXlRXQc0KN0fcxgT5UjqnHGy27RJST1y6YviSEcTgffjcR3vJ24Dfuf4rhoGqRXT7GUBCd3jF0NihKNtkQrNygiHoB8MGdHnprfVMqYPSKIphKb-4FSUjPeKUeb3bOu6gq7zqCQo0c0jg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="517" data-original-width="839" height="394" src="https://blogger.googleusercontent.com/img/a/AVvXsEgCYM6Uju-5rVzOdtGonvDWs-fc2k5dpxYkAui-OBvQNKPgUeCXlRXQc0KN0fcxgT5UjqnHGy27RJST1y6YviSEcTgffjcR3vJ24Dfuf4rhoGqRXT7GUBCd3jF0NihKNtkQrNygiHoB8MGdHnprfVMqYPSKIphKb-4FSUjPeKUeb3bOu6gq7zqCQo0c0jg=w640-h394" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;I've never seen a product have such a consistent price - and it's not just this model... 
[&lt;a href="https://de.camelcamelcamel.com/product/B0BZDYYS74"&gt;CamelCamelCamel&lt;/a&gt;]&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;h3 style="text-align: left;"&gt;&lt;span style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Expectations...&lt;/span&gt;&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: left;"&gt;&lt;span style="text-align: justify;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Simultaneously, with the lack of hardware and hardware uplifts per price point, we have a new paradigm shift in that good quality 1440p high refresh rate monitors are becoming affordable. This means that &lt;a href="https://youtu.be/S10NnAhknt0?si=6LYLDQbjSOpQcV97"&gt;1440p is now the base resolution, instead of 1080p&lt;/a&gt;. GPU performance at 1080p is good for graphics card comparisons but a large (and growing) proportion of PC gamers will be moving to 1440p as their expected GPU requirement. Additionally, 30 fps is no longer acceptable and 60 fps is a bare minimum. Gamers want and expect higher refresh rates to be supported by their GPUs, especially when they are costing so much to purchase.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This has two repercussions: a greater than generational uplift for each GPU tier in performance; and increased VRAM requirements.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, at the low end of the spectrum in the &lt;i&gt;not so cheap&lt;/i&gt;&amp;nbsp;$300-400 range, GPUs are worth less than they were and are at the point of being relegated to legacy gaming and low-end gaming. It used to be that the mainstream GPU would get you good performance at the mainstream resolution and medium to high quality settings. Yes, you can look at the &lt;a href="https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam"&gt;Steam Hardware Survey&lt;/a&gt; but this data is highly affected by markets which are not focussed on the new hardware. Sure, existing gaming PCs have 1080p screens, but what are they going to buy when they upgrade? What are &lt;i&gt;new&lt;/i&gt;&amp;nbsp;PC builders going to purchase? 1080p? Or a nice cheap, WAY better 1440p IPS high refresh rate screen? &lt;a href="https://x.com/Duoae/status/1785300520287994243"&gt;I know what &lt;i&gt;&lt;b&gt;I&lt;/b&gt; &lt;/i&gt;chose&lt;/a&gt;... and if you're spending more than $300 on a GPU, the average person probably will, too! Hell, you can get them &lt;a href="https://www.amazon.com/ASUS-Gaming-1440P-Monitor-VG27AQ3A/dp/B0BZR9TMBJ/ref=sr_1_3?crid=1M5LYZBRFXO10&amp;amp;dib=eyJ2IjoiMSJ9.YRofmkZkOeRWalo_xOfeWvBfCk9QJM61xeIn0p5xyQlEk32GiP8gYv8issDAOP16ckQ6V9UQVtoOXbXlH3DH7uzINuoRBoLcZEsiQFqyL09zFrrREKZnzK3zU5neLNk5Dekf7f5-v83zwZYsU-JMZ2U0Uovy4Unkb0BHp6oBvUBneaNXgfSY9zD_JLtOjlPC2CbzpEHjVL7hRSf0gOuFmAoF08EwpkRHt1sdISxg4PY.hOT2rX8DIB_5Ou3hZRuhm9ZEJJeKfhYqsdSzzPkgR1w&amp;amp;dib_tag=se&amp;amp;keywords=1440p%2Bips%2Bmonitor&amp;amp;qid=1748014274&amp;amp;sprefix=1440p%2Bips%2Bmonito%2Caps%2C198&amp;amp;sr=8-3&amp;amp;th=1"&gt;for around $200&lt;/a&gt;, now... 
That's cheaper than the crappy $300 mainstream GPU we're talking about!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbcsa1yHDyX_uMQ3uZwt7STv9c4zDON2KgTMJ8Bd8vKIZ39Q6sih5X8ub5n8NBhDG5SGnXVTkWH5SLxuairmDBWf7x_kpmAW-H3_Pubxgf_3b81fr9kn5V3SYeB04rcjW8YKEsNdhTuuL0jWLEygigOfeLusaGpRwZUErx1MTHZNMhKZ7MryDiHx_KOUE/s403/60%20series%20performance.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="395" data-original-width="403" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbcsa1yHDyX_uMQ3uZwt7STv9c4zDON2KgTMJ8Bd8vKIZ39Q6sih5X8ub5n8NBhDG5SGnXVTkWH5SLxuairmDBWf7x_kpmAW-H3_Pubxgf_3b81fr9kn5V3SYeB04rcjW8YKEsNdhTuuL0jWLEygigOfeLusaGpRwZUErx1MTHZNMhKZ7MryDiHx_KOUE/s16000/60%20series%20performance.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RTX 5060 is bringing consumers all the way back to prior eras when Nvidia wasn't giving enough performance to meet then-current demands...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;I &lt;a href="https://x.com/Duoae/status/1925179175570182429"&gt;recently posted on Twitter&lt;/a&gt; that Nvidia's 70 class has seen a 130% performance uplift since the RTX 2070 to the RTX 5070. Meanwhile, the 60 class has seen an improvement of just 45%. If we saw that same level of improvement in the 60 class, the RTX 3080 would be the level of performance we should expect... somewhere slightly faster than an RTX 4070 12 GB. 
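&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If you want to sanity-check that comparison, the arithmetic is trivial - the sketch below just applies both cumulative uplift figures from my tweet to a normalised 20-series baseline (the 1.0 baseline and the mapping back to specific cards are my own framing, not an official benchmark)...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Applying the cumulative class uplifts (RTX 20 series to RTX 50 series)
# to a normalised baseline of 1.0 for the respective 20-series card.
baseline = 1.0
uplift_70_class = 1.30    # +130% from RTX 2070 to RTX 5070
uplift_60_class = 0.45    # +45% from RTX 2060 to RTX 5060

actual_5060 = baseline * (1.0 + uplift_60_class)
hypothetical_5060 = baseline * (1.0 + uplift_70_class)

print(f"RTX 5060 as shipped      : {actual_5060:.2f}x an RTX 2060")
print(f"RTX 5060 at 70-class pace: {hypothetical_5060:.2f}x an RTX 2060")
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;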
The RTX 5060 doesn't even meet a consistent RTX 3070 level of performance...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;This brings us to the uncomfortable question: If there are no (or very little) generational performance uplifts in the price range that the majority of gamers buy, what do consumers and developers do?&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;Well, &lt;a href="https://x.com/AzorFrank/status/1925651286998794443"&gt;you get behaviour like this&lt;/a&gt;...&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXVg3TKolCguwST0jSMIdTQay3PxSXeqorrmOShFsUUTFiy1GHK_0csGXrml0gVsHAkwkN9PsjL2fvjex72HYMY13MAJk8w6MKWN96kMruDdaJU6XfK79X4ZQsUjhjd6PNxlg12y7m7WPmtNdrsYWhRP8oawmxykZyQTth8TxBWiHtbPAgWpLJy29H0fM/s1080/1000006923.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="949" data-original-width="1080" height="281" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXVg3TKolCguwST0jSMIdTQay3PxSXeqorrmOShFsUUTFiy1GHK_0csGXrml0gVsHAkwkN9PsjL2fvjex72HYMY13MAJk8w6MKWN96kMruDdaJU6XfK79X4ZQsUjhjd6PNxlg12y7m7WPmtNdrsYWhRP8oawmxykZyQTth8TxBWiHtbPAgWpLJy29H0fM/s320/1000006923.jpg" width="320" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Confusing causation with choice is a big problem, here...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;Playing at 1080p for many years, including to this day on my TV, I have plenty of reasons to want more than 8 GB VRAM. Graphically demanding titles require more, they did back in 2020 when I was playing Cyberpunk 2077 on my RTX 3070 with raytracing enabled. People aren't buying 8 GB graphics cards because they want to, &lt;u&gt;&lt;b&gt;they are doing it because there isn't an alternative&lt;/b&gt;&lt;/u&gt;.&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, coming back to the awkward question - consumers keep buying the graphics cards at the prices they can afford, and developers keep advancing games which require more VRAM because monitor and TV ownership is increasingly moving to 1440p and 4K resolutions. 1080p is the 900p of 2012. It's dead, on its way out - the bargain bin of resolutions. Look, Nintendo have chose 1080p as the handheld screen resolution for the Switch 2. You know, Nintendo, that company that likes to live on the forefront of technol-oh! What's that you say? They &lt;i&gt;don't&lt;/i&gt;?!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, in 2025, 1080p is now "handheld" resolution.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is all bad for gaming, as a whole. There's no upside. 
We have almost complete stagnation - both in VRAM and in graphics performance - in the largest segment of the hardware-buying market.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Consequences...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;All of this is leading to a very unhealthy gaming hardware market! Continued high prices on new parts, paired with the lack of performance uplift, are keeping used PC part prices high as well. That means there's no trickle-down effect from new gamers upgrading and so less affluent gamers can't game like they could in times past.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The low supply means that AIBs suddenly have lower sales and revenue numbers, and this will necessitate higher prices on their side as they need more profit per unit sold in order to keep their employees in their jobs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Developers will suffer more complaints and (potentially) lower sales as players can't get cards that perform well or cards that can even manage to play the game (e.g. if ray tracing is a requirement).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Reviewers will get fewer views on their content and so less revenue, potentially leading to fewer independent outlets being able to operate in the hardware review space.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This also comes hot on the heels of&amp;nbsp;&lt;a href="https://wccftech.com/geforce-rtx-5060-driver-wont-be-releasing-before-retail-launch/"&gt;the recent shenanigans Nvidia pulled&lt;/a&gt; with the RTX 5060 by denying reviewers the ability to give information to the consumers, meaning that we're likely to see negative effects on reviewers, on "review" sites which capitulated to Nvidia's demands, and on users having less money in their pockets to purchase hardware that is worthwhile...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's a pretty negative-feeling time in the industry (at least from my perspective).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There are some positives, though. 
Just like the Intel/AMD CPU wars, the stagnation in performance across Nvidia's entire stack is allowing AMD to catch up with their GPUs*, while the stagnation at the bottom of the product stack is also giving Intel a chance to catch up and release a decent product at a decent price.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, this is a pretty slim chance but it's a potential silver lining on the horizon...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;*If only they had some competitive pricing to match!&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Moving Forward...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'd like to see both real competition from Intel at the low-end of the graphics card market, and better products from all three manufacturers at the low end. I'd prefer that they made better products across the stack but let's focus on where we need the most improvement...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, until graphics cards are manufactured in enough supply to satisfy the market, things will never be fixed. We need a total output across all manufacturers of around 50 million units per year to correct things a little and the current trend is heading in the opposite direction entirely.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Previously, I had been cautiously optimistic that Nvidia would push silicon back up a performance level if AMD brought good competition. Well, AMD did and Nvidia didn't... though AMD hasn't brought any price competition, so there's no pressure on Nvidia at all...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I don't see a bright future for PC gaming if we keep on this trajectory. I've advocated for developers to target the performance level of the RTX 3070 in prior years and I will continue to leave this advice unaltered for the time being because that's all the performance that most consumers can access...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Coming back to the article sub-title: Is PC gaming dying? 
No, it'll never truly die but we're looking at a contraction and it's not looking very healthy...&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2025/05/so-whats-next-rate-of-advancement-in.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNQluatMHUF9_SxItA7G2dZNGVqLF5fmXfnbAr1-r5a7pszEWj-uJ-v_l059rZWI6OLhIedqIMzl8pT_gw8jsVbpPqAzilNMPByNr73l1Ev1PdkPqzP_p4dPLLRFHmdXLaURK2QuWCiJRRvTzpFXdZ946rBPCl6H9Ea4jcO7Zv_J0puTaqsgNuvnnvz8A/s72-w640-h360-c/Header.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-1521658335155094874</guid><pubDate>Mon, 30 Dec 2024 15:33:00 +0000</pubDate><atom:updated>2024-12-30T15:33:37.089+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">curmudgeon</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Looking back at 2024 and predictions for 2025...</title><description>&lt;div style="text-align: left;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKp2xDjUXnNVAt65wjK4SFvmJ0LtJltrAyWQ9MDIgLzARk76EiNmuKqxfBnyb9qaS41MW-1utRdTY_260UKpr6Lq4nY9vZcBI_5Laj2w9TwFPXBxAOhWEheergp3W5LHGtuzvMsLmNDRbY0o3gE6NmmO7hWGehtqRt-hvmze8Fn_c7RSmSZKmJiqIGt5I/s1420/Happy%20birthday!.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1078" data-original-width="1420" height="486" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKp2xDjUXnNVAt65wjK4SFvmJ0LtJltrAyWQ9MDIgLzARk76EiNmuKqxfBnyb9qaS41MW-1utRdTY_260UKpr6Lq4nY9vZcBI_5Laj2w9TwFPXBxAOhWEheergp3W5LHGtuzvMsLmNDRbY0o3gE6NmmO7hWGehtqRt-hvmze8Fn_c7RSmSZKmJiqIGt5I/w640-h486/Happy%20birthday!.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;It's, again, that time of year where we take stock of the past and also look towards the future. Last year was fairly busy for me, travelling the world and a huge increase at work resulting in less time to spend on this blog. Unfortunately, 2025 is looking just as busy in some ways given I am now tasked with increased responsibilities at work. So, that will drain more of my focus from the first of January, onwards... But, let's make hay whilst we can!&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;2024 Recap...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Last year, I achieved around a 40-50% accuracy rate. 
I was a bit disappointed with that, so I spent a lot more time thinking about the industry and where things were going, and I think that (aside from being lucky) that was time well-spent.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's see how I did! First off, let's take a trip to GPU land:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;The client (desktop) RTX 50 series from Nvidia will not release in 2024.&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;Intel will not launch Battlemage desktop GPUs this year.&amp;nbsp;&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;No new Radeon cards will launch.&amp;nbsp;&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I had a version of this post partially written in the early days of November and until that point this was 100% correct on all fronts. I was worried for a bit there with the &lt;a href="https://videocardz.com/newz/amd-radeon-rx-7650-gre-rumored-to-feature-navi-33-gpu-ces-2025-unveil-uncertain"&gt;RX 7600 GRE&lt;/a&gt; leaks and we almost managed to scrape through to the end of the year. But then Intel decided to finally deliver on &lt;i&gt;something&lt;/i&gt;&amp;nbsp;from their GPU division so that middle prediction was WRONG...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In summary: two correct, one wrong.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;If Zen 5 desktop launches this year, it will launch with an X3D part in the line-up.&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Wrong! I really thought this was going to be the launch - and we all saw how much the Ryzen 9000 launch needed the X3D parts to actually be good (or at least better)... However, after the lacklustre launch of the non-X3D parts, AMD launched the 9800X3D with a much smaller gap between the releases than we've seen thus far! So, this prediction will probably be good for the next generation of AMD processors.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;If Zen 5 launches, no new motherboard generation will launch. Prices will not drop on current line-ups.&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Correct! 
The 800 series motherboards are &lt;a href="https://youtu.be/vU6zEJpz6d8?si=e_UpBPOmzxZZMpPx"&gt;almost entirely rebadged 700 series &lt;/a&gt;- the same chipsets, just with USB4 tacked on (in some cases).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;PC ports/releases will continue to get better from the current low in terms of quality. 2023 was an outlier.&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I would say that, although you could argue both ways, I was incorrect on this one. Too many AAA games released with major issues (Silent Hill, Dragon's Dogma 2, [arguably] Stalker 2, etc. - the list goes on!).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;Microsoft will charge a nominal fee for Windows 10 security updates from client; maybe ~$25 per year.&amp;nbsp;&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I was right (&lt;a href="https://arstechnica.com/gadgets/2024/10/home-users-can-only-buy-one-year-of-extra-windows-10-updates-for-30-per-pc/"&gt;and closer than I ever thought it would be!&lt;/a&gt;): $30 for the first year for consumers... years two and three? Currently not offered, but we'll get to that in a bit.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Overall score: 57% correct! But if Intel hadn't come along and ruined the party, it would have been 71%... Booo! One more reason to dislike Intel.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Predictions for 2025...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I haven't had as much time to ponder the possibilities for next year but I'll try my best. What's hurting my percentages is the number&amp;nbsp;of predictions. One right or wrong answer swings the calculation by more than 10%. I should &lt;i&gt;try&lt;/i&gt;&amp;nbsp;and increase the number of predictions to get a better idea of my true prognostication powers but let's see how imaginative I can be...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;The RTX 5060 will be the first Nvidia GPU to feature 24 Gb (3GB) modules.&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Look, I'm pretty sure Nvidia and AMD have gotten the message that 8 GB GPUs are not going to cut it in the low-end of the range (I'm not speaking about entry-level). 
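&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As a rough illustration of why those 24 Gb (3 GB) modules are the key to fixing that - this is my own back-of-the-envelope arithmetic, not anything Nvidia has published - each GDDR module sits on its own 32-bit slice of the memory bus, so a card's framebuffer capacity is simply the number of 32-bit channels multiplied by the per-module density:&lt;/div&gt;&lt;pre&gt;
# Back-of-the-envelope sketch: VRAM capacity from bus width and module density.
# Assumes one GDDR module per 32-bit channel (ignoring clamshell designs).
def vram_capacity_gb(bus_width_bits, module_density_gbit):
    modules = bus_width_bits // 32               # one module per 32-bit channel
    return modules * (module_density_gbit / 8)   # gigabits per module, in gigabytes

for bus in (128, 192, 256):
    print(bus, "bit bus:",
          vram_capacity_gb(bus, 16), "GB with 16 Gb modules /",
          vram_capacity_gb(bus, 24), "GB with 24 Gb modules")

# 128 bit bus: 8.0 GB with 16 Gb modules / 12.0 GB with 24 Gb modules
# 192 bit bus: 12.0 GB with 16 Gb modules / 18.0 GB with 24 Gb modules
# 256 bit bus: 16.0 GB with 16 Gb modules / 24.0 GB with 24 Gb modules
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;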
However, we have to be realistic - what's holding them back is the price of components and the fact that memory makers have not started commercial production of the necessary next generation GDDR.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's finally happening, with SK-Hynix &lt;a href="https://www.extremetech.com/computing/sk-hynix-32gbs-gddr7-memory-is-entering-mass-production-in-q3"&gt;beginning mass production as of this last quarter&lt;/a&gt;&amp;nbsp;on both 16 and 24 Gb modules. Samsung is a bit behind, but is also&amp;nbsp;&lt;a href="https://news.samsung.com/global/samsung-develops-industrys-first-24gb-gddr7-dram-for-next-generation-ai-computing"&gt;bringing those 24 Gb modules to the fray&lt;/a&gt;, while Micron also &lt;a href="https://www.trendforce.com/news/2024/06/27/news-gddr7-emerging-as-a-new-driver-for-memory-industry/"&gt;appears to be operating on the same timeframe&lt;/a&gt;&amp;nbsp;with similar capacities&amp;nbsp;but targeting low-power (e.g. laptop) applications with their design*.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;*Or, at least, that's what I can tell from the press releases.&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/blockquote&gt;&lt;div&gt;With what I'm seeing from the various news reports and press releases, it looks like we'll have those 24 Gb modules somewhere around Q2 2025 but at slower speeds - which would line up well with a potential RTX 5060 release. Which leads nicely into the third prediction.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;However, there is another reason why I believe that the RTX 5060 will feature these 24 Gb modules, first and perhaps none of the other products in the Nvidia stack: the typical memory bus "widths" used in the various GPU performance/price tiers.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEjz6wRzCFlL3WCQEgSGnQ0Xoych_wjEE-lTq43eL5pNd76IGzA6LausFfP3S0elvPbJtWxMzmIzzl6TdN5EE_40FXHX13-CHywR7fQp4qbjEFSZnEYS6VIIKjR61LRh4O2zkOOzkgp7NQcjs7uG6Ruzf7ncM1lMkwT0sVFQ9nwJB1JGNqRtBLGOKq-GtoI" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="547" data-original-width="1117" height="314" src="https://blogger.googleusercontent.com/img/a/AVvXsEjz6wRzCFlL3WCQEgSGnQ0Xoych_wjEE-lTq43eL5pNd76IGzA6LausFfP3S0elvPbJtWxMzmIzzl6TdN5EE_40FXHX13-CHywR7fQp4qbjEFSZnEYS6VIIKjR61LRh4O2zkOOzkgp7NQcjs7uG6Ruzf7ncM1lMkwT0sVFQ9nwJB1JGNqRtBLGOKq-GtoI=w640-h314" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The memory bus "width" determines both the die size and capacity of VRAM... 
&lt;a href="https://videocardz.com/newz/nvidia-geforce-rtx-5070ti-series-specs-have-been-leaked-rtx-5070-with-12gb-memory"&gt;via Videocardz&lt;/a&gt;.&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;p&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;If we look at the rumoured specifications (&lt;a href="https://videocardz.com/newz/nvidia-geforce-rtx-5070ti-series-specs-have-been-leaked-rtx-5070-with-12gb-memory"&gt;helpfully compiled by Videocardz.com&lt;/a&gt;) we see the memory bus size (width) reduces at each tier of GPU in the stack - this is also true historically. The reason for this is also two-fold: cost of the GPU silicon, and cost of the memory modules and applying those modules to the circuit board that makes up the base of the graphics card.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Memory controllers are relatively large structures on the GPU die and so the ideal situation is that the fewer of them that need to be included - the better. The reason is that the size of the die is proportional to the cost of each individual die (i.e. more dies that can be produced from a single silicon wafer, the cheaper each individual die is as the wafer cost remains constant!). Additionally, smaller dies have an increased possibility of being either defect free or free of a defect which renders them unsalvageable in some way (e.g. &lt;a href="https://gamersnexus.net/guides/1140-silicon-die-bin-out-process-explained"&gt;the process of binning&lt;/a&gt;).&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;On the other hand, running the traces for the memory modules, the layering of the pads to attach the modules add expense to the design and production of the graphics card PCBs. In addition to the cost of each individual memory module and even the assembly line process to apply the modules also add additional cost to the final BOM and COGs of a graphics card.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Therefore, the ideal GPU is one which is very small but very performant and has very little memory attached to it!&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Unfortunately, for the product managers at all three GPU manufacturers (and Jensen's next Leather jacket purchase), games require data to be available for the GPU to operate, which means that they need VRAM. They also need a lot of parallel processing power in order to calculate all the required operations very quickly, which requires lots of transistors. So, these things mean a bigger GPU is "better" and a certain amount of VRAM is required to have consistent and high performance.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;That quantity varies based on the games/applications that are running on the GPU but the long and short of it is that modern graphically-demanding gaming titles require around 10 - 12 GB VRAM even at a resolution of 1080p, though, &lt;a href="https://hole-in-my-head.blogspot.com/2024/12/next-gen-pc-gaming-requirements-2024.html"&gt;as I pointed out last time&lt;/a&gt;, 1440p is quickly becoming the standard resolution for modern gamers and this "rule"&amp;nbsp;&lt;i&gt;definitely&lt;/i&gt;&amp;nbsp;applies to that resolution...&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;The big point here is that if you don't have enough VRAM, it's &lt;u&gt;bad&lt;/u&gt;. If you do? 
It doesn't matter how much &lt;i&gt;more&lt;/i&gt;&amp;nbsp;you have.&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;This means that for all of the cards above the RTX 5060 (the card which will most likely have a 128 bit memory bus), the GPU manufacturers will have enough VRAM for the majority of modern games at 1080p and 1440p that will be released over the coming 3-4 years when using 16 Gb (2 GB) modules. Above a 128 bit bus, that gives you a minimum of 10 GB VRAM to work with and that's &lt;i&gt;enough&lt;/i&gt;... Bringing us to the second prediction:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgS-nN7gqUt0r8k3dgJov4FkqkJpggArDXfJFThK-9UYZIOpJn8QmG-rKrx2enUIC2XAy4oCzVyGy-TvIn66ZK-BvJus8LEgl1xwOmPZJqWciuEZoKg85E7O5IDHbGA9TnTFThK7kLlLnh2JkqsPUnotJglLBkw_uGp3CvkHnBXard3oHdkZLHrO0sW-aM/s753/VRAM%201.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="753" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgS-nN7gqUt0r8k3dgJov4FkqkJpggArDXfJFThK-9UYZIOpJn8QmG-rKrx2enUIC2XAy4oCzVyGy-TvIn66ZK-BvJus8LEgl1xwOmPZJqWciuEZoKg85E7O5IDHbGA9TnTFThK7kLlLnh2JkqsPUnotJglLBkw_uGp3CvkHnBXard3oHdkZLHrO0sW-aM/w640-h362/VRAM%201.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The vast majority of games are unlikely to require more than 12 GB VRAM at recommended settings in the coming years...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;The RTX 5060 will be the &lt;i&gt;only &lt;/i&gt;RTX 50 series GPU that utilises the 24 Gb modules.&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Moving on to the third prediction:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;The desktop RTX 5060 will either release with 12 GB VRAM or have a variant with the 12 GB configuration.&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Those 24 Gb modules I mentioned above will enable a graphics card with a 128 bit bus to host a 12 GB framebuffer instead of the historic 8 GB we've been lamenting for around the last four years. This is MUCH needed at the low-end of each vendor's product stacks and especially on the targeted cheaper mid-range models that Nvidia likes to put out (e.g. 
RTX 4060 Ti) which, if fitted with a larger framebuffer (aka VRAM), &lt;a href="https://youtu.be/ecvuRvR8Uls?si=SWIiV0FB1RVI5ycZ"&gt;have been proven to perform better&lt;/a&gt; and &lt;a href="https://www.techspot.com/review/2714-nvidia-rtx-4060-ti-16gb/"&gt;output smoother sequential frametimes&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;RTX 50 series and RX 90 series will be unimpressive in the price to performance compared with the current generation. We will not get large performance gains at each price point - with the exception of the RTX 5090.&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is less of a prediction and more of a continuation of the trend for the last two generations - we're not getting better performance at the same price point anywhere below the top-end cards. &lt;a href="https://hole-in-my-head.blogspot.com/2023/10/rtx-40-series-aka-did-nvidia-jump-shark.html"&gt;Nvidia have the opportunity&lt;/a&gt; to make the RTX 50 series something special... but I'm 80% sure that they won't. They won't because they don't have to. Crypto- sorry, I meant AI is still selling GPUs and all of their most important wafer allocation is heading towards that endeavour. AMD and Intel are not competing at the high-end, so Nvidia can do what they please... and what they please is to not have to focus on the consumer gaming market any more than they need to.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;They will save that amazing performance uplift for when they &lt;i&gt;really&lt;/i&gt;&amp;nbsp;need it. (i.e. If either AMD or Intel are able to compete, Nvidia has a whole 20-30% extra performance uplift per non-top card tier to pull out of the hat).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixNeqSXu60fwL57VUuo4vUDwxklT976BDhwJuc-83_tm_LXM3MOpbfmgqsK2WP-fpkLcOC4-P7pcdtUEJqDwZ842h5cpe2ulZ7JksT_Lh6fiTQLrZ5iCp6QqrMaOMTGvcq_ueoG-ng4hrrADC6C7NDKi3teUAA2nbySRvZxJKQeA9HGmMOpzwKkhDzz68/s842/RX%207900%20XT_performance.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="317" data-original-width="842" height="241" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixNeqSXu60fwL57VUuo4vUDwxklT976BDhwJuc-83_tm_LXM3MOpbfmgqsK2WP-fpkLcOC4-P7pcdtUEJqDwZ842h5cpe2ulZ7JksT_Lh6fiTQLrZ5iCp6QqrMaOMTGvcq_ueoG-ng4hrrADC6C7NDKi3teUAA2nbySRvZxJKQeA9HGmMOpzwKkhDzz68/w640-h241/RX%207900%20XT_performance.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RX 9070 XT (rumoured top RDNA4 card) has the above leaked results. 
(&lt;a href="https://www.techpowerup.com/review/xfx-radeon-rx-7900-xtx-magnetic-air/33.html"&gt;FPS data from TechPowerUp&lt;/a&gt;)&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;RDNA4 will still not be an impressive uplift in terms of ray tracing ability and will not be as performant as the RTX 4080.&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Here's the thing, the &lt;a href="https://www.guru3d.com/story/rumor-initial-amd-radeon-rx-9070-xt-performance-leaks/"&gt;recent performance leaks&lt;/a&gt; &lt;a href="https://overclock3d.net/news/gpu-displays/amd-radeon-rx-9070-xt-ray-tracing-benchmark-leaks/"&gt;for the RX 9070 XT&lt;/a&gt; are not really making sense to me but people are going crazy for them. Let's get this out of the way: &lt;a href="https://hole-in-my-head.blogspot.com/2024/09/how-powerful-is-ps5-pro.html"&gt;Ray Tracing performance is a function of the Raster performance of any given GPU&lt;/a&gt;... If your GPU is able to perform RT with 85% efficiency, you will get approximately 85% of the fps of the rasterisation performance of the GPU.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Looking at the table above, the leaked performance has a ratio of 0.64 from the Port Royal score to the Timespy score. That's higher than the ratio of Ada Lovelace-based architecture RTX 40 series cards which hover around 0.62, with the exception of the RTX 4070 which is (in my opinion) power-starved. That's an impressive uplift when the RX 7000 series is hovering around a ratio of 0.52... Maybe not out of the realm of possibility, though.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, if we then look at actual real world fps averages, at 1080p and 1440p we're actually looking at a ratio of 0.70 for the RTX 40 series and 0.56 for the RX 7000 series. Nvidia gets a bigger performance uplift in many of these game titles in the real world.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What's mostly concerning to me is the comparisons between the RX 7900 XT and RTX 4070 Ti in the leaks. If an RX 7800 XT is able to almost match the raster performance of a RX 6900 XT with 20 CU less, but also match its performance in the synthetic Port Royal test but generally win-out in real-world testing how would a 64 CU part (the rumoured spec of the RX 9070 XT) win over it so handily?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Then we can move onto the RTX 4070 Super, a card which manages the same score as the RX 6900 XT in Timespy synthetic raster but which handily beats the RX 7900 GRE in real-world gaming at 1080p and matches it at 1440p, despite that card having a surplus of 2000 points over it!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The point I'm trying to make, here, is that Timespy and Port Royal scores do not match the same real world GPU performance hierarchy... 
So, these leaks don't mean &lt;i&gt;anything&lt;/i&gt;&amp;nbsp;when it comes to how the RX 9070 XT will perform on the track, so to speak.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;RDNA4 will not be impressive from a price to performance perspective.&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is the same prediction as above for Nvidia but aimed at AMD's line-up. The problem we have is that AMD seemingly can't think for themselves and have, effectively, no agency in this market. They want to only do what is necessary to continue developing their technology to keep them afloat and in the race but not to actually improve and compete. It's a fool's errand to expect them to offer something compelling and a card like the RX 7800 XT almost seems like an accident.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There is also another reason, which will be discussed in the next prediction:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;RDNA4 will initially only release as a mid-range product - no low end GPU will be present for the first half of 2025. The RX 7600/XT may not even see a release until 2026 but this is dependent on whether Nvidia launches the RTX 5060 or not during the year.&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There's a simple reason for this prediction: There's just &lt;i&gt;too damn&lt;/i&gt;&lt;i&gt; much&lt;/i&gt; inventory on the market! Seriously, while the RTX 4070 is basically out of stock (despite the non-GDDR6X variants releasing), everything else below the RTX 4070 Super is still in the market, hogging that space. It seems like the increased prices on the 4070 Ti and Ti Super indicate that these are also end-of-life (rather than in high demand), but comparing this situation with AMD shows that &lt;i&gt;all&lt;/i&gt;&amp;nbsp;of their cards are going for below MSRP and readily available at all performance tiers.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What is AMD going to do? Launch products which will destabilise the majority of their current line-up and thus devalue the inventory they have on the market? 
I don't believe even AMD are that stupid...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj82vWr4oKNJPsBHwmm0rGJj1uLmNjzb-VnTFt7GrcNU8O-Z3cvGZLH2sWke58a4vbe3OUbh-GADW1pozcflP_OS0fui9-_CPVAcoFeNoevL1WNJeF_pAmvRLAGjwXVnneR4ny09GnqgXyvqbIOMdDallE4Lx82VPPuzZvb5FrKvn40I2qfD5IrVU4Ac4o/s835/Price%20evolution.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="215" data-original-width="835" height="165" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj82vWr4oKNJPsBHwmm0rGJj1uLmNjzb-VnTFt7GrcNU8O-Z3cvGZLH2sWke58a4vbe3OUbh-GADW1pozcflP_OS0fui9-_CPVAcoFeNoevL1WNJeF_pAmvRLAGjwXVnneR4ny09GnqgXyvqbIOMdDallE4Lx82VPPuzZvb5FrKvn40I2qfD5IrVU4Ac4o/w640-h165/Price%20evolution.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Despite generally high prices, most GPUs have actually decreased from their release prices...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, that leaves us with the obvious - they need to release something to show they are keeping up with Nvidia. Nvidia is likely to only release the top cards. AMD's top-end RDNA4 card will likely only have RX 7900 XT performance; therefore, devaluing only one or two cards makes more sense than devaluing the whole stack.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;The RDNA 4 top card will release at around €649/$600.&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As I mentioned above, AMD can't undermine their swathes of inventory on the market and wouldn't want to. The RX 7900 XT and GRE are "salvaged" products - they're the dregs of the RX 7900 XTX - and destroying their market won't hurt the lower-end parts nor the aforementioned RX 7900 XTX's position in the stack.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This means that AMD can't price the part &lt;i&gt;too&lt;/i&gt;&amp;nbsp;low, which would rule out a $500 price-point. Plus, Nvidia (and thus AMD) have not given improvements in price-to-performance at the launch of the RTX 40 series, and are unlikely to do so for the RTX 50 series. Therefore, it's likely that AMD will follow the same strategy.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This will allow both companies to continue to sell their (in reality) poor value SKUs at the lower-end of the stack. 
Meanwhile, AMD can cut the price on the 7900 XTX a little, to help with sales but not cannibalise the profit too much...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;No one wins.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;SteamOS will make a return for DIY desktop gaming PCs... (Linux through the back door).&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;All the leaks surrounding Steam certified devices recently have me pining for the potential of SteamOS from back in the early 2010s. I'd love for that to become a reality in 2025, so I'm putting it on my wishlist, despite no real credence for it at this juncture!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Aaaaand, that's it! I could have come up with more but time just was not on my side! I've been writing this article on and off for over a month, now but with work, a promotion, family, and holidays I just don't have a lot of spare time to sit, think and produce &lt;span style="color: #274e13; font-style: italic;"&gt;#content&lt;/span&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I wish you all a very happy Christmas and pleasant New Year!&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/12/looking-back-at-2024-and-predictions.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKp2xDjUXnNVAt65wjK4SFvmJ0LtJltrAyWQ9MDIgLzARk76EiNmuKqxfBnyb9qaS41MW-1utRdTY_260UKpr6Lq4nY9vZcBI_5Laj2w9TwFPXBxAOhWEheergp3W5LHGtuzvMsLmNDRbY0o3gE6NmmO7hWGehtqRt-hvmze8Fn_c7RSmSZKmJiqIGt5I/s72-w640-h486-c/Happy%20birthday!.png" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-6265967432653572303</guid><pubDate>Tue, 10 Dec 2024 19:07:00 +0000</pubDate><atom:updated>2024-12-10T19:07:13.440+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">screenestate</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Next Gen PC gaming requirements (2024 update)</title><description>&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-UZqT2U4WCN3FdPI0bwOh9D4s2pSqAzfMN6BNYmQjAH9pBT615jhg_4zPDcZBTbbGg09IP3NAhra34c_vWHr4K63KMIAkaoSoDP7qm-OnmgAtgwdBflFT951BK3E5xy9f-H7EM1aRtaGMQ8lfN0nT8UAJ3ShfE-fCqyGuukFxbVcDOplrqE6J9LmwuMs/s1024/Title_2024.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" 
data-original-height="681" data-original-width="1024" height="426" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-UZqT2U4WCN3FdPI0bwOh9D4s2pSqAzfMN6BNYmQjAH9pBT615jhg_4zPDcZBTbbGg09IP3NAhra34c_vWHr4K63KMIAkaoSoDP7qm-OnmgAtgwdBflFT951BK3E5xy9f-H7EM1aRtaGMQ8lfN0nT8UAJ3ShfE-fCqyGuukFxbVcDOplrqE6J9LmwuMs/w640-h426/Title_2024.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;This year, I'm on time...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Snow is falling, winds are blowing, and it's that time to look back over the year and trend the night away! &lt;a href="https://hole-in-my-head.blogspot.com/2024/05/next-gen-pc-gaming-requirements-2023.html?m=1"&gt;2023 was a bit of a disappointment&lt;/a&gt; but I've got a good feeling about this year*...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*Okay, let's be honest, I've already seen the data!&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;In Brief...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'm not going to rehash the ins and outs of how and why I do the trending, you can visit any of the &lt;a href="https://hole-in-my-head.blogspot.com/2020/12/next-gen-pc-gaming-requirements-part-4.html"&gt;prior years'&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2021/12/next-gen-pc-gaming-requirements-2021.html"&gt;assessments&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2023/01/next-gen-pc-gaming-requirements-2022.html"&gt;for&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2024/05/next-gen-pc-gaming-requirements-2023.html"&gt;that&lt;/a&gt;. What's important to note is that this isn't a definitive measurement, there are caveats and limitations on the data gathered.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As usual, all data is &lt;a href="https://docs.google.com/spreadsheets/d/1O1_0bsnKrmazhoxtyJzuzXRKGFTIjvnRheF3nlVFDu4/edit?usp=sharing"&gt;available, here&lt;/a&gt;. And with that, on with the show!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Safety Dance...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If last year felt like a regression or a disappointment, 2024 is a welcome return to form. 
Way back in 2020, when I began this series, the latest generation of consoles hit and in retrospect, it's now possible to see that in years where console hardware increments itself, there are spec bumps across the board.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Previously, I had operated under the assumption that required specifications for games worked along a sort of "long-tail" approach. Whereby, hardware is released and developers predict future performance of future hardware releases from when they start development. However, I'm coming around on this assumption. Sure, it still occurs - it's only natural to predict - but it is becoming apparent in this trending data that developers "react" to hardware launches in real-time, too*.&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*Though this didn't appear to be a trend when the PS4 and Xbox One released...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;We'll come back to that point later but I wanted to pre-emptively frame these analyses with that thought in mind!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlUdiKJ1CB3SUKy9aq5wbmzdB9XJtWaSZYmO8wCt6Ub7LhzpOfYc3J1bQJl1lsVSDYX7J5npmS1BFTLclGRrmi_7UIcROnSAIxwTJDv8DIY7eHOSk1fxQMkgXyTrdRkRfeBAcYUjRRKcjPHpXpIiKGtr-vA2JwqaewLwk8aNpl9u-BWSw1rNb7wBj4JAo/s749/CPU%20perf%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="749" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlUdiKJ1CB3SUKy9aq5wbmzdB9XJtWaSZYmO8wCt6Ub7LhzpOfYc3J1bQJl1lsVSDYX7J5npmS1BFTLclGRrmi_7UIcROnSAIxwTJDv8DIY7eHOSk1fxQMkgXyTrdRkRfeBAcYUjRRKcjPHpXpIiKGtr-vA2JwqaewLwk8aNpl9u-BWSw1rNb7wBj4JAo/w640-h364/CPU%20perf%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Last year, I was predicting around the 10600K-level of performance for both CPU single and multi-core performance in game requirements in 2025. What we've seen from this year is a large uptick in those predictions, with the single core performance exceeding a 10600K - which is around that of an i9-9900K - and multi-core performance matching that of a Ryzen 7 3700X.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The big takeaway from this is that games &lt;i&gt;are&lt;/i&gt;&amp;nbsp;getting more cognisant of multi-threading and taking advantage of those threads, even if it feels slow from a consumer perspective. 
UE5 is a particular driver of this trend and, as games that have adopted that technology are released, we will see multi-core performance requirements increase further.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTHBCHIy7vER99FiQno3msjiufel_RenONUDT383YJdY6TJTPB_yK_xw_w7mMAqIXc4WY3EBjgyVQm1H0ynQKScubhcVxifYgOkMLLTt6qaQvkt-i92of0DUAY6vC3ASkrehdK_kxtBDtT_i9fMLaVS73ZYIXhAhVKbpy9744r0zWbuUkz2aYg8XGeIpI/s751/CPU%20perf%203.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="751" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTHBCHIy7vER99FiQno3msjiufel_RenONUDT383YJdY6TJTPB_yK_xw_w7mMAqIXc4WY3EBjgyVQm1H0ynQKScubhcVxifYgOkMLLTt6qaQvkt-i92of0DUAY6vC3ASkrehdK_kxtBDtT_i9fMLaVS73ZYIXhAhVKbpy9744r0zWbuUkz2aYg8XGeIpI/w640-h362/CPU%20perf%203.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;What we &lt;i&gt;can&lt;/i&gt;&amp;nbsp;see is that there is not as large of a correlation between multi-core performance and hardware releases as there is with the single-core performance. That's pretty interesting but, I think, speaks more to the dominance of single-thread throughput in computing applications, in general. Yes, more threads help, but they're not the driving force in the need to upgrade your PC hardware. Something that Hardware Unboxed has been relaying with &lt;a href="https://youtu.be/fG6PB16gm78?t=901"&gt;excruciating directness&lt;/a&gt; &lt;a href="https://www.youtube.com/watch?v=Tp-phTJBqME&amp;amp;ab_channel=HardwareUnboxed"&gt;for a&lt;/a&gt; &lt;a href="https://www.youtube.com/watch?v=0mO4op3bL90&amp;amp;ab_channel=HardwareUnboxed"&gt;long time&lt;/a&gt;... and something that &lt;a href="https://hole-in-my-head.blogspot.com/2020/12/in-defence-of-cores-and-future-of-gaming.html"&gt;I've also delved into&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Saying that, we can trend the requirements for cores/threads on the CPU and I was obviously predicting &lt;i&gt;way too hard&lt;/i&gt;&amp;nbsp;back in 2020, when I thought the rise of the multicore CPU would push things further than it has in reality. In this reality, we are still able to purchase six core CPUs at the low end of the spectrum and we haven't yet migrated to eight cores as the sort of minimum generational product like we did when we jettisoned four cores and adopted six...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Both my predictions for the mode of cores and threads are beyond what developers are requiring: further supporting the point mentioned above - overall CPU performance matters. The big development we see in 2024 is that the &lt;i&gt;average&lt;/i&gt;&amp;nbsp;cores and threads have once again risen above the modes of those respective metrics. This indicates that there &lt;i&gt;is &lt;/i&gt;an upward trend; it's just not large enough to affect the value in a discrete sense. 
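&lt;/div&gt;&lt;div style="text-align: justify;"&gt;To make that mode-versus-average point concrete with a toy example (these numbers are invented for illustration - they are not my survey data): a handful of titles asking for eight cores pulls the average upward well before the most common requirement budges from six.&lt;/div&gt;&lt;pre&gt;
# Toy example with invented numbers (not my actual survey data): a few games
# asking for more cores lifts the average before the mode moves at all.
from statistics import mean, mode

required_cores_2023 = [6, 6, 6, 6, 6, 6, 4, 6]
required_cores_2024 = [6, 6, 6, 8, 6, 8, 6, 8]

for year, cores in (("2023", required_cores_2023), ("2024", required_cores_2024)):
    print(year, "mode:", mode(cores), "average:", round(mean(cores), 1))

# 2023 mode: 6 average: 5.8
# 2024 mode: 6 average: 6.8
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;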
Again, that's a large jump for the average thread increase.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj01BXPsmu-liTE4T-qCQPEZHXkDhyphenhyphenra26lfIBKCrgt5ea4N8ElxRxJ-M3Lx1rUwdBfpQ-CbCzQ5QSKHzemyf7lqcsnyfqJ-P1YGMPCMcF1bAbwxSXW9g73zrRtKIE_Qle-_o9VzshKdIX6I1WIYhGKBLIMIMEtLSExPjahpIuPnMsx3s9wMCQ2xisQVes/s751/CPU%20cores.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="427" data-original-width="751" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj01BXPsmu-liTE4T-qCQPEZHXkDhyphenhyphenra26lfIBKCrgt5ea4N8ElxRxJ-M3Lx1rUwdBfpQ-CbCzQ5QSKHzemyf7lqcsnyfqJ-P1YGMPCMcF1bAbwxSXW9g73zrRtKIE_Qle-_o9VzshKdIX6I1WIYhGKBLIMIMEtLSExPjahpIuPnMsx3s9wMCQ2xisQVes/w640-h364/CPU%20cores.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Looking at this data, I think we might be able to predict that the time of the six core CPU is coming to a close within the next 2 years. We may even see AMD's next generation not even have a six core part. Intel has already made this jump as of the &lt;a href="https://www.intel.com/content/www/us/en/products/details/processors/core-ultra.html"&gt;Arrow Lake (Core Ultra) generation&lt;/a&gt; of products, though it seems less necessary on AMD's part since they still have a performance advantage in gaming...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Getting back to that point about developers responding to generational console hardware increments, there is a clear lull in the year before the release of new hardware and then a jump in the performance the year of release; 2016, 2020, and 2024. Prior to this, at least in my trending data, there is no correlation. 
We can even observe a "double hump" in 2016 - 2017 to cover the staggered release of both the Pro and One X.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'm left wondering whether the scalability of modern game engines has resulted in a situation where developers are able to cram in more things towards the end of development when the new console hardware is shown (or disclosed) to them or whether this is a huge coincidence?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;All, hail to the king...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI69b5xjJJnQTZ9IQrLkQOFGWGeN07UWWjO8viwDa7SFEaUvawdd7ow8YKynTiqtxrHJGPlajZkxk0gQtvJoLxJqdlYK6Te8HpHyV975JgSBRFrGOQYqPtk0M5gnRUDK6YKKJ07CaaXZA0PitB_-xI2WCIiqZJQo9qNHay6SnUhsEcM1iFfkyyyey6SLs/s753/GPU%20perf%201_comp.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="427" data-original-width="753" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI69b5xjJJnQTZ9IQrLkQOFGWGeN07UWWjO8viwDa7SFEaUvawdd7ow8YKynTiqtxrHJGPlajZkxk0gQtvJoLxJqdlYK6Te8HpHyV975JgSBRFrGOQYqPtk0M5gnRUDK6YKKJ07CaaXZA0PitB_-xI2WCIiqZJQo9qNHay6SnUhsEcM1iFfkyyyey6SLs/w640-h362/GPU%20perf%201_comp.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The 'needle' has barely moved between 2023 and 2024...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;GPU performance requirements show no such correlation and that may be because GPU performance still has not actually stagnated due to various reasons and because the requirements for GPU performance are also not stagnating. This is an important distinction to make!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;GPU performance (or perhaps more accurately, &lt;i&gt;&lt;u&gt;perceptual GPU performance&lt;/u&gt;&lt;/i&gt;) has been steadily&amp;nbsp; advancing through a combination of more powerful architectural designs, smaller process nodes, and the increasing use of upscaling and machine learning techniques.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On the other hand, the requirements of GPU performance are also not stagnating. This is through a combination of ever-expanding graphical feature sets (e.g. ray tracing, DX12), GPU compute, and, lastly (and finally) a resolution shift.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Back in 2021, I looked at &lt;a href="https://hole-in-my-head.blogspot.com/2021/08/the-relative-value-of-gpus-over-last-10.html"&gt;the relative value of GPUs throughout the years&lt;/a&gt;. I compared value within generations, between generations, and within the same class. 
However, one of the salient points I made was this:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;"Graphics cards and their performance are really relics of their time: without games to push them and without the popular resolutions of the day being present, their performance and price ratios are meaningless."&lt;/span&gt;&lt;/blockquote&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I noted that we had a pretty static resolution target for around ten years of 1024x768 from around 2000 to 2011, which was followed by 1920x1080 in around 2010-2011*. 1080p has pretty much been the main focus of both reviewer's and consumers until this year, 2024. Yes, many benchmarking sites and reviewers also include 1440p and 4K data in their summaries but all GPUs released have their baseline&amp;nbsp; testing set at 1080p to this day.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*I'm speaking about the benchmarking scene, here. Not what's in the most adventurous and well-heeled consumers' hands!&lt;/span&gt;&lt;/blockquote&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, in 2023 the VRAM question really took off in consumers' minds and was answered &lt;a href="https://www.youtube.com/watch?v=oPqNy21ONuw&amp;amp;ab_channel=HardwareUnboxed"&gt;by reviewers&lt;/a&gt; in increasing frequency. Yes, these questions &lt;a href="https://youtu.be/Jk1glH5p0u8?si=oN_PVUd5Shf5gQY7&amp;amp;t=574"&gt;existed &lt;i&gt;before&lt;/i&gt;&lt;/a&gt;. But, let's say &lt;a href="https://youtu.be/FRZdRS9K9LU?si=YS63P_xAMq1sSCbT"&gt;it wasn't&lt;/a&gt; &lt;a href="https://youtu.be/Gd1pzPgLlIY?si=41LI6Ary5-L5Mtqq"&gt;as big &lt;/a&gt;&lt;a href="https://www.youtube.com/watch?v=trBG9KAy2vA&amp;amp;ab_channel=AncientGameplays"&gt;of a deal&lt;/a&gt; &lt;a href="https://youtu.be/rO1NwLGWe1g?si=8ixN5ZHFu4VV5_2q"&gt;as consistently&lt;/a&gt; as&lt;a href="https://youtu.be/iwVpJm6Hir0?si=95Nen9UT9qHIngHO"&gt; it became &lt;/a&gt;(which is fair enough!).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The reason this is important is because, aside from increased pressure by improved graphical features and texture detail, it is &lt;a href="https://youtu.be/S10NnAhknt0?si=HKmVBliISJX78Ani"&gt;now becoming apparent&lt;/a&gt; that we are in the midst of the real shift to a resolution of 2560x1440 being the baseline target for consumers. Yes, monitors of decent quality have finally become available at a low-enough price that 1080p monitors are no longer even really a consideration except for the most ultra-budget cases. 
This is going to put a large amount of increased pressure on GPU manufacturers for increased VRAM capacity.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1wmF5MtJxIhKYi8rHjiSs1ObK_WyuZXRmE_APxzYNdwN9NZ7WBjiTq9AAvY9OK-OQ1soIe2cst4pT1vX8Ony28rpg0L_j4DWVUMBdtJUZxcEs77Tl-oLujr3PmtTZdW7rTZHmrcB-4irwnc8QxYGTwM11eNi3YNwnnPXCmeKgdXvuEUNUUgWfhBgDPeI/s747/GPU%20perf%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="747" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1wmF5MtJxIhKYi8rHjiSs1ObK_WyuZXRmE_APxzYNdwN9NZ7WBjiTq9AAvY9OK-OQ1soIe2cst4pT1vX8Ony28rpg0L_j4DWVUMBdtJUZxcEs77Tl-oLujr3PmtTZdW7rTZHmrcB-4irwnc8QxYGTwM11eNi3YNwnnPXCmeKgdXvuEUNUUgWfhBgDPeI/w640-h364/GPU%20perf%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;This is interesting for two reasons: 1) VRAM manufacturers are &lt;a href="https://videocardz.com/newz/first-generation-of-gddr7-graphics-cards-sticking-to-16gbit-2gb-modules-3gb-on-roadmaps"&gt;&lt;i&gt;still&lt;/i&gt;&amp;nbsp;dragging their feet&lt;/a&gt; on &lt;a href="https://wccftech.com/micron-roadmap-gddr7-memory-next-gen-nvidia-gpus-up-to-24gb-32-gbps-dies-in-2024-24gb-36-gbps-in-2026/"&gt;both faster&lt;/a&gt; and &lt;a href="https://www.tweaktown.com/news/101949/samsung-to-show-off-its-42-5gbps-24gb-gddr7-memory-modules-at-isscc-2025-in-february/index.html"&gt;higher capacity&lt;/a&gt; GDDR modules; and 2) the &lt;a href="https://www.techradar.com/computing/gpu/great-news-for-next-gen-gpus-nvidias-rtx-5060-may-use-less-power-than-the-rtx-4060"&gt;rumoured low&lt;/a&gt; and &lt;a href="https://overclock3d.net/news/gpu-displays/nvidias-planned-12gb-rtx-5070-plan-is-a-mistake/"&gt;mid-range&lt;/a&gt; RTX 50 series cards are still supposedly shipping with 8 and 12 GB of VRAM, respectively, next year. Though, I am hesitant to buy-in on that RTX 5060 rumour - which is mostly based on leaks relating to the laptop configuration. I'm hoping that the desktop RTX 5060 will use 24 Gb modules which would grant a total of 12 GB VRAM - matching that of the rumoured RTX 5070, though with a narrower memory bandwidth.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, we are &lt;a href="https://bethesda.net/en/article/3Od8RFBcAOGNxNDlD801Rp/indiana-jones-and-the-great-circle-pc-specs"&gt;already seeing&lt;/a&gt; that at 1440p, 12 GB equipped GPUs are the recommended configuration for modern, demanding titles and this is only likely to accelerate as we go forward. But I'm getting ahead of myself - we'll address VRAM trending later in this article.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Getting back to actual performance numbers, the main problem with moving to a new resolution standard/target is that the GPUs are really not ready to handle it - let alone at logical price points! 
We had this same pain back in the transition to 1080p, so, it's not unexpected but it is a little disappointing. Both Nvidia and AMD have known for a long time that 1440p was "coming" but neither have provisioned for it in their low-end cards. Intel's newly released B580 is the first card in that segment to actually do so...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, we're probably looking forward to these low-end cards from Nvidia (if not AMD, as well) becoming sub-60 fps cards at native resolution* in 1440p in demanding titles.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*Line up and make your arguments in the comments below...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3VbrFXK9aZ4zs-aWpDpdL3Ml09aaAvufpyFO2AJuLnsXAkzq21j0dbSqDtmx4EJ3TS7YmQJWznYDfUSV0EblfdZYFVrD-nti45_GCJKwm1hxUdL_ZqKTiZ4XC8LivOsKhLFI3Mf63xEKLljiBvN_H0V9TJnVJ5N1j8ETVn-MNRi3cXkP7b0FHsPe9uuc/s461/Demanding%20game%2060%20series.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="457" data-original-width="461" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3VbrFXK9aZ4zs-aWpDpdL3Ml09aaAvufpyFO2AJuLnsXAkzq21j0dbSqDtmx4EJ3TS7YmQJWznYDfUSV0EblfdZYFVrD-nti45_GCJKwm1hxUdL_ZqKTiZ4XC8LivOsKhLFI3Mf63xEKLljiBvN_H0V9TJnVJ5N1j8ETVn-MNRi3cXkP7b0FHsPe9uuc/s16000/Demanding%20game%2060%20series.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;This data is generalised and shows approximate performance for a demanding game at point of release for each card...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Returning from my pontificating, GPU performance is right on track with where it was last year and only slightly off (reduced) from the predictions in 2020, where I had marked 2024 being the year requiring RTX 2080 Super levels of performance, we're now expecting that in 2025.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There's not much to analyse, here, other than to note that GPU requirements inexorably increase year after year in a rather nice curve...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Rock around the Christmas Tree...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNtaH10bRzj4GdrywA7FLAOWp97l6zGvL-VJQ0zOyfLOugiR6dYU0OrYrQhp8aNwjbrQqXHLkisLVJR1j99ugi3s9kOCT67cRRZ_etnE5qxhcP-qBdLseER7ySLg5w6Xi6peRkwXnP1jprBrfS5HJcqphxeNmOfL5V7deQMilQFOQpDFQc3ENfsY1TsRk/s753/RAM%201.PNG" 
style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="753" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNtaH10bRzj4GdrywA7FLAOWp97l6zGvL-VJQ0zOyfLOugiR6dYU0OrYrQhp8aNwjbrQqXHLkisLVJR1j99ugi3s9kOCT67cRRZ_etnE5qxhcP-qBdLseER7ySLg5w6Xi6peRkwXnP1jprBrfS5HJcqphxeNmOfL5V7deQMilQFOQpDFQc3ENfsY1TsRk/w640-h362/RAM%201.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Like Mt. Everest pushing through the clouds...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;This is where things get a little exciting for me... sad as that may seem.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Last year, I lamented the stagnation we were seeing in the RAM requirements of games and, if you look at that chart above, you'll see that we've been fannying around with 8 and 16 GB of system memory since 2016, with 12 GB giving a good show for all its uneven glory! But, wait, what's this? Doth mine eyes deceiveth me?!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Okay, I made a choice in this presentation of data for third place for two reasons: 1) you can't buy a sensible configuration of 12 GB of RAM! and 2) if we're looking to increase from 16 GB, why would we be looking back at a lower quantity of RAM?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The fact of the matter is that we have suddenly, jarringly, observed a rather large uptick in system memory requirements in 2024. It's something that I've been calling for, for years and now it's finally coming to pass, I feel a tear welling from the corner of my eye. 
Almost 11 % of the games I tracked this year called for 32 GB of RAM - that's a bigger jump than when we first saw 16 GB requirements back in 2014, at around 3 - 4 %.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I'm pretty happy it's finally happening...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcbOk_GAzRUzDezqGwqWlEnfUFNk34DlUGbS2j0JYjbtIMA5qWZO8a6W7uIMyoUmr2FfmTu8Vcp_EPp5VoDgObRzbmrF5egUUr7f-ySVzvPDBrVcgWFBfQeA8IxCHoyhjxYjMQoqtHuP2-mttEqQpvbJbqknJl1gJplD6stbXZgYu9IFHqrVigX8Scd2A/s753/VRAM%201.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="753" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcbOk_GAzRUzDezqGwqWlEnfUFNk34DlUGbS2j0JYjbtIMA5qWZO8a6W7uIMyoUmr2FfmTu8Vcp_EPp5VoDgObRzbmrF5egUUr7f-ySVzvPDBrVcgWFBfQeA8IxCHoyhjxYjMQoqtHuP2-mttEqQpvbJbqknJl1gJplD6stbXZgYu9IFHqrVigX8Scd2A/w640-h362/VRAM%201.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;However, the story surrounding VRAM isn't as pretty and, as I noted previously, developers are stuck between a rock and a hard place because of the lacklustre provisions from GPU manufacturers.&amp;nbsp; The 8 GB requirement sits at a whopping 86 % of the tracked games and this is the highest proportion that any VRAM requirement has risen to. Previously, the highest requested was a 1 GB framebuffer back in 2013 with 57 %...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That said, seeing a small jump in 3rd place to 12 GB shows that the tide &lt;i&gt;is&lt;/i&gt;&amp;nbsp;turning and both AMD and Nvidia need to get on board with providing more VRAM to consumers at the low-end in their product stacks.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTNKtfyE6axdlsXC_HSr0JzdksQeEmmB-nfEcT1aFMURMyM6lYnsWbTVLHPoNnj2z23C1AzCblVyNg6Dut2sTbclGuXtCBdnDwBznQcukkFp_vATBDFZmcWuKxhrN9aPGhEFWb0Q4yIyl24_zmwWZzNutjwNvBXzWAss589BUCc7ihyphenhyphenk6UKEEqSLGrYr4/s1807/RAM%20table%203.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="429" data-original-width="1807" height="152" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTNKtfyE6axdlsXC_HSr0JzdksQeEmmB-nfEcT1aFMURMyM6lYnsWbTVLHPoNnj2z23C1AzCblVyNg6Dut2sTbclGuXtCBdnDwBznQcukkFp_vATBDFZmcWuKxhrN9aPGhEFWb0Q4yIyl24_zmwWZzNutjwNvBXzWAss589BUCc7ihyphenhyphenk6UKEEqSLGrYr4/w640-h152/RAM%20table%203.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Yes, yes! I could have chosen 12 or 24 GB of RAM in the graphs above... 
So, sue me!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Console Comparisons...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We can also take a look at the relative performance increase of the required PC specification for CPU in the year of each console hardware launch.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We're seeing the single- and multi-core performance nearing a similar delta that was observed during the transition to the PS4 Pro and Xbox One X. We're not quite there, yet but we're likely to close the gap in the next year.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In terms of trending there is something that I'm not sure I've pointed out before but it's that we were getting a new piece of console hardware every time we reached 1.5x the performance in the year of the last hardware release. Since we have no real CPU increase in the current gen, that pattern doesn't appear to be holding for the release of the PS5 Pro...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGDX9Fhk7qxaxAE-c_SP19-l9OUHl8fJNSSbd469ftgpAOEA1cvGbSbFhU56pT6nQ_rJGCuOzZr1V7sOeOmmhlOTZSF8gv-PdJpzcxKYp__u79XBkqis5T4A__W8QzmD_SoWe4kQbLjZRkbFaY_lcriRNIdzF1mcE0eS8OYQh7-PUuVvm5ii26AMvxtdw/s789/CPU%20perf%204.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="429" data-original-width="789" height="348" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGDX9Fhk7qxaxAE-c_SP19-l9OUHl8fJNSSbd469ftgpAOEA1cvGbSbFhU56pT6nQ_rJGCuOzZr1V7sOeOmmhlOTZSF8gv-PdJpzcxKYp__u79XBkqis5T4A__W8QzmD_SoWe4kQbLjZRkbFaY_lcriRNIdzF1mcE0eS8OYQh7-PUuVvm5ii26AMvxtdw/w640-h348/CPU%20perf%204.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;CPU performance, relative to the current most performant console (not including current year of release)...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;On the other hand, the same comparison between the performance increase relative to that of the average GPU requirement in the year of each console hardware release shows a continued downward trend as the &lt;a href="https://hole-in-my-head.blogspot.com/2022/02/the-rate-of-advancement-in-gaming.html"&gt;GPU performance gained per generation at the low-end becomes smaller and smaller&lt;/a&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Believe it or not, but this data flies in the face of gamers claiming that system requirements are "getting out of hand" or "unoptimised games!". 
The fact is that developers are requiring less and less from gamers in terms of GPU performance as time goes on... (Not that they can actually demand more since gamers can't buy more performance - but that's another discussion!)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWdMN3aZCfCB-sEpr4mVtmc_XYJpT0jOZKNWlwCt3b5PTvL_HeZ7O2s89DBTNqFUP4EodtTdevuT3kQ38yaW9SNyFs_TVABDxdbo3VwEaF_xt2CbTq0m8SZnbpowq3WplssmzkcCA8hwxBfXqz9Ybq_XbdTDO_rPbvP78mBjONCM1TYgsWe3P2VtCvpc4/s747/GPU%20perf%203.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="747" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWdMN3aZCfCB-sEpr4mVtmc_XYJpT0jOZKNWlwCt3b5PTvL_HeZ7O2s89DBTNqFUP4EodtTdevuT3kQ38yaW9SNyFs_TVABDxdbo3VwEaF_xt2CbTq0m8SZnbpowq3WplssmzkcCA8hwxBfXqz9Ybq_XbdTDO_rPbvP78mBjONCM1TYgsWe3P2VtCvpc4/w640-h364/GPU%20perf%203.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;You're getting less and less from GPU manufacturers over the years, so developers are requiring less and less from you...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br style="text-align: left;" /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Another interesting angle is the comparison with the current most powerful console. In 2024, that's the PS5 Pro... What's interesting here is that the PS5 Pro hasn't provided any real performance uplift over the base PS5 in the CPU department, meaning that the delta in CPU performance is going to keep increasing until the next gen consoles release. 
At the moment, that delta is very small and it makes sense given that the CPU in the current generation of consoles is more equivalent to a desktop part than any before it since the mid-2000s.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The comparison with the GPU requirements relative to the PS5 Pro obviously takes a dive but I'm anticipating a similarly quick recovery to the one we saw after the introduction of the PS4 Pro - games are getting more graphically demanding and, despite the consoles being held back by both the Xbox Series S and "lower than desktop low" settings, recommended requirements on the PC are still increasing.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhk4JcPh3BxjBCRwXLN2jQa2kQPv3TvU0bRMnmEEt_RduYrBhB6avO3KAxzAik3KwbLKRKE5A3YhkVTqyANCnWIDqVASgbRShvjEQcKpCu6Rh1HEDhsvI9lCS_cRHffP4FAvTHLR0ikZMtd0QHkUQpBjfxdBvz0NdJEPtHQ5x8ROYH838OlLU8VRaxQqXM/s785/Console%20comparison.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="429" data-original-width="785" height="350" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhk4JcPh3BxjBCRwXLN2jQa2kQPv3TvU0bRMnmEEt_RduYrBhB6avO3KAxzAik3KwbLKRKE5A3YhkVTqyANCnWIDqVASgbRShvjEQcKpCu6Rh1HEDhsvI9lCS_cRHffP4FAvTHLR0ikZMtd0QHkUQpBjfxdBvz0NdJEPtHQ5x8ROYH838OlLU8VRaxQqXM/w640-h350/Console%20comparison.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;This chart isn't surprising but it does show how 'good' and 'balanced' the current generation of consoles really was in a technological sense...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Wrapping Up...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Well, that's it - once again! I hope you found this installment of the yearly trending interesting and perhaps even enlightening!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;One thing I want to add over the coming weeks and months is new predictions for the next few years. Yes, we still haven't reached the cut-off period but I think it'll be interesting to derive the future afresh, given all the data we currently have amassed.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;While I didn't show it above, the yearly RAM requirements prediction was that 3rd place would be 32 GB and 2nd place would be 24 GB. Those predictions are reversed, which actually makes sense given RAM configurations and prices as we currently know them. Meanwhile, my VRAM predictions are now a little off: in 2024, I was predicting 10 GB would be 2nd most required while it remains at 6 GB... 
So, there's some tweaking to perform!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/12/next-gen-pc-gaming-requirements-2024.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-UZqT2U4WCN3FdPI0bwOh9D4s2pSqAzfMN6BNYmQjAH9pBT615jhg_4zPDcZBTbbGg09IP3NAhra34c_vWHr4K63KMIAkaoSoDP7qm-OnmgAtgwdBflFT951BK3E5xy9f-H7EM1aRtaGMQ8lfN0nT8UAJ3ShfE-fCqyGuukFxbVcDOplrqE6J9LmwuMs/s72-w640-h426-c/Title_2024.jpg" width="72"/><thr:total>2</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-29234950660592657</guid><pubDate>Fri, 22 Nov 2024 17:59:00 +0000</pubDate><atom:updated>2024-11-22T17:59:10.602+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">Roundup</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>How CPU-limited IS a modern mid-range PC...? (Part 2)</title><description>&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxzK5y8OG5DrBFP9bJajP90GH7PpIYtz1_V9fYDcbQyRA68h8Xa6QHeY2NWoDBLtD7wHcYNtn5KvQYqB5xULZJy0ueAClezlT8hyN_JlrNzQiZitlOTXR74vhdWpuN_qKOdETGTgpZNlNB5V7tf2vRinueKnkD70HJm40_N_nKe5YS7VoHiYTqrrVesXI/s1920/Header.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxzK5y8OG5DrBFP9bJajP90GH7PpIYtz1_V9fYDcbQyRA68h8Xa6QHeY2NWoDBLtD7wHcYNtn5KvQYqB5xULZJy0ueAClezlT8hyN_JlrNzQiZitlOTXR74vhdWpuN_qKOdETGTgpZNlNB5V7tf2vRinueKnkD70HJm40_N_nKe5YS7VoHiYTqrrVesXI/w640-h360/Header.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;I'm always interested in how mid-range PC components perform in games. Previously, I explored this from the &lt;a href="https://hole-in-my-head.blogspot.com/2024/07/how-cpu-limited-is-modern-mid-range-pc.html"&gt;standpoint of relative performance&lt;/a&gt; on each CPU I had available to test. I came to the conclusion that, even though some games will be CPU-limited by the more powerful mid-range GPUs I tested with, other, more graphically-demanding titles, will still benefit from the stronger GPUs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I also noted that, in some titles, memory and PCIe bandwidth were also very important. 
So, today, I wanted to take a look at some of the various situations I can pull out of the data I generated back then...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Getting over it...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The test systems are as described &lt;a href="https://hole-in-my-head.blogspot.com/2024/07/how-cpu-limited-is-modern-mid-range-pc.html"&gt;last time&lt;/a&gt;, so I won't repeat that, but all the data is taken from that same testing that I performed months ago. For this article, I want to dredge up how each of these platforms actually affects the individual games in question.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Let's start with Avatar: Frontiers of Pandora&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtiFiynoDIqNiVAGjSpexXqVD5jvveweAScFiOCrWP3-Wt5XBqnj6s5_nzfHYY4PiTgW2EXUyoiHIgahqOEJNQM65uLSROqi-9E5ryRbO48QZpYoeEd4NS2bgKk8Dac_q27G0YqOnTCwrwAyfR2Q9bUfGc6oBGTyzexajcQwovZyEBiSyRCo8Z91-NY3k/s757/Avatar_Low.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="399" data-original-width="757" height="338" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtiFiynoDIqNiVAGjSpexXqVD5jvveweAScFiOCrWP3-Wt5XBqnj6s5_nzfHYY4PiTgW2EXUyoiHIgahqOEJNQM65uLSROqi-9E5ryRbO48QZpYoeEd4NS2bgKk8Dac_q27G0YqOnTCwrwAyfR2Q9bUfGc6oBGTyzexajcQwovZyEBiSyRCo8Z91-NY3k/w640-h338/Avatar_Low.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Yes, the data bandwidth bottleneck is strong in this game, especially for the RTX 3070...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;At low settings, we can see that the CPU processing power and data transfer from system memory are the primary limitations. Starting with the Ryzen 5 4600G and finishing with the i5-12400 on DDR5, we see the RTX 4070 and Super, and the RX 6800, increase and then level off; the RTX 3070, meanwhile, keeps gaining with every step up in processing power and memory bandwidth - most likely because of the extra memory bandwidth.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Despite the 12400 paired with DDR5 showing a slightly higher performance for a few of the cards, and the relative standard deviation for these stock tests being around 1 % for both Low and Ultra settings, these appear to be outliers in the potential performance space of the components. 
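To be explicit about what I mean by that, here's a minimal sketch of the kind of noise check I'm describing - the numbers in it are made-up placeholders for illustration, not my actual logged runs:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Rough sketch of how I sanity-check run-to-run noise: if the gap between two
# configurations is no bigger than the relative standard deviation (RSD) of
# repeated runs, I treat the results as equivalent rather than as a real win.
from statistics import mean, stdev

def rsd_percent(runs):
    # RSD = standard deviation expressed as a percentage of the mean
    return stdev(runs) / mean(runs) * 100.0

def gap_percent(result_a, result_b):
    # Difference between two configs as a percentage of the smaller result
    return abs(result_a - result_b) / min(result_a, result_b) * 100.0

# Placeholder numbers for illustration only - not my recorded data:
repeat_runs = [101.2, 99.5, 100.8, 100.1, 99.9]   # fps, same config re-run
print(round(rsd_percent(repeat_runs), 2))          # noise: roughly 0.7 %
print(round(gap_percent(100.4, 101.0), 2))         # gap: roughly 0.6 %
# The gap sits inside the run-to-run noise, so I call these results "the same".
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;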
So, we're talking essentially equivalent performance with both DDR5 setups.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Meanwhile, the RX 7800 XT is really doing its own thing - a story we will see repeated throughout this compilation of results.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXkUKcKBElceG1Y30Km3MiqlqgKW2HnqDtEHBO-s6KVitw77sVyYgL5G8d6lxIZLDW2ka6PQEv3esMMXm3kLhyphenhyphenrYFzJk51gyLsEBdjYO-2ZWAygfDU9UghCYukMY3wBL1iPMUr3PFQS3isNDoc5H-D4WLmlwe0cPXRFT3up3wv1ZuG5xjwK36bE7n8LeQ/s757/Avatar_Ultra.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="397" data-original-width="757" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXkUKcKBElceG1Y30Km3MiqlqgKW2HnqDtEHBO-s6KVitw77sVyYgL5G8d6lxIZLDW2ka6PQEv3esMMXm3kLhyphenhyphenrYFzJk51gyLsEBdjYO-2ZWAygfDU9UghCYukMY3wBL1iPMUr3PFQS3isNDoc5H-D4WLmlwe0cPXRFT3up3wv1ZuG5xjwK36bE7n8LeQ/w640-h336/Avatar_Ultra.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;There is still an effect of data bandwidth bottleneck but with this setting, the GPU and CPU have a stronger effect...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;At ultra settings, we see a similar story, though more limited in absolute numbers. Generally speaking, we see the incremental jumps in processing power of each CPU design Zen 2 &amp;lt; Zen 3 &amp;lt; Alder Lake, but not to Raptor Lake. The effect of greater memory bandwidth doesn't help as much and we see the gap between highest and lowest performance for each GPU shrink as the GPU-limitation kicks in.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You might question &lt;i&gt;why&lt;/i&gt;&amp;nbsp;or &lt;i&gt;how&lt;/i&gt;&amp;nbsp;I know this? 
Well, it's written in the utilisation figures of the CPU/GPU:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaBOXVTu_-vBzrjRqb3kpEoroKdoWPT8KZ6QUTXocX4m78_2zwoZIUupjNITj-3-f4bVfvgz2Dhjh44gZc151pVifMHeyB42ufdAGJnhuOpzOU2uAf4k5r3fq4dSnt1giz0YVjuhTnui_GeCf55iWGHuqGVzswKK974M6ycDtQizvzfEbcJC4_wEuyd54/s2420/Avatar_5700X3D%20vs%205600X.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="726" data-original-width="2420" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaBOXVTu_-vBzrjRqb3kpEoroKdoWPT8KZ6QUTXocX4m78_2zwoZIUupjNITj-3-f4bVfvgz2Dhjh44gZc151pVifMHeyB42ufdAGJnhuOpzOU2uAf4k5r3fq4dSnt1giz0YVjuhTnui_GeCf55iWGHuqGVzswKK974M6ycDtQizvzfEbcJC4_wEuyd54/w640-h192/Avatar_5700X3D%20vs%205600X.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;For the Ryzen processors, the GPU is pretty much slammed the entire time, less so on the 5700X3D...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4uQR-RPIfs4t27Mw2fHuhu0NqNGpDKBhpnNL5WPI3ENWIIdvwAik83of3-okJFqodlw4hwN3cFVqffY8w_5HJbheesWAzffCg8oOD8SmjtQ4tZ74wRa6zIwX3AxlWV_pzBwG9XZS5FaBl16Qc7fMsQ7Gc-6iDrxit7IkTPsVfAh3OmwzfSptyaMlzmZI/s2420/Avatar_DDR4%20vs%20DDR5.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="726" data-original-width="2420" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4uQR-RPIfs4t27Mw2fHuhu0NqNGpDKBhpnNL5WPI3ENWIIdvwAik83of3-okJFqodlw4hwN3cFVqffY8w_5HJbheesWAzffCg8oOD8SmjtQ4tZ74wRa6zIwX3AxlWV_pzBwG9XZS5FaBl16Qc7fMsQ7Gc-6iDrxit7IkTPsVfAh3OmwzfSptyaMlzmZI/w640-h192/Avatar_DDR4%20vs%20DDR5.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;For the Intel configurations, we can see the drop in CPU utilisation when moving to DDR5 but the GPU is still falling asleep half the time, despite performance actually increasing...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;For the R5 5600X and i5-12400 (DDR4), we see higher CPU utilisation than for the R7 5700X3D* and i5-12400 (DDR5), even when taking into account the difference in cores/threads, and the reason is quite simple: &lt;u&gt;data management&lt;/u&gt;. The RTX 3070 is actually struggling with its 8 GB frame buffer, despite the low settings used in this test. Below, we can see the reduction in effect from the 16 GB framebuffer on the RX 7800 XT - though it is still present in the R5 5600X graph, if you look closely.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What is confusing me is the GPU utilisation on the Intel chip in both configurations. 
The memory is faster, so the data transfer should be&amp;nbsp;&lt;i&gt;less&lt;/i&gt;&amp;nbsp;of a bottleneck**. In fact, we do see a shorter time spent on all CPU operations for the stronger Alder Lake design but what we also observe is that the RTX 3070 has a much &lt;i&gt;longer&lt;/i&gt;&amp;nbsp;time spent calculating items related to the workloads that the GPU performs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*We must keep in mind that the 5700X3D has 2 extra cores, so %utilisation will generally be lower - although the Snowdrop engine is quite multithreaded and able to take advantage of those extra cores, too... The point is that it still shows overall lower utilisation.&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;**Hence the increased performance...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvGtUgk7TAQe3trJd6q1OlGzsW_xoOBLfQdd0_VfhMGM8ykhePyo4S7T3n9hRf3sROqKoGVA_Bpmix2Ar9na_5gE54YE9SoA0HNVav63IUrZ94_pvN8-CklPATOVKQaARbT7ikN0sSNmKGU1DhwjZCRRnS0YQtXLDDemj1YnNf_FoCqAdN3gejJdpMncg/s985/Avatar_GPU%20times.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="273" data-original-width="985" height="178" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvGtUgk7TAQe3trJd6q1OlGzsW_xoOBLfQdd0_VfhMGM8ykhePyo4S7T3n9hRf3sROqKoGVA_Bpmix2Ar9na_5gE54YE9SoA0HNVav63IUrZ94_pvN8-CklPATOVKQaARbT7ikN0sSNmKGU1DhwjZCRRnS0YQtXLDDemj1YnNf_FoCqAdN3gejJdpMncg/w640-h178/Avatar_GPU%20times.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RTX 3070 takes longer to perform everything than any other card - even ray tracing!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Given how much worse it is than even the RX 6800 at ray tracing, it seems like the low utilisation we see on the GPU side is related to internal stalls across the chip while a) various calculations await the outcome of prior calculations, and b) data is shuttled in from system memory...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;It seems clear that the Snowdrop engine isn't able to handle the RTX 3070 very well and that there would be more performance in the tank for that card if it had been gifted a larger framebuffer!&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If we take a look at that middle section of the benchmark (&lt;a href="https://www.ubisoft.com/en-gb/game/avatar/frontiers-of-pandora/news-updates/6WF5Ud05UCEmLp2R8cjjVn/avatar-frontiers-of-pandora-pc-features-deep-dive"&gt;which is ostensibly stressing world data streaming&lt;/a&gt;) the plots show higher CPU utilisation on the 12400 when paired with the RTX 3070 than with the RX 7800 XT - 
despite the more powerful card requiring more data to feed it, the 16 GB framebuffer can just hold more, &lt;u&gt;even on the low setting at 1080p&lt;/u&gt;&amp;nbsp;which, historically, we assume is not an issue for 8 GB framebuffers!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It seems that is not the case.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnv1fXyIwRJu8v0K8F_H0cH5HQzaUx9pC76QyjpjLqjpghMKZHS_ae5vGREFcDzrSoHXy3BG_rJMSsEu415NJ1c-TqC2RWl8jqmU0jEU8FmcLlzowvytQBd-AOKStyEk5hKMw9Xd-9RPdA-_8nMtUSpI0W8A2J7sI6H1e_pzW9oH2CCCNLmCJ6RWvMCQM/s2420/Avatar_Intel%20vs%20Ryzen_RX%207800%20XT.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="726" data-original-width="2420" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnv1fXyIwRJu8v0K8F_H0cH5HQzaUx9pC76QyjpjLqjpghMKZHS_ae5vGREFcDzrSoHXy3BG_rJMSsEu415NJ1c-TqC2RWl8jqmU0jEU8FmcLlzowvytQBd-AOKStyEk5hKMw9Xd-9RPdA-_8nMtUSpI0W8A2J7sI6H1e_pzW9oH2CCCNLmCJ6RWvMCQM/w640-h192/Avatar_Intel%20vs%20Ryzen_RX%207800%20XT.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Here, with the RX 7800 XT, we see less of a data management bottleneck using the same Low settings as above...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Next up: Hogwarts Legacy&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Hogwarts Legacy paints a much simpler picture (thankfully!), with the title's performance scaling nicely with increases in processing power, memory bandwidth, GPU power and, especially importantly, PCIe bandwidth.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, looking at the 4600G, not only is there an issue with the on-chip cache sizes and Zen 2 cores, there's a very clear bottleneck due to the PCIe gen 3 limitation of that CPU bringing the pretty powerful GPUs &lt;i&gt;all&lt;/i&gt;&amp;nbsp;crashing down.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What is interesting on the 4600G is the performance of the Radeon parts being stronger than the Nvidia cards. 
I don't know this for certain but I wouldn't be surprised if this is related to the Nvidia driver overhead - of which we haven't seen hide nor hair for quite a while!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQTGnR8CPQ1u0IKis0EITtEM-tn782w4noEeeVORxwaxgANV0Zjs169o_cwlK_zapXcg1NW7uy83rMQpDFAqX3UeP7UU7zur0HXJjVKxItqo2wu4IAotvL3W8NkuToc2txgsINYaSweCJ0GVHdAeapR59Gs61FqBK_C12EUYCnzzwAozN_cgsvuSmcbN0/s757/Hogwarts.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="401" data-original-width="757" height="340" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQTGnR8CPQ1u0IKis0EITtEM-tn782w4noEeeVORxwaxgANV0Zjs169o_cwlK_zapXcg1NW7uy83rMQpDFAqX3UeP7UU7zur0HXJjVKxItqo2wu4IAotvL3W8NkuToc2txgsINYaSweCJ0GVHdAeapR59Gs61FqBK_C12EUYCnzzwAozN_cgsvuSmcbN0/w640-h340/Hogwarts.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;"BOOM! Take that, RTX 4070 Super!", said the R5 4600G...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;What I find interesting about the above chart is the performance of the RTX 4070 Super on the i5-14600KF: it's like the GPU is released from a prison and all of this extra performance comes out of nowhere! This result is actually similar to that obtained by&amp;nbsp;&lt;a href="https://www.techspot.com/review/2801-amd-ryzen-5700x3d/" style="text-align: left;"&gt;Hardware Unboxed&lt;/a&gt;&amp;nbsp;in a similar benchmarking area, though in that case, they're running an RTX 4090, and it just goes to show that all that extra GPU power is just going to waste!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unfortunately, I can't see how the 4070 non-Super would react to being paired with the 14600KF since I gave it away, but the RX 7800 XT is pegged at near-enough 100 % from the Ryzen 7 5700X3D and up on the chart, yet it's still increasing in performance by a little... 
So, while there's clearly a CPU bottleneck, the GPUs I have on hand are mostly getting tapped-out in this demanding title!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4-4DORl4knDK1tgVDJ2fkZZnIfWaPIFrmVw_Jkcs0c0aVeEBHqCdiGXwcaUW8akRTJb1JBoJNLEOjDZX-2I_zGa6HNFfEMbdWmkXPeyYeIyOSsu_YgxTXRhaqgmgI_XF6CSx1l7UGK4nXYqY0ahaaeIwc6ce72FJlcl0opx1zMBwob60DqcmSisQwH58/s834/Hogwarts_GPU%20util.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="500" data-original-width="834" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4-4DORl4knDK1tgVDJ2fkZZnIfWaPIFrmVw_Jkcs0c0aVeEBHqCdiGXwcaUW8akRTJb1JBoJNLEOjDZX-2I_zGa6HNFfEMbdWmkXPeyYeIyOSsu_YgxTXRhaqgmgI_XF6CSx1l7UGK4nXYqY0ahaaeIwc6ce72FJlcl0opx1zMBwob60DqcmSisQwH58/w640-h384/Hogwarts_GPU%20util.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The stronger CPU unleashes some more of the potential of the RTX 4070 Super...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;We also see increases going from DDR4 3200 to DDR4 3800 to DDR5 6400, with the RX 7800 XT coming closer and closer to 100% utilisation, even ignoring the increase in CPU power (though that is a primary factor).&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTTOcYLtVPFSrGMRvmm5aPF5oyWiRzga_BM1nP0dDqFzxamF-eNFxpSqILwJ7tdzA4vk77GWwggdX572ds6lS2K2RFrkIgCtxNK59ZJxG_AbYyemRwPl2uyDiU8sKhYxT_h4yrQNXOyPVH6jlnU8FTkUzU7cOeaN53G3PBIfWadX_kPYOFCbfHl0yMn04/s834/Hogwarts_GPU%20util_RX%207800%20XT.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="500" data-original-width="834" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTTOcYLtVPFSrGMRvmm5aPF5oyWiRzga_BM1nP0dDqFzxamF-eNFxpSqILwJ7tdzA4vk77GWwggdX572ds6lS2K2RFrkIgCtxNK59ZJxG_AbYyemRwPl2uyDiU8sKhYxT_h4yrQNXOyPVH6jlnU8FTkUzU7cOeaN53G3PBIfWadX_kPYOFCbfHl0yMn04/w640-h384/Hogwarts_GPU%20util_RX%207800%20XT.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Looking at the increase in utilisation: the 12400 DDR4 has an average of 97.8% while the 12400 DDR5 has an average of 98.6%...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Ratchet and Clank: Rift Apart&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Here we start on the heavier of the two Insomniac Engine games. 
In fact, it's a &lt;i&gt;little too heavy&lt;/i&gt;&amp;nbsp;for the RTX 3070, with the game crashing when paired with the 5700X3D* and performing &lt;i&gt;worse&lt;/i&gt;&amp;nbsp;on the 5600X compared to the 4600G. Yeah, I think we can safely discount the RTX 3070's results, as it simply isn't functioning well in this testing.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*And, in fact, in some of my more recent testing, too - it seems that a game update has made the game WAY less stable on 8 GB VRAM cards...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The game has an appetite for data bandwidth to the CPU, with the 5700X3D performing decently well but not well enough to overcome the generally higher bandwidth memory used on the Intel platforms, paired as it is with the more performant Alder Lake cores.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Obviously, this game is very GPU-dependent, given its ray tracing chops, and the hardware with enough VRAM along with dedicated RT silicon (i.e. the RTX 4070 and Super) outclasses all the other cards in the race...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbIA1Fqsb8p0D7Z85UC8z3idNac3WWNZM9XQP9mQIy7naYHJScaH3e8V7TkYgR646F2LP2_A9CYOtLI6RAVgSAyNfLZ4OvS_rVGKagSDyFmfeMxMOE_nEM0kwYe7KHQSbHB599xrbhVhwZlcOWDSwlzmtpKbgcUEyTelJHtGQNUqpdattMTRwsMiVLy04/s755/Ratchet.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="397" data-original-width="755" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbIA1Fqsb8p0D7Z85UC8z3idNac3WWNZM9XQP9mQIy7naYHJScaH3e8V7TkYgR646F2LP2_A9CYOtLI6RAVgSAyNfLZ4OvS_rVGKagSDyFmfeMxMOE_nEM0kwYe7KHQSbHB599xrbhVhwZlcOWDSwlzmtpKbgcUEyTelJHtGQNUqpdattMTRwsMiVLy04/w640-h336/Ratchet.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The 5600X looks like it does well compared to the Intel parts in the RX 7800 XT testing, but it's only a 1-2 fps difference...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Meanwhile, on the Radeon side of things, the RX 6800 becomes the bottleneck very quickly, being outshined by the other three newer generation cards. This is another one of those results where the 7800 XT is performing very strangely! 
It just doesn't appear to function that well on the X3D CPU or the Intel DDR4 platform but recovers nicely on the DDR5 platform - this seems to imply that the PCIe-to-memory bandwidth is limiting the actual performance of the game* and this is something that is not helped by the extra 3D cache.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;div&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;*We know Insomniac titles heavily utilise the PCIe lanes on GPUs to quickly swap data in and out of VRAM...&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp;&lt;/div&gt;&lt;/blockquote&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKu6V6jzXZqD0zjQGFy62YH8PdEnbHfaO-9UuJujjn8XYy1hfT0SU3eteRu1SK4bRdFagRkaYvLFLl883xp2Xm1Ii_AtuBVyLf1iPxDF6pATFTjXkaCP7oOC2UUnCTvwUHe4hDDxq2j59JCbzqgQQmxeNAQkiu2kApowbWsPPyi1IN2xqQbg5AenEU68Q/s832/Ratchet_GPU_7800XT.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="500" data-original-width="832" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKu6V6jzXZqD0zjQGFy62YH8PdEnbHfaO-9UuJujjn8XYy1hfT0SU3eteRu1SK4bRdFagRkaYvLFLl883xp2Xm1Ii_AtuBVyLf1iPxDF6pATFTjXkaCP7oOC2UUnCTvwUHe4hDDxq2j59JCbzqgQQmxeNAQkiu2kApowbWsPPyi1IN2xqQbg5AenEU68Q/w640-h384/Ratchet_GPU_7800XT.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The fact that the 12400 DDR4 is performing worse than the 5700X3D shows that data management is vitally important to this title as opposed to it&amp;nbsp;&lt;i&gt;simply&lt;/i&gt;&amp;nbsp;being a function of CPU power...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div&gt;Taking all the factors into account, it becomes apparent that the CPU frequency is playing a big part in why the 5700X3D is dipping in performance ever so slightly compared to the 5600X. CPU compute is a bigger factor in the overall performance picture of this title than data movement is - despite both being important. 
The biggest question mark surrounding the Radeon cards' performance is when they are paired with the 4600G - or maybe conversely the 12400 DDR4 platform - and I am yet to fully understand the reason for that.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikBNiWOjp5g8KQ_5wFnBNbvlYv3WTmXw1CdRawEqashrN2jVkqXhaEkI0DSJyo17HKTZBr0e2PjQetCYMuCTS42M1eHijYgLHoylbxshiCejRd9zovNipCboP8LlxFEc__-iJNqTvCp3FkRtLj2aM2ZTnHXxkMh-sad2BI5uqUbvE15XdVr4vuC_Z__AM/s833/Ratchet_GPU.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="500" data-original-width="833" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikBNiWOjp5g8KQ_5wFnBNbvlYv3WTmXw1CdRawEqashrN2jVkqXhaEkI0DSJyo17HKTZBr0e2PjQetCYMuCTS42M1eHijYgLHoylbxshiCejRd9zovNipCboP8LlxFEc__-iJNqTvCp3FkRtLj2aM2ZTnHXxkMh-sad2BI5uqUbvE15XdVr4vuC_Z__AM/w640-h384/Ratchet_GPU.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Meanwhile, the RTX 4070 Super's performance curve makes sense, with both greater CPU performance and higher memory bandwidth resulting in higher %GPU utilisation. Ultimately, however, the 12400 and 14600KF perform identically - most likely because of the identical memory and PCIe bandwidth - despite the 1 GHz of extra CPU frequency on show in the case of the 14600KF...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I should really explore this title with memory scaling in a future article because the drop in performance per CPU doesn't match the GPU utilisation numbers.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Spider-Man&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The second of the Insomniac titles is less heavy on the GPU and is instead &lt;u style="font-style: italic;"&gt;incredibly&lt;/u&gt;&amp;nbsp;CPU-bottlenecked. In this instance, the primary limitation is CPU compute performance, followed by bandwidth between the CPU and the RAM. We see this play out for the 14600KF and in the difference between the i5-12400 DDR5 and DDR4 results. Additionally, the larger cache on the 5700X3D really brings that part leaps and bounds above the 5600X to match the 12400 DDR5 result.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In this test, we can see that GPU performance &lt;i&gt;just does not matter&lt;/i&gt;, with practically all GPUs performing equally across all CPUs tested. 
Strangely enough, the RTX 3070 is often slightly eking out ahead of the average in each platform and I can only assume that this is, again, a result of the Nvidia driver overhead - the smaller GPU causing less overhead.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNj3VbxfB85a2XVNdMkkJSDH5TKQ_LzaJ5z7QhRWTtcm2_daBuc4dnxw70UPuvgAWhoWHcpkWelXsjlO9ohykL5TDhcGvr184jyVK8ua0d6kj0nu19rPRes5H-15l9m89QhtXF-TXZ2eGtFNAy2CWqau5y6Dq2X_Y8GkUW-1ByAxpWtFNpKosNEMjLcaU/s759/Spider-man.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="401" data-original-width="759" height="338" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNj3VbxfB85a2XVNdMkkJSDH5TKQ_LzaJ5z7QhRWTtcm2_daBuc4dnxw70UPuvgAWhoWHcpkWelXsjlO9ohykL5TDhcGvr184jyVK8ua0d6kj0nu19rPRes5H-15l9m89QhtXF-TXZ2eGtFNAy2CWqau5y6Dq2X_Y8GkUW-1ByAxpWtFNpKosNEMjLcaU/w640-h338/Spider-man.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;What's impressive to me is the CPU utilisation of this title on the 5700X3D. When taking an average of the %utilisation and multiplying that figure by the number of threads on each CPU, we actually find that the 5700X3D is only using the equivalent of between 6 and 7 threads whereas the other CPUs all use the equivalent of around 9 threads. Yes, it's not "winning" in terms of absolute performance but it's punching above its weight in terms of efficiency due to the 3D cache...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is yet another title I should return to in a future installment, in order to test memory scaling, though the difference in bandwidth of the 12400 DDR4/5 is 52 GB/s to 84 GB/s (using Intel's MLC application). 
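For transparency, the "equivalent threads" figure and the bandwidth uplift quoted in this section are nothing more sophisticated than the following arithmetic - a minimal sketch in which the utilisation percentages are illustrative placeholders rather than my logged values:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Rough sketch of the derived figures quoted in this section.

def effective_threads(avg_utilisation_percent, thread_count):
    # "Equivalent threads in use" = average CPU utilisation x thread count
    return avg_utilisation_percent / 100.0 * thread_count

def percent_increase(old_value, new_value):
    return (new_value - old_value) / old_value * 100.0

# The utilisation values below are illustrative placeholders, not my logs:
print(round(effective_threads(41.0, 16), 1))   # 5700X3D (16 threads): ~6.6
print(round(effective_threads(75.0, 12), 1))   # 12-thread parts: ~9.0

# Memory bandwidth measured with Intel's MLC: 52 GB/s (DDR4) vs 84 GB/s (DDR5)
print(int(percent_increase(52, 84)))           # 61 (% more bandwidth)
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;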
That's a 61% increase in bandwidth resulting in a 9% increase in fps on the RTX 4070 Super and it'd be interesting to see that play out on the less artificially limited 14600KF.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGKXA8WBlNKYfjdmQyDhHmkfnA7W5ogeJgSiC1fPAWUE4FnSCLfcBhFgEeO7s5zwfkIvbzsjeCEKCukRZyVRE5KLdjzXymF9xXxb1o7hOeIoV6O1wJ0QcFWkSoG0lwiYNZ75Z3F6C3mrU9yxjI7LHS2RUd-lSq42qMfFf5fwvLXoSPsVDf56l9HhbZIuA/s1672/Spider-man_7800%20XT%20util.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="502" data-original-width="1672" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGKXA8WBlNKYfjdmQyDhHmkfnA7W5ogeJgSiC1fPAWUE4FnSCLfcBhFgEeO7s5zwfkIvbzsjeCEKCukRZyVRE5KLdjzXymF9xXxb1o7hOeIoV6O1wJ0QcFWkSoG0lwiYNZ75Z3F6C3mrU9yxjI7LHS2RUd-lSq42qMfFf5fwvLXoSPsVDf56l9HhbZIuA/w640-h192/Spider-man_7800%20XT%20util.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The 7800 XT is close to being fully utilised but that doesn't mean there aren't more frames to be had...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;These charts are &lt;a href="https://hole-in-my-head.blogspot.com/2023/09/the-problem-with-dumb-metrics-argument.html"&gt;another instance&lt;/a&gt; where I'm looking at the performance in fps compared with the GPU utilisation numbers and realising that the metric may not be a very useful indicator of a bottleneck. 
Sure, we all know that CPU utilisation numbers (as an aggregate) are not very useful and instead individual core utilisation numbers are a better way to determine game thread limitations, but until now, many industry commentators have always pointed to GPU utilisation as a good way to understand if you're "GPU-bound".&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;My data tells a different story, at least in some titles...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;img border="0" data-original-height="502" data-original-width="1672" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD-ceM4uKCCZuBRkcXffwH44hSWM6-n9_WHL-zuEy7SUPUiWSOMDsIeSermZbQxvRphOZI9YxuW4t5QM7R4BnB5vVqT4sm9dgMbDhFMjA-UJT1kZDOobsDJwnhaK3O2iqUqp5CXksxIlX1ElqieQqGH7rofiOhqa5SIZ-v68gdlve02C6ypGeLb_z8WQc/w640-h192/Spider-man_4070Super%20util.png" style="color: #0000ee; margin-left: auto; margin-right: auto; text-align: center;" width="640" /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;It would take a much stronger CPU to max out the RTX 4070 Super in this title...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Counterstrike 2...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There are few games which are as historically well-known to be CPU-bound as the various esport titles and Counterstrike (now "Counterstrike 2") still remains one of those! The thing with Counterstrike is that the game doesn't appear to be data-bound and instead it's very compute-bound. In this scenario, peak core frequency is king. So, while the 3D cache on the 5700X3D does help it keep up with other parts, the faster 5600X gives better performance in the average fps...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There is &lt;i&gt;some&lt;/i&gt;&amp;nbsp;improvement based on memory speed, though. 
The 12400 DDR5 inches out ahead of its pairing with DDR4, so faster memory does appear to have an impact on proceedings.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What's confusing to me is the GPU utilisation behaviour in this title.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkNfkNaabonzY20_Zz5vpGu1BDSqqD7TGK_PQk7yTbcVrjsgt_tlrzZOcLPqn9ey1N6m3JwnehADBi6VCt51ufBU6b8ilqdlM0r32wmM_7GA2b6nHXY8zFL1JV4pBYocgAfp5O9Nx3yyCv5SjSF3K2BYEf-mSN5kNPf-jl7J54hh9s8F92QpvYoSoMPMY/s757/Counterstrike%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="399" data-original-width="757" height="338" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkNfkNaabonzY20_Zz5vpGu1BDSqqD7TGK_PQk7yTbcVrjsgt_tlrzZOcLPqn9ey1N6m3JwnehADBi6VCt51ufBU6b8ilqdlM0r32wmM_7GA2b6nHXY8zFL1JV4pBYocgAfp5O9Nx3yyCv5SjSF3K2BYEf-mSN5kNPf-jl7J54hh9s8F92QpvYoSoMPMY/w640-h338/Counterstrike%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Looking at these graphs, there's no rhyme or reason to the %utilisation and average fps performance! The RX 7800 XT paired with the 5700X3D has some absolutely huge dips in GPU utilisaiton which &lt;u style="font-style: italic;"&gt;do not&lt;/u&gt;&amp;nbsp;correspond at all with any loss in fps performance. That period of 9% utilisation corresponds to around 430 fps, on average, pumped onto the screen! And the, technically, less utilised 12400 DDR5 with the 7800 XT has better performance!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, despite what I said above about this game being primarily CPU and memory bound, it's clear that there is SOME GPU bottlenecking going on because the RTX 4070 Super suddenly springs to life to vastly outclass all the other GPUs when paired with the 14600KF. If the CPU was the only bottleneck, we'd expect to see closer average fps across the GPUs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The only thing I can think of is that this title's performance is also related to the GPU core frequency and %utilisation numbers in this title have little bearing on whatever actual bottleneck is in the GPU. The 4070 Super has an average core frequency of 2850 MHz while the 7800 XT has 2626 MHz during this benchmark. That's an 8% difference which corresponds to an 11% fps difference (436 fps vs 392 fps)... 
which goes a pretty darn long way towards explaining the results we're seeing...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiykSzJLG4IGzOhlsmSHw4qTsmU1VnEGR2zWk4mF4saPJ-picU5XeNIxuYgIUm8OZjbatx8HdRsdX2FTSCws6nlR3Howu-dEbiGA5N3CP7ejkM_1KbBMuRT4HQa-uNCqPe2ZPC1vQRz-BrVs7wiXmGzGSlSSyQMlmYb3kj787pL5XOJDtN1nZ9Y0N9C1Yk/s1672/Counterstrike_GPU%20util.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1009" data-original-width="1672" height="386" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiykSzJLG4IGzOhlsmSHw4qTsmU1VnEGR2zWk4mF4saPJ-picU5XeNIxuYgIUm8OZjbatx8HdRsdX2FTSCws6nlR3Howu-dEbiGA5N3CP7ejkM_1KbBMuRT4HQa-uNCqPe2ZPC1vQRz-BrVs7wiXmGzGSlSSyQMlmYb3kj787pL5XOJDtN1nZ9Y0N9C1Yk/w640-h386/Counterstrike_GPU%20util.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Yet more evidence that %GPU utilisation can be a meaningless metric...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Last time, &lt;a href="https://hole-in-my-head.blogspot.com/2024/07/how-cpu-limited-is-modern-mid-range-pc.html"&gt;I started the post saying that I wanted to understand if there was a cut-off point&lt;/a&gt; for pairing a particular GPU with a particular CPU. Today's testing rounds out the conclusion I made last time that an RX 7800 XT or RTX 4070 would be the sweet spot for a low-to-mid-range CPU but that an RTX 4070 Super would still be worthwhile if more graphically-demanding games were part of the planned usage.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Today, I think we can conclude that the RTX 4070 Super is &lt;i&gt;not&lt;/i&gt;&amp;nbsp;overkill for any of these mid-range CPUs (aside from the Ryzen 5 4600G!). Yes, you'll be losing around 20 % performance in &lt;i&gt;some&lt;/i&gt;&amp;nbsp;games by not having a CPU as powerful as an i5-14600KF but the 4070 Super still provides a 10 - 20 % performance improvement over the other GPUs tested on the weaker CPUs in games where you're not CPU-limited... This level of performance also allows gaming at higher resolutions and quality settings and, quite frankly, is the best value GPU in terms of price to performance for the time being!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, it does seem to me that pushing for a stronger GPU will result in a real wastage of performance without purchasing a more powerful CPU. Thankfully, CPUs are still cheaper than GPUs. 
Hell, a CPU + RAM + motherboard is still cheaper than a mid-to-high end GPU!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The last nugget that I'm taking away from this testing and analysis is that we cannot rely on the %GPU utilisation metric to determine if we are really GPU-bound in a given title. Actual on the ground testing needs to be done in order to definitively verify whether that's true or not on a game-by-game basis.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As a result, we have lost one "surefire" performance metric which many people have looked to over the years...&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/11/how-cpu-limited-is-modern-mid-range-pc.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxzK5y8OG5DrBFP9bJajP90GH7PpIYtz1_V9fYDcbQyRA68h8Xa6QHeY2NWoDBLtD7wHcYNtn5KvQYqB5xULZJy0ueAClezlT8hyN_JlrNzQiZitlOTXR74vhdWpuN_qKOdETGTgpZNlNB5V7tf2vRinueKnkD70HJm40_N_nKe5YS7VoHiYTqrrVesXI/s72-w640-h360-c/Header.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-3115428746524178461</guid><pubDate>Sun, 03 Nov 2024 11:59:00 +0000</pubDate><atom:updated>2024-11-03T12:03:19.192+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>How Powerful is the PS5 Pro? (Part 2)</title><description>&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWABWtYhXt3AIFiY4VF-BaSG-ur2Z7nRn6nRWk_yGuRpNm2vVPbjE9P7YKipAxJIwYxHF6TztAeeFGucSHSpC5SXsYZARTcPQBHOCJUI2aHcQvpvF1BcOGwxsKr0j7Q_06I9XDVgd63l6SnMCaFobYyuC5EJYZ_mNYTNnbEbz9CSspm945zktdAlSksLU/s1920/Title.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWABWtYhXt3AIFiY4VF-BaSG-ur2Z7nRn6nRWk_yGuRpNm2vVPbjE9P7YKipAxJIwYxHF6TztAeeFGucSHSpC5SXsYZARTcPQBHOCJUI2aHcQvpvF1BcOGwxsKr0j7Q_06I9XDVgd63l6SnMCaFobYyuC5EJYZ_mNYTNnbEbz9CSspm945zktdAlSksLU/w640-h360/Title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Overnight, a Twitter user, "&lt;a href="https://x.com/brunno_fast?lang=en"&gt;brunno_fast&lt;/a&gt;", has &lt;a href="https://insider-gaming.com/ps5-pro-specs-revealed/"&gt;uploaded a photo from the Playstation Pro Safety Guide&lt;/a&gt;&amp;nbsp;revealing some hitherto unknown information about the PS5 Pro. Additionally, we have a teardown from &lt;a href="https://youtu.be/z1VSPgqY7Yw?si=JDC-SLaM0N_8spG-"&gt;"TAG" over on YouTube&lt;/a&gt;! 
The dam has broken and Sony no longer has control over the flow of information...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's dig into the details!&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Safety Specs...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's start with the details:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSgn4dBcWKlaLhHWSLw-DUDCORfCbog02daIomURpiMNlymLX6elC1sVVpbEsYgz7oeIQPz9wss3q000FdIrGcI0phT9aV5SYz28WjdjbMxxwcMLmwQv8uzZLURRpnTgEEQdVrcvrwO8OTDvrKtoq3GOOZJnI5wa_RmM5nQPzwwyQVATF-NbdS4z_oWII/s1137/Specs.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="510" data-original-width="1137" height="288" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSgn4dBcWKlaLhHWSLw-DUDCORfCbog02daIomURpiMNlymLX6elC1sVVpbEsYgz7oeIQPz9wss3q000FdIrGcI0phT9aV5SYz28WjdjbMxxwcMLmwQv8uzZLURRpnTgEEQdVrcvrwO8OTDvrKtoq3GOOZJnI5wa_RmM5nQPzwwyQVATF-NbdS4z_oWII/w640-h288/Specs.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Some interesting differences, there...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;From my cursory glance, I spot the following differences between the specs listed for the Pro and base digital console:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;16.7 TFLOPs of GPU compute vs 10 TFLOPs&lt;/li&gt;&lt;li&gt;DDR5 2 GB listed in the memory section&lt;/li&gt;&lt;li&gt;2 TB SSD vs 825 GB in storage&lt;/li&gt;&lt;li&gt;Replacement of a Hi-speed USB-A port with a Hi-speed USB-C port (&lt;a href="https://www.sony.com/electronics/support/articles/00024571"&gt;480 Mbps&lt;/a&gt;)&lt;/li&gt;&lt;li&gt;Wireless support for 802.11 be (WiFi 7)&lt;/li&gt;&lt;li&gt;Higher current draw: 1.7 A vs 1.46 A&lt;/li&gt;&lt;li&gt;Higher Wattage of the PSU: 390 W vs 340 W&lt;/li&gt;&lt;li&gt;Lighter weight: 3.1 kg vs 3.4 kg&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;This is a confusing mish-mash of updates for the Pro console over the base model!&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;On the one hand, we have the &lt;i&gt;slow&lt;/i&gt;&amp;nbsp;USB connector updated to a type-C, instead of one or both of the USB-A Superspeed (5 Gbps) connectors updated - which makes zero sense to me! We have a reduction in weight, which would imply a smaller (or at least less dense) heatsink, despite the larger chassis size. We also have the TFLOP number of 16.7, which is &lt;i&gt;not&lt;/i&gt;&amp;nbsp;the previously leaked value of &lt;a href="https://youtu.be/qBCOMXuxK6Q?si=-g06hkE9GY3VoiDs&amp;amp;t=2577"&gt;33.4 TFLOPs&lt;/a&gt;&amp;nbsp;- mind you, the PS5 base model's 10.23 TFLOP capability is also similarly misreported as 10 TFLOPs... 
but that's a smaller discrepancy.&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On the other hand, we have the increased power and current requirements - which tally with each other: Wattage = Voltage x Current; Base = 240 x 1.46 = 350.4 W, Pro = 240 x 1.7 = 408 W. However, the console won't be pulling 100% of that theoretical figure, nor is the PSU 100% efficient (and some overhead is left in for safety), so instead take the ratio of the two maximums (408 / 350.4 = ~1.16), multiply the base unit's 340 W rating by it and you get ~394 W - close enough to the Pro's new 390 W value for the maths to check out.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, the point still stands - &lt;i&gt;&lt;u&gt;power use has gone up...&lt;/u&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Previously, I had looked at the &lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-lets-look-at-ps5-pro-leaks.html"&gt;power efficiency of RDNA2 compared with RDNA3&lt;/a&gt; and come away with the conclusion that a move to RDNA3 would allow Sony to keep similar power levels to the base unit - which is now confirmed to be untrue.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Then, there's that 2 GB of DDR5 memory... which is a bit of a mystery to me. It seems like this is the primary mechanism for &lt;a href="https://youtu.be/U5h4bvudvX8?si=LAmuvzTrYDK2307K&amp;amp;t=2819"&gt;freeing up memory for games&lt;/a&gt;. However, does it have its own memory bus, like the PS4 Pro did? Or will it eat up some of that precious 256 bit main bus bandwidth? The other side of the coin is that a typical 2 TB SSD with DRAM cache would have a &lt;a href="https://www.techpowerup.com/ssd-specs/corsair-mp600-2-tb.d374"&gt;2 GB DRAM chip as buffer&lt;/a&gt;. 
Could we have an increase in that, as well?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheV0hC1bdBzOL-j09k9JK0Je3RWnq4pjYb3B_i2xQW_rRUnjG2nyTXJEZQ-Oyet6YWtjejPTJR35GDvAoRBiHC_VQE169V6I_L1aqmD8wTWNEnLE-mkHGkMuSLyffOYcqh7rXsZrYEDTcWmtCPHiTRf2wCiBlH5lBdayerth3C9GUmxv_eiX1ZsQtrD6g/s1137/Latency%20comparison%207000%20vs%206000_vector%20cache.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="581" data-original-width="1137" height="328" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheV0hC1bdBzOL-j09k9JK0Je3RWnq4pjYb3B_i2xQW_rRUnjG2nyTXJEZQ-Oyet6YWtjejPTJR35GDvAoRBiHC_VQE169V6I_L1aqmD8wTWNEnLE-mkHGkMuSLyffOYcqh7rXsZrYEDTcWmtCPHiTRf2wCiBlH5lBdayerth3C9GUmxv_eiX1ZsQtrD6g/w640-h328/Latency%20comparison%207000%20vs%206000_vector%20cache.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Larger L0, L1 and L2 caches are a feature of RDNA3 over RDNA2...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Flip-flopping...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Coming back to Digital Foundry's &lt;a href="https://youtu.be/qBCOMXuxK6Q?si=YYtVw0Lv4ZFDGGjT&amp;amp;t=2414"&gt;coverage from several months ago&lt;/a&gt;, the GPU specs detailed by them were as follows:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;30 WGPs (60 CU)&lt;/li&gt;&lt;li&gt;GPU L0 cache 32 kb&lt;/li&gt;&lt;li&gt;GPU L1 cache 256 kb&lt;/li&gt;&lt;li&gt;GPU L2 cache 4 MB&lt;/li&gt;&lt;li&gt;Up to 2.35 GHz GPU clock frequency but confirming the averaging of around 33.4 TFLOPs&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Those cache sizes match well with what we observe in the RDNA3 Shader Engine architecture. However, the teraflop value listed in the official specs of the PS5 Pro is &lt;a href="https://www.techpowerup.com/gpu-specs/radeon-rx-6800.c3713"&gt;matching only that of an RDNA2 design&lt;/a&gt; - with single a ALU pipeline per Compute Unit (CU). Similarly, the Pro is actually matching the increase in power usage of an RDNA2 design - despite being on the same* or smaller manufacturing process node (either TSMC N6 or N4). 
We're looking at a 1.42x power increase from RX 6700 to RX 6800 on N7 but, taking into account the process node optimisations and a slightly increased clock frequency, 1.16x &lt;a href="https://x.com/Kepler_L2/status/1852967852354961460"&gt;could be a feasible number for N4&lt;/a&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I say slightly increased clock frequency because, like Digital Foundry state in that segment linked above, the calculations put the core clock frequency at 2174 MHz (the stock desktop equivalent RDNA2 part has a boost frequency of 2105 MHz) but to increase further to potential 2350 MHz, would likely require an associated reduction in CPU power draw to achieve...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*I say "the same" but there are some &lt;a href="https://www.tsmc.com/english/dedicatedFoundry/technology/platform_DCE_N7_N6#:~:text=TSMC%20N6%20technology%20features%20more,defect%20density%20similar%20to%20N7."&gt;optimisations on N6 versus the N7 node&lt;/a&gt; used to manufacture RDNA2. So, we would actually expect some power-savings if all things remained the same...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, this has left me with a bit of a question in my mind. The leaked specifications from the developer portal all point to RDNA3 being utilised in the Pro. However, the official specs point to the ALU numbers per CU of RDNA2 being present.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Logically speaking, Sony &lt;b&gt;&lt;i&gt;would NOT&lt;/i&gt;&lt;/b&gt;&amp;nbsp;"sandbag" about the real TFLOP capabilities of the PS5 Pro - &lt;u&gt;they have no incentive to do so&lt;/u&gt;. AMD's RDNA3 and Nvidia's Ampere do not pretend to be capable of doing &lt;i&gt;less &lt;/i&gt;work than they can theoretically manage. Which professionally-acting, long-established company would do such a thing?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;No, the only conclusion is that the TFLOP value in the official consumer documentation is accurate. 
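&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;(As a quick gut-check on where these numbers come from - nothing below is taken from Sony's documentation, it's just the usual "shaders x 2 FLOPs per clock" arithmetic written out so you can see how 16.7 TFLOPs, ~2174 MHz and the leaked 33.4 TFLOPs all relate to one another:)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Standard FP32 throughput arithmetic: shaders x 2 FLOPs/clock (FMA) x clock.
def tflops(cus, clock_ghz, alus_per_cu=64, issue_width=1):
    shaders = cus * alus_per_cu
    return shaders * 2 * issue_width * clock_ghz / 1000.0

def clock_from_tflops(target_tflops, cus, alus_per_cu=64, issue_width=1):
    shaders = cus * alus_per_cu
    return target_tflops * 1000.0 / (shaders * 2 * issue_width)

# 60 CUs counted the RDNA2-style, single-issue way:
print(round(clock_from_tflops(16.7, 60), 3))        # ~2.174 GHz
print(round(tflops(60, 2.174), 1))                  # ~16.7 TFLOPs
# The same configuration counted the RDNA3 dual-issue way simply doubles:
print(round(tflops(60, 2.174, issue_width=2), 1))   # ~33.4 TFLOPs
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;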
Taking these conclusions and data at face value, it appears that the developer portal documentation that was leaked could not have been accurate or was purposefully falsified in some manner in order to throw-off leakers and damage their credibility.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEi3vCgcd70b3b2mvRtPWfj3kAWAlhn_feI9qt81vnCLg5F7HnplZP8NiLaAgly2SwwnhOGI0VBM16W9HZT6MaiFN0skM17ONDkJPwulJAOEfD8NKUITi5US9_3YZxckEcghaty1RDhjjTVCdd4wE2vXBvW4jci0w5ro8eSkZ_5D5HfqXDAVPChcBmFKMbY" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="239" data-original-width="595" height="258" src="https://blogger.googleusercontent.com/img/a/AVvXsEi3vCgcd70b3b2mvRtPWfj3kAWAlhn_feI9qt81vnCLg5F7HnplZP8NiLaAgly2SwwnhOGI0VBM16W9HZT6MaiFN0skM17ONDkJPwulJAOEfD8NKUITi5US9_3YZxckEcghaty1RDhjjTVCdd4wE2vXBvW4jci0w5ro8eSkZ_5D5HfqXDAVPChcBmFKMbY=w640-h258" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;&lt;a href="https://x.com/matiasgoldberg/status/1769952200103444911"&gt;Programmer Matias Goldberg gave his thoughts on architectural changes...&lt;/a&gt;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;My own&amp;nbsp;&lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-simulating-ps5-pro.html"&gt;PS5 Pro simulations&lt;/a&gt;&amp;nbsp;&lt;a href="https://hole-in-my-head.blogspot.com/2024/09/analyse-this-simulating-ps5-pro-part-2.html"&gt;pointed to this outcome&lt;/a&gt; throughout this whole time period. Especially when I showed that, even when CPU-limited, the RDNA3 part would have an uplift of 1.62 - 1.70x over the performance of the PS5 base&amp;nbsp;&lt;i&gt;at a core clock frequency of 2230 MHz&lt;/i&gt;. Whereas the uplift in performance of the equivalent RDNA2 part over the base was 1.42 - 1.43x, again&amp;nbsp;&lt;i&gt;at 2230 MHz!&amp;nbsp;&lt;/i&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;&lt;br /&gt;&lt;/i&gt;&lt;/div&gt;&lt;div&gt;During my prior calculations, I also worked out that 2x the raytracing performance would leave the theoretical PS5 Pro at &lt;a href="https://hole-in-my-head.blogspot.com/2024/09/how-powerful-is-ps5-pro.html"&gt;just above RX 6800-levels of RT performance&lt;/a&gt;, with 2.5x equating to approximately RX 7700 XT-levels of performance.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;I have also pointed out that&amp;nbsp;&lt;a href="https://hole-in-my-head.blogspot.com/2023/06/analyse-this-what-would-mid-gen-console.html"&gt;updating to RDNA3&lt;/a&gt;&amp;nbsp;&lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-lets-look-at-ps5-pro-leaks.html"&gt;might cause software compatibility issues&lt;/a&gt;. There are ways around that - by being able to "gate-off" various features - but things like changes in cache size are transparent to software in terms of programme knowledge, yet opaque in terms of performance optimisations. 
If you have written your programme to expect certain data movement behaviour and optimised for it, changing the cache size will mess with the expected output of the programme and potentially cause problems.&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For the time being, until ironclad evidence is presented, I have to return to my prior standpoint that the PS5 Pro is using 60 CU of RDNA2 design. There is just too much conflicting data to confidently state that RDNA3 is in use in the cache and CU design!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7fgoUTvmQgpOa10BQRMJft8ZOnWkJcc6ED6O3fwG7Uuc7HcjVvCgTfYEIX6URUOIgU4dbr0Ocscmt21e_SYCLKnVSDb0zO1BrBJICJrtNCdIdi-U4DJy6T0-h4ASD2p2azgkq8YPlWH0ADzUraaXfb1QrL9-pJPasjvbMlRe0dioFD0cEADjCqo8P4zY/s1920/Board%20side%201.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="704" data-original-width="1920" height="234" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7fgoUTvmQgpOa10BQRMJft8ZOnWkJcc6ED6O3fwG7Uuc7HcjVvCgTfYEIX6URUOIgU4dbr0Ocscmt21e_SYCLKnVSDb0zO1BrBJICJrtNCdIdi-U4DJy6T0-h4ASD2p2azgkq8YPlWH0ADzUraaXfb1QrL9-pJPasjvbMlRe0dioFD0cEADjCqo8P4zY/w640-h234/Board%20side%201.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;&lt;a href="https://youtu.be/z1VSPgqY7Yw?si=f1ZIJvEkJrt4MmIC"&gt;Video by TAG&lt;/a&gt;, &lt;a href="https://www.ifixit.com/Teardown/PlayStation+5+Teardown/138280"&gt;comparison of the original PS5 base from iFixit&lt;/a&gt;...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Increasing Memory...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The new circuitboard design is interesting. Sony have dispensed with the expensive custom-cut layout, instead opting for a cheaper rectangular cut - that will save some money and really did seem like a silly extravagance in the original design. The SSD controller and DRAM cache are still shifted to the rear of the board, &lt;a href="https://youtu.be/htdZ0h5BmXc?si=e7soM-AXn0sFDBnK"&gt;as was the case with the PS5 Slim design&lt;/a&gt;. We still have four flash memory modules as per the Slim (though probably now 512 MB each). 
However, the DRAM module doesn't appear different - though I can't read the model specs from the video - so, maybe no increase in buffer compared to the increased size of the SSD, which would be disappointing.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Judging from the layout, the extra chip in the bottom left-hand corner is the 2 GB DDR5 and appears to be linked directly to the APU - so it seems it is connected on a different bus than the GDDR6 memory modules, just like the PS4 Pro!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEfGqxxGr7X1pPYqy1ZzF076vTcfSvKqli60s1vIo1c50Sc8bajTWAMnmZGFNymgTisFLCQ-tXtjeuEztsREma0Pe6vnbTc3Vmeqo7iSOX6U23jML7oE7IedLuYF-h52ETJPBOFWYC5JnaJIT_RUgF0XmNihhqr1lXR9zZrujN7bQhfB8M1Lb2kAGuBkQ/s477/PS5%20Pro%20CFI-7022B01.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="377" data-original-width="477" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEfGqxxGr7X1pPYqy1ZzF076vTcfSvKqli60s1vIo1c50Sc8bajTWAMnmZGFNymgTisFLCQ-tXtjeuEztsREma0Pe6vnbTc3Vmeqo7iSOX6U23jML7oE7IedLuYF-h52ETJPBOFWYC5JnaJIT_RUgF0XmNihhqr1lXR9zZrujN7bQhfB8M1Lb2kAGuBkQ/s16000/PS5%20Pro%20CFI-7022B01.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;&lt;a href="https://www.powerbuy.co.th/en/product/SONY-PlayStation-5-Pro-PS5-Pro-Game-Console-2-TB-CFI-7022-B01-302130"&gt;Ironically, the 16.7 TFLOPs of compute has been online in Asia for at least some time, this information has only just broken into the Western tech sphere...&lt;/a&gt;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusions...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This was just a quick breakdown of the new (and final) information we're getting about the Playstation 5 Pro. From this information, I'm no longer confident in the prior claims that the console is RDNA3-based. All data and even my own calculations and hardware-based simulations have pointed to 60 CU RDNA2 and I had been unable to explain the discrepancy in performance based on that work compared to the claims of &lt;a href="https://www.youtube.com/live/X24BzyzQQ-8?si=02vSm1QvGkBob8Ek&amp;amp;t=206"&gt;1.45x performance uplift &lt;/a&gt;and &lt;a href="https://www.youtube.com/live/X24BzyzQQ-8?si=3fXBevjCnl3OuYZF&amp;amp;t=211"&gt;2-3x the raytracing calculation performance&lt;/a&gt;. 
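&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;(Putting that mismatch into plain numbers - this little sketch just restates the simulated uplift ranges and Sony's claim from above, nothing new:)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Compare my simulated uplift ranges against Sony's claimed 1.45x figure.
sony_claim = 1.45
simulated = {
    "RDNA2, 60 CU simulation": (1.42, 1.43),
    "RDNA3, 60 CU simulation": (1.62, 1.70),
}
for label, (low, high) in simulated.items():
    midpoint = (low + high) / 2
    print(f"{label}: {low}x to {high}x uplift, "
          f"{abs(midpoint - sony_claim):.2f}x away from the 1.45x claim")
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;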
RDNA3 just provides &lt;i&gt;too much&lt;/i&gt;&amp;nbsp;of a performance uplift compared to those definitive claims from Sony.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In this context, the extra power requirements are understandable and the extra DRAM to help with game memory allocation is apparently needed and, given that it will likely sit on its own bus, will potentially leave more bandwidth for games to utilise!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There are also some nice cost-optimisations on the Pro model internal design, including the heatsink and heatpipes - which I didn't get into in this article, but which look better at addressing the heat transfer from the APU, as well as the separate system for both VRMs, memory modules, and SSD controller and DRAM.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Finally, the seemingly random switch from the Type-A to Type-C adapter for the slowest USB port makes little sense to me and thus maybe a cost-reduction exercise...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Anyway, that's all for today. Let me know any thoughts in the comments or on Twitter!&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/11/how-powerful-is-ps5-pro-part-2.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWABWtYhXt3AIFiY4VF-BaSG-ur2Z7nRn6nRWk_yGuRpNm2vVPbjE9P7YKipAxJIwYxHF6TztAeeFGucSHSpC5SXsYZARTcPQBHOCJUI2aHcQvpvF1BcOGwxsKr0j7Q_06I9XDVgd63l6SnMCaFobYyuC5EJYZ_mNYTNnbEbz9CSspm945zktdAlSksLU/s72-w640-h360-c/Title.jpg" width="72"/><thr:total>9</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-2992352251835004666</guid><pubDate>Thu, 31 Oct 2024 18:16:00 +0000</pubDate><atom:updated>2024-10-31T18:16:30.338+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Testing GPU Variation...</title><description>&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2KhHXQpNHfjOSeOyn2AKqSore9tlbJnMGJe12MAPEUn7-iOwH2ZuUox5TL9eK3WZ-5ae1nd0M70LliccMRhZ0HlRd_1H_Ncq94bKcY4tUlpoZVGCRnOSrl0w4ku9yvdFvxljwwot4-SER0z6kToDd63Bcsz6JEU3JxnNvVbPTxx4YrmHkPKHDTA7Wuqs/s1920/Undervolt%20title.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2KhHXQpNHfjOSeOyn2AKqSore9tlbJnMGJe12MAPEUn7-iOwH2ZuUox5TL9eK3WZ-5ae1nd0M70LliccMRhZ0HlRd_1H_Ncq94bKcY4tUlpoZVGCRnOSrl0w4ku9yvdFvxljwwot4-SER0z6kToDd63Bcsz6JEU3JxnNvVbPTxx4YrmHkPKHDTA7Wuqs/w640-h360/Undervolt%20title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br 
/&gt;&lt;div style="text-align: justify;"&gt;One question I've often wondered about is the performance variability between individual GPUs we buy as consumers. Sure, individual models might have different performance/temperature profiles based on model power delivery or cooling design but the chip-to-chip variation has not really been tested - at least as far as I've been able to discover!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Since I have access to two identical model GPUs, I thought I'd take a VERY limited look into the issue...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;What's in a Chip...?&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This question has been explored from &lt;a href="https://youtu.be/os-jXiYRihI?si=qGJme9V_SSiOJKCD&amp;amp;t=577"&gt;the CPU-side&lt;/a&gt; &lt;a href="https://youtu.be/dGbW7orZS-A?si=EYgICQp2LHQZ2Jrn"&gt;of the equation&lt;/a&gt; &lt;a href="https://youtu.be/PUeZQ3pky-w?si=8uB6qhI_WprUkq-K&amp;amp;t=1740"&gt;multiple times&lt;/a&gt;. The upshot of all those tests was that there are small differences in unit to unit performance but the overall performance delta is pretty small (on the order of around 1-2%). All of those tests also had small sample sizes, with the largest being 22 units of the same product but, for a CPU, the system around it* can have a larger effect than the inter-silicon differences.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;*Motherboard design, cooler/heatsink design, effective heatsink mounting, case airflow, ambient temperature, etc.&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The same logic can be applied to GPUs, as well - with many of those peripheral systemic effects applying - the only difference is that GPUs come as an assembly. So, in theory, inter-unit variation within the same product should be low.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Why hasn't this sort of testing been performed on GPUs? Well, unfortunately, in general they cost a lot more than CPUs* and getting ahold of mulitple of the same SKU is difficult**. I'm a little lucky in that back in 2020, I grabbed an RTX 3070 for myself and my dad when a batch became available as we both needed a GPU. Amazingly, this batch wasn't bought up by an army of bots within seconds of being posted, but I did pay over the odds for them, so... 
Yay!?&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Anyway, I was able to actually play games (including Cyberpunk 2077), so that was a decent Christmas present to myself...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;*You can buy a mid-range CPU for $200 - 300 but a mid-range GPU is typically $500 - 700, nowadays...&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;**&lt;a href="https://youtu.be/-AJhJKSx_70?si=BFz_PW76E8R_mHJY&amp;amp;t=59"&gt;Unless you happen to be a cryptocurrency miner&lt;/a&gt;...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The one area where there appears to be active community* research and investigation is in &lt;a href="https://arxiv.org/abs/2208.11035"&gt;the super computer&lt;/a&gt;&amp;nbsp;and &lt;a href="https://doi.org/10.1145/2807591.2807653"&gt;large scale&lt;/a&gt; &lt;a href="https://ieeexplore.ieee.org/abstract/document/8341981"&gt;data management&lt;/a&gt; arenas. This community is heavily reliant on the consistent long-term performance and low short-term variability of individual units in their arrays and, so, it's only logical that they would have a focus on these effects.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*As opposed to the manufacturer...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, saying that, their conclusions are more designed around mitigating problems in order to maintain the uptime and output of these "big iron" devices, than identifying actual individual unit variations - though one should be able to tease out that information from their underlying data.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Reading through the aforementioned papers, you can see the broad conclusions are:&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ol&gt;&lt;li&gt;Cooling solutions are important for consistent temperature and frequency operation.&lt;/li&gt;&lt;li&gt;Scheduled maintenance is important to keep the GPUs clean and TIM refreshed.&lt;/li&gt;&lt;li&gt;&lt;a href="https://www.mdpi.com/2079-9292/11/9/1420"&gt;Power-off cycles, or low frequency operation in lull-times&lt;/a&gt; is important to avoid &lt;a href="https://en.wikipedia.org/wiki/Negative-bias_temperature_instability#:~:text=Negative%2Dbias%20temperature%20instability%20(NBTI,power%2Dlaw%20dependence%20on%20time."&gt;NBTI&lt;/a&gt; and PBTI.&lt;/li&gt;&lt;/ol&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As such, there isn't much data available for us to look at inter-unit performance variability.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl33e9WljeejZqKDSGvWuzJ9F-o2YNus9cmEBxju68JWAYk6p3YD2DZEzimEqluq_UZGwBrk2BpD7bpPyGP56dHMJqxikmvIFFYG1J-dXrzdNSiZkYtyIs1OnBmGhvczZ4HgEcW4fqjh7p1qIuV6ZPbT71ahuw8vy8BpjM4OjTjVRt0reyUGYQF8NQ1dQ/s681/Overall_NTP.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="401" data-original-width="681" height="376" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl33e9WljeejZqKDSGvWuzJ9F-o2YNus9cmEBxju68JWAYk6p3YD2DZEzimEqluq_UZGwBrk2BpD7bpPyGP56dHMJqxikmvIFFYG1J-dXrzdNSiZkYtyIs1OnBmGhvczZ4HgEcW4fqjh7p1qIuV6ZPbT71ahuw8vy8BpjM4OjTjVRt0reyUGYQF8NQ1dQ/w640-h376/Overall_NTP.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Same system, same drivers, same day... except for the NTP results.&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Interesting results...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Maybe I'm burying the lede there with the title but I found the results I obtained are really interesting! Aside from the fact that all results are within 5 % of each other (not surprising), having &lt;i&gt;some&lt;/i&gt;&amp;nbsp;tests perform better, whilst others worse was surprising for me.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The test system was as follows:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;i5-12400&lt;/li&gt;&lt;li&gt;Gigabyte B760 Gaming X AX&lt;/li&gt;&lt;li&gt;32 GB DDR5 6400&lt;/li&gt;&lt;li&gt;Zotac Gaming RTX 3070 Twin Edge OC (x2)&lt;/li&gt;&lt;li&gt;Windows 10 (Latest version)&lt;/li&gt;&lt;li&gt;Geforce driver 551.52*&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;*Yes, this testing was performed a while ago... Unfortunately, I'm a busy person!&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You see, the thing here is that these two cards are not exactly identical. No, one has spent four years in a warm country and the other has spent that time in the cold northern parts of the UK. Granted, not all of that time has been spent powering through graphically intensive games, but it surely plays a part. 
This is something that I think explains these results because when I look at these performance numbers and then take the data for GPU temperature which I have available from this testing we see a &lt;i&gt;slight&lt;/i&gt;&amp;nbsp;difference...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicxkmzgtgTYLcRMxssLg9fEpggIm82pNWT09UysUJL4H3xjI0KwqyGNTQus_CEuwC58SLGOlqJv6UBhvXIXt59tLoAkRMxu4HG7M7-fe5nobac3NQRIQcGbL3o3t6XptuRuWT2woaFOEF9-D3yec5ihKQo6bG8F_Tf1o23vffiVjN8w9ZxO6Lu0CuYEdM/s441/Temperature_table.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="197" data-original-width="441" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicxkmzgtgTYLcRMxssLg9fEpggIm82pNWT09UysUJL4H3xjI0KwqyGNTQus_CEuwC58SLGOlqJv6UBhvXIXt59tLoAkRMxu4HG7M7-fe5nobac3NQRIQcGbL3o3t6XptuRuWT2woaFOEF9-D3yec5ihKQo6bG8F_Tf1o23vffiVjN8w9ZxO6Lu0CuYEdM/s16000/Temperature_table.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;GPU#1 is consistently hotter than GPU#2...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Yeah, that's not a very small difference! On average GPU#1 (&lt;i&gt;my&lt;/i&gt;&amp;nbsp;GPU) was, on average 10 °C hotter in the same workloads compared to GPU #2 - though it really depended on the workload. For instance, Alan Wake 2 showed zero difference (despite it being a heavy GPU-driven rendered game) whereas Hogwart's had a 16 °C delta between the two cards!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If we look at the relationship between the delta in temperature and delta in performance we get this nice little graph where we see that at increased temperatures, we tend to get those slight dips in gaming performance. Now, that's not an unknown phenomenon for silicon-based computing. Hell, the logic applies to all types of computing - just not necessarily the working range. But what I decided to do was to take a look at the thermal paste/pad on GPU#1 and see what may be causing the issue.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unfortunately, in my excitement/nervousness about dismantling the card (which was incredibly easy to do compared to other cards I've dismantled previously) I forgot to take a picture. 
However, the machine-applied thermal paste had mostly migrated off of the GPU core, resulting in patchy or thinner coverage of the thermal compound.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, I cleaned up the core and applied fresh thermal paste and retested - this is what I am calling the "NTP" result (New Thermal Paste).&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcwzIVl0GxSS-YH3UX3X4mqZ877jzkGyzLAQcpSE4z51xiaetgvxUNgxvHvu4_X3zxCxsiUBRhgDcq_ZhoiyowkdDQDRvUB8ZBY8f5K6VGDboTquSyJ36kQCnQOLtZPCYvABQFjoUhfNsdM2F922TG6nvYtMHniD4H4_ge4oilg9W_4kU0Qg7AsvRzFlY/s681/Correlation_performance.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="405" data-original-width="681" height="380" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcwzIVl0GxSS-YH3UX3X4mqZ877jzkGyzLAQcpSE4z51xiaetgvxUNgxvHvu4_X3zxCxsiUBRhgDcq_ZhoiyowkdDQDRvUB8ZBY8f5K6VGDboTquSyJ36kQCnQOLtZPCYvABQFjoUhfNsdM2F922TG6nvYtMHniD4H4_ge4oilg9W_4kU0Qg7AsvRzFlY/w640-h380/Correlation_performance.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Hotter bad, colder good! ...Who knew?!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We can see a substantial reduction in temperature in all the tests and even slightly below that of GPU#2, which must also have a little of the same reduction in thermal transfer efficiency, just not as much. With the NTP applied, the spread of temperatures decreases by a large margin and the results cluster more closely around those of GPU#2 - but the results themselves still have some variation.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's clear to me that the thermal paste I applied is doing a better job than the cheap stuff they put in the GPU because I immediately noticed that the fan noise was &lt;i&gt;significantly&lt;/i&gt;&amp;nbsp;reduced during the testing and even when I first obtained the card back in 2020, it was quite loud - the only reason I wasn't very happy with the model despite liking the form factor. Now it purrs like a kitten!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, despite the obvious acoustic improvements, and the odd game like Hogwart's being a bit slower (5% is only 3 fps in this example!) performance is effectively the same.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Even though GPU#1 was around 15 °C hotter before the repaste, the performance was basically unaffected. 
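&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;(If you'd rather put a number on that "delta in temperature versus delta in performance" relationship than eyeball the scatter plot, a simple correlation coefficient is enough. This is only a sketch - the per-game deltas below are placeholder values, not my measurements:)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Correlation between per-game temperature delta and performance delta.
def pearson(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

temp_delta_c  = [0, 5, 8, 12, 16]              # GPU#1 minus GPU#2 (placeholders)
fps_delta_pct = [0.5, 0.0, -1.0, -2.0, -3.5]   # GPU#1 versus GPU#2 (placeholders)
print(round(pearson(temp_delta_c, fps_delta_pct), 2))   # nearer -1 means the hotter card is slower
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;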
So, yes, my sample size is statistically insignificant but I can say that it's more likely that the performance of your GPU cooling solution is going to have a stronger effect (though likely still negligible) on your GPU's gaming performance...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2JxdY_Wv28c33KMjJ4UzDEWm-EsGcQGodVQLSAPxOJZu1dKw85xFmvgUMzf8yGGEYzbIUYxyRi0Vwz_z5nzqOtT8X83zlOpQUrXPJ1hpSdqnh8xorgD74WxKtv2GhW03GvfKkJUTz8aIA60B_CR9WQ1ISXijWHv6xs4gDqwMdSboJBY4FXGfE4G0u-2Q/s491/Overall_table_repaste.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="405" data-original-width="491" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2JxdY_Wv28c33KMjJ4UzDEWm-EsGcQGodVQLSAPxOJZu1dKw85xFmvgUMzf8yGGEYzbIUYxyRi0Vwz_z5nzqOtT8X83zlOpQUrXPJ1hpSdqnh8xorgD74WxKtv2GhW03GvfKkJUTz8aIA60B_CR9WQ1ISXijWHv6xs4gDqwMdSboJBY4FXGfE4G0u-2Q/s16000/Overall_table_repaste.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Looking at the results, despite the reduced temperatures, performance is much of a muchness...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;And that's it and that's all... Nothing complicated today, just a little look at two GPUs and how they performed relative to each other. I can observe no difference in silicon quality between these two parts and, even when taking into account a relatively large temperature delta, performance is not negatively affected in any appreciable manner.&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/10/testing-gpu-variation.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2KhHXQpNHfjOSeOyn2AKqSore9tlbJnMGJe12MAPEUn7-iOwH2ZuUox5TL9eK3WZ-5ae1nd0M70LliccMRhZ0HlRd_1H_Ncq94bKcY4tUlpoZVGCRnOSrl0w4ku9yvdFvxljwwot4-SER0z6kToDd63Bcsz6JEU3JxnNvVbPTxx4YrmHkPKHDTA7Wuqs/s72-w640-h360-c/Undervolt%20title.jpg" width="72"/><thr:total>1</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-8323284258896096184</guid><pubDate>Fri, 25 Oct 2024 18:25:00 +0000</pubDate><atom:updated>2024-10-31T04:47:36.524+00:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>The Performance Uplift of RDNA 3... 
(Part 2)</title><description>&lt;div style="text-align: left;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvkMcPI8DNizYilMgIo5oUGopOkIFjsgfVm6PvJ3Pg_WaIkGkzXS08CqBlrIqtB33A8O8BcBq9dCUAb6efqe7_jfDeT-LMa3GlffzFI0BsognfNW6JHt-Zyj2_FisDn13mg28jD6AfL8Px65TBOK_Zlw5P-PvqbFy0iHY6z4-Yce6G1_RWhF_CVCT5vkM/s1920/Title.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvkMcPI8DNizYilMgIo5oUGopOkIFjsgfVm6PvJ3Pg_WaIkGkzXS08CqBlrIqtB33A8O8BcBq9dCUAb6efqe7_jfDeT-LMa3GlffzFI0BsognfNW6JHt-Zyj2_FisDn13mg28jD6AfL8Px65TBOK_Zlw5P-PvqbFy0iHY6z4-Yce6G1_RWhF_CVCT5vkM/w640-h360/Title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Ever since I performed the &lt;a href="https://hole-in-my-head.blogspot.com/2023/11/the-performance-uplift-of-rdna-3-over.html"&gt;comparative analysis of the 60 compute unit (CU) RX 7800 XT with the RX 6800&lt;/a&gt; one of the background thoughts running around my head has been the question on whether Navi 33 - the monolithic version of RDNA 3 -&amp;nbsp; does or doesn't have an uplift over the equivalent RDNA2 design.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Since I purchased one each of the RX 6650 XT and RX 7600 XT for the PS5/Pro simulations, I now have the tools available to do the full analysis of RDNA 2 versus RDNA 3! So, let's jump in...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Reminiscing...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Last time, I mentioned that &lt;a href="https://www.techspot.com/review/2686-amd-radeon-7600/"&gt;reviewers have seen&lt;/a&gt; &lt;a href="https://www.tomshardware.com/reviews/amd-radeon-rx-7600-review/4"&gt;essentially no&lt;/a&gt; &lt;a href="https://www.techpowerup.com/review/amd-radeon-rx-7600/32.html"&gt;performance uplift&lt;/a&gt;&amp;nbsp;&lt;a href="https://www.computerbase.de/2023-05/amd-radeon-rx-7600-test/2/"&gt;in their testing&lt;/a&gt; between the RX 6650 XT and RX 7600 (on the order of around 4% or approx. 3-5 fps). I spoke about my interpretation of the results I had obtained - trying to square them with the reporting performed by various technical outlets on the lacklustre RX 7600 performance relative to the RX 6650 XT - and came to the conclusion that it seemed that the front-end clock scaling was giving a big boost on the Navi 31 and 32 parts, and causing a lack of performance being absent on the Navi 33 parts...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;"Additionally, the RX 7600 appears hamstrung by the lack of increased front-end clock - perhaps due to power considerations? 
- and it is the choice to decouple front-end and shader clocks that seems to me to be the biggest contributor of RDNA 3's architectural uplift as it is this aspect which appears to allow the other architectural improvements to low-level caches and FP32 throughput to really shine."&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, later on, in the comments, I received some challenging thoughts from user kkeubuni speaking about the testing from ComputerBase, who claimed: "&lt;i&gt;&lt;b&gt;RDNA 3 is simply not faster with the same number of execution units without a clock jump&lt;/b&gt;&lt;/i&gt;" and "&lt;i&gt;&lt;b&gt;Depending on the rounding, RDNA 3 on the Radeon RX 7600 is between 7 and 8 percentage points faster in the course than RDNA 2 on Navi 23&lt;/b&gt;&lt;/i&gt;".&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I replied as follows:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;From my perspective, the 6650 XT is an overclocked 6600 XT - with the primary difference being memory clock and TDP. Looking at the steady-state core clock frequencies of reviewed cards over on TechPowerUp, the 6600 XT was already around 2650 MHz using the partner cards. In comparison, the singular tested 6650 XT reached 2702 MHz - a 25 - 50 MHz increase.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;We would not expect any performance increase in this scenario but we did, in fact, see a 16-20% performance increase between the 6600 XT and 6650XT - a fact acknowledged by ComputerBase (page 2).&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;From what I can see, this performance increase was solely due to the increase in memory frequency and the increase in board power (160 W vs 176 W). We also observe this in my testing above with the 7800XT where I increased the power by 15% and saw an associated 6% uplift in Metro Exodus.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;The conclusion of this information (for me) is that the 6600 XT/6650 XT core is working at the peak of its frequency curve and will not be efficient as it is pushed further. 
The other conclusion is that the card is likely power-limited in both configurations.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;This means that it should, in my opinion, be tested below this core clock frequency to ensure that power draw is not the limiting factor as we are aware that even if the core frequency is reporting the same value, power limitations will reduce performance.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;****&lt;/b&gt;&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;So, given these data, I do not believe that the RX 7600 has an 8 - 9 % performance gap, clock for clock, over the RX 6650 XT. I think that testing, similar to what I have performed here, for the reasons outlined above, needs to be done to definitely show there is a substantial uplift...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, that's why we're here today: to show, using my test methodology, how Navi 33 (RX 7600 XT) holds up against Navi 23 (RX 6650 XT), and to test my hypothesis that the increased front-end clockspeed is what can be credited with the better performance I observed in Navi 32 over Navi 22...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Jumping Straight In...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As with last time, I'll start by exploring the limits of the cards in power and memory scaling - to see what is dictating the performance. 
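&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Before getting to the test setup, here's a minimal sketch - with completely made-up numbers - of what I mean by a "clock-for-clock" comparison: lock both cards to the same core frequency, well away from their power limits, and only then compare frame rates. Stock-versus-stock results mix the architecture together with power limits, memory clocks and binning.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Minimal sketch of the "clock-for-clock" idea - every number here is invented.

def uplift_pct(new_fps, old_fps):
    """Relative uplift of new_fps over old_fps, in percent."""
    return (new_fps / old_fps - 1.0) * 100.0

# Hypothetical averages at stock settings (different power/clock behaviour)...
stock  = {"RX 6650 XT": 62.0, "RX 7600 XT": 70.0}
# ...and at a fixed ~2050 MHz core clock with raised power limits.
locked = {"RX 6650 XT": 58.0, "RX 7600 XT": 60.0}

print("Stock uplift:  %.1f %%" % uplift_pct(stock["RX 7600 XT"], stock["RX 6650 XT"]))
print("Locked uplift: %.1f %%" % uplift_pct(locked["RX 7600 XT"], locked["RX 6650 XT"]))
# Only the second number says anything about the architecture itself; the
# first also folds in power limits, memory clocks and binning.
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;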
The test system remains the same as last time, running on the latest build of Windows 10, but the Adrenaline version has been updated to 24.9.1.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;Intel i5-12400&lt;/li&gt;&lt;li&gt;Gigabyte B760 Gaming X AX&lt;/li&gt;&lt;li&gt;Corsair Vengeance DDR5 2x16 GB 6400&lt;/li&gt;&lt;li&gt;Sapphite Pulse RX 7800 XT&lt;/li&gt;&lt;li&gt;XFX Speedster SWFT 319 RX 6800&lt;/li&gt;&lt;li&gt;Sapphire Pulse RX 7600 XT&lt;/li&gt;&lt;li&gt;Gigabyte RX 6650 XT&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;One thing I will note before starting is that the RX 7600 XT released after my prior blogpost and we have data showing &lt;a href="https://www.computerbase.de/2024-01/amd-radeon-rx-7600-xt-review-test/2/#abschnitt_performancerating_mit_und_ohne_rt_in_1920__1080"&gt;a more substantial improvement&lt;/a&gt;* for that card over the RX 7600/ 6650 XT but I have not seen anyone actually discuss the reasons why. This card increased the VRAM amount whilst keeping the memory bus the same width, and also increased the board power from 165 W to 190 W. This was essentially addressing the same point/problem that I noted above about the RX 6600 - the card was primairly power limited.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*ComputerBase reports 17% raster and 37% for ray tracing over the RX 6650 XT...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9kbRcrDHRfRjIgl4z_6a1iZ_U9CwIYlYO87FialiXvLyO7LwqVI_AoI1qvd7fFYXxnDOscMGUNvr3c3i3hoo358O2DfNdi2xPXgT1rIgJl2n904xPuRootNofOnikqTkWNz5sl0vw9iJvZnLcY69Ynk7g3OYCD5JsHJnuk-4-0BJZa2Qo0HoWPpLx7Oc/s689/Power%20scaling.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="357" data-original-width="689" height="332" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9kbRcrDHRfRjIgl4z_6a1iZ_U9CwIYlYO87FialiXvLyO7LwqVI_AoI1qvd7fFYXxnDOscMGUNvr3c3i3hoo358O2DfNdi2xPXgT1rIgJl2n904xPuRootNofOnikqTkWNz5sl0vw9iJvZnLcY69Ynk7g3OYCD5JsHJnuk-4-0BJZa2Qo0HoWPpLx7Oc/w640-h332/Power%20scaling.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The effect of raising the board power limit whilst keeping the core and memory at stock settings...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As we can see above, simply raising the power limit in Adrenaline on both RDNA 3 cards saw a small but essentially linear increase in performance, though the RX 7600 XT had a shallower gradient than the RX 7800 XT, meaning that it was nearer the performance limit for board power than the Navi 32 card. Both cards saw around a 4% increase - corresponding to a measly 1.5 and 3 fps increase, respectively. 
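&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;To make the "gradient" comment a little more concrete, this is roughly how I read the power-scaling chart above: fit a simple straight line of average fps against the power-limit offset and compare the slopes. The data points in the snippet below are invented for illustration - they are not my measured values.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Rough sketch of the power-scaling reading - the data points are invented.

def slope(points):
    """Least-squares slope: extra fps per +1% of power limit."""
    n = float(len(points))
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    num = sum((p[0] - mean_x) * (p[1] - mean_y) for p in points)
    den = sum((p[0] - mean_x) ** 2 for p in points)
    return num / den

# (power-limit offset in %, average fps) - hypothetical points for illustration
rx7600xt = [(0, 38.0), (5, 38.6), (10, 39.1), (15, 39.5)]
rx7800xt = [(0, 72.0), (5, 73.2), (10, 74.3), (15, 75.1)]

print("RX 7600 XT: %.2f fps per +1%% power" % slope(rx7600xt))
print("RX 7800 XT: %.2f fps per +1%% power" % slope(rx7800xt))
# A shallower slope, relative to the card's own frame rate, suggests it is
# already sitting near the flat part of its voltage/frequency curve at stock.
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;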
Still, that's a ~4% increase on a card that was already 17% faster than the RX 6650 XT, pushing it to almost 18% in total...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Looking at the scaling of memory frequency, we see a similar situation playing out - neither RDNA 2 card really performs better - in fact, the RX 6800 actually displays a degradation in performance. The 7800 XT also doesn't improve with increased memory speed but the 7600 XT gets another 1-2 fps (or +2%) on the stock frequencies.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLw6ohiTbTihIzW9JJsMBJg0fJWZL7MffyNOGBU8CuvlsUQ11Osaq2y7v-p9Emk1Xs4EBg5q9VnK8-GpNhDmApGBWMLkjku_S5R3wQ-NqryIaRU4A4q2c-l2yB9kbmYyfn_hg5TSCKZ91jPXobBRkzTOLk0gPYzMNTsrOZrUtvIGAUSGHotMx_nr7d2QY/s1377/Power%20scaling_6650_7600%20XT.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="355" data-original-width="1377" height="164" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLw6ohiTbTihIzW9JJsMBJg0fJWZL7MffyNOGBU8CuvlsUQ11Osaq2y7v-p9Emk1Xs4EBg5q9VnK8-GpNhDmApGBWMLkjku_S5R3wQ-NqryIaRU4A4q2c-l2yB9kbmYyfn_hg5TSCKZ91jPXobBRkzTOLk0gPYzMNTsrOZrUtvIGAUSGHotMx_nr7d2QY/w640-h164/Power%20scaling_6650_7600%20XT.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RX 7600 XT is still power-limited, despite its base +15% increase over the RX 7600...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOPI4-Ef5Zzm_SzXlcReNcrbn5Qd8NLI3uKToEHCMH3X3MbMApuKFvmg8abj06hTbPz-yBWS1lRoWu2Bz3ILb0lejhZlCLPEgGk1hkBtvxpzJGznzU-St1nOO-usRdoO4MRQfItgnuR0ddnekOGDqwfbAeRpfZThMsLkuGeNb-Ab9YyAkn1ZnmgxOZWS0/s1379/Power%20scaling_6800_7800%20XT.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="355" data-original-width="1379" height="164" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOPI4-Ef5Zzm_SzXlcReNcrbn5Qd8NLI3uKToEHCMH3X3MbMApuKFvmg8abj06hTbPz-yBWS1lRoWu2Bz3ILb0lejhZlCLPEgGk1hkBtvxpzJGznzU-St1nOO-usRdoO4MRQfItgnuR0ddnekOGDqwfbAeRpfZThMsLkuGeNb-Ab9YyAkn1ZnmgxOZWS0/w640-h164/Power%20scaling_6800_7800%20XT.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Neither the RX 6800 or 7800 XT saw any benefits from memory scaling but the latter was also power-limited at stock...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's clear that my method of comparing the architectures at a lower frequency is most likely going to be the correct way forward - given that any power limitation will be far above what is required in 
that testing regime...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0w_76om4IVbs3UpaiKM0oypOiXXOI4LAjdd3cenltPLH4DzDawbh1MFoNdajpWlpb0HW6b-n2pGh-nXJK4OVdMabm1Zv3cjzC1KnG8tid44K5GA8BvrlWUDGvlVm3ZmLk7aFMDVra-vrl_0oE1Uo77ywvJL1JIKQY7-DfoXVXW_IuYvmykClNl4eQTlw/s689/Power_at_2050%20MHz.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="355" data-original-width="689" height="330" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0w_76om4IVbs3UpaiKM0oypOiXXOI4LAjdd3cenltPLH4DzDawbh1MFoNdajpWlpb0HW6b-n2pGh-nXJK4OVdMabm1Zv3cjzC1KnG8tid44K5GA8BvrlWUDGvlVm3ZmLk7aFMDVra-vrl_0oE1Uo77ywvJL1JIKQY7-DfoXVXW_IuYvmykClNl4eQTlw/w640-h330/Power_at_2050%20MHz.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;As a sanity check: there is no performance difference at 2050 MHz when increasing power and less than 1 fps with memory scaling (not that we'll be increasing bandwidth in the following tests)...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now that we've established that the RDNA 3 cards ideally should not be tested at stock for this comparison, let's move onto the actual games whilst controlling the core frequency of each card to approximately 2050 MHz - the same value I settled on last time - with the power limits increased for all cards.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Benchmarking...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The first titles in our test suite, Avatar and Returnal, show essentially identical performance on the Navi 33 and 23 cards; the RX 7600 XT shows a slight bump in average fps in Returnal with ray tracing enabled but this 3 fps increase is not reflecting the much larger 14% increase (10 fps) observed for the higher tier cards.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This result is particularly surprising because we might expect to see an uplift from both the increased VRAM and power limits on the 7600 XT when using the ultra setting in Avatar.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Returnal, on the other hand, uses settings which shouldn't trouble the VRAM on the 6650 XT...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Such a result gives credence to the hypothesis, though it's too early to lean into a conclusion!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; 
margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgClO3EB8Y3JlnQdnYTMPOBdSOX88NSZYfs_0lhIUUnT7TtD94ZeLDWG787bB8UijOCGwU1nvjvitd6wrpa9bcpIdKKqY-BgWFzTR9ffwOu8rwN1q_ao-Xk9cceeBKs7ri1G_AW78Kn5vGZzMP9vzJS9NG-IsbtCdzZoE7aYCldd9kZQRopmUDWLVC4K0s/s1459/Avatar_Returnal.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="677" data-original-width="1459" height="296" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgClO3EB8Y3JlnQdnYTMPOBdSOX88NSZYfs_0lhIUUnT7TtD94ZeLDWG787bB8UijOCGwU1nvjvitd6wrpa9bcpIdKKqY-BgWFzTR9ffwOu8rwN1q_ao-Xk9cceeBKs7ri1G_AW78Kn5vGZzMP9vzJS9NG-IsbtCdzZoE7aYCldd9kZQRopmUDWLVC4K0s/w640-h296/Avatar_Returnal.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;These two titles show no architectural uplift for Navi 33...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Cyberpunk 2077 gives us a more nuanced result. At 1440p, with RT enabled, the RX 6650 XT struggles with its 8 GB of VRAM - as observed in the minimum fps value. Otherwise, the performance uplift observed for the 7600 XT is outdone by that of the RX 6800 to 7800 XT. There clearly is an effect but one of the problems, here, is the low absolute fps values cause issues with trying to understand the real level of performance uplift: i.e. small numbers mean a single fps has a large effect on percentage improvement.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;At this point, for this title, I would say that there is an improvement in performance but it's less than for the Navi 32 die. An effect of the increased frontend clock frequency?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9EL24wa8OHWrETHHYPmxe2BO_Mnoe8x1GUZHgJUHtoral2KnJh2pmHdCSArzPL-zPDBnVEJs8aUyIk3PKCQzPzo-Ih8Zy1PHkvkfBOBHhRVCmi6W6h3qcsvdDei__V2Lvv0spj8XhDLtwNiGehu5lZDMLqeEcevwQkM_TzHM_ratW-j9wVm5WWjAvk3c/s1461/Cyberpunk.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="679" data-original-width="1461" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9EL24wa8OHWrETHHYPmxe2BO_Mnoe8x1GUZHgJUHtoral2KnJh2pmHdCSArzPL-zPDBnVEJs8aUyIk3PKCQzPzo-Ih8Zy1PHkvkfBOBHhRVCmi6W6h3qcsvdDei__V2Lvv0spj8XhDLtwNiGehu5lZDMLqeEcevwQkM_TzHM_ratW-j9wVm5WWjAvk3c/w640-h298/Cyberpunk.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RX 6650 XT's 8 GB framebuffer displays its inadequacy in Cyberpunk at 1440p...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Both Hogwart's and Spider-man tell a similar story: gains, but not as much. The one exception is for Spider-man at 1440p with RT enabled. 
The RX 7800 XT begins to run up against a CPU bottleneck, reducing the overall performance advantage it has over the 6800. However, in this title, the 7600 XT really outshines the 6650 XT in ray tracing, coming out with a clear win (+14%). Unfortunately, this is most likely due to the 6650 XT suffering from a VRAM limitation as the game registers around 7 - 8 GB VRAM usage (per process) on the 16 GB cards at 1440p resolution.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In fact, re-testing at 1080p with ray tracing enabled, we get 81 fps vs 67 fps average for the 7600 XT and 6650 XT, respectively. This increases the gen on gen increase from +14% at 1440p to +22% at 1080p which doesn't really make much sense when the raster performances are 114 fps vs 109 fps average, respectively, which correlates to a +5% gen on gen increase at 1080p compared to a +4% increase at 1440p - a very consistent number!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0h2PHjejTYikT1ME8CQgMicjHxpRJQARhXfrLIKUq7KWmVoJgfx4viWPaPmNcpmvlny-ZtpL-vsiTVXFqwJY2GdAef9z3DqtwDV-Pk-6VoJKZ5uTFIUqgKTyAuXvAtO0duRUnIXIMeeinUQSXV02_Dks63G1KePl67nbOs44QbmOlifHWgL1eHIqTVKg/s1457/Hogwarts_Spiderman.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="673" data-original-width="1457" height="296" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0h2PHjejTYikT1ME8CQgMicjHxpRJQARhXfrLIKUq7KWmVoJgfx4viWPaPmNcpmvlny-ZtpL-vsiTVXFqwJY2GdAef9z3DqtwDV-Pk-6VoJKZ5uTFIUqgKTyAuXvAtO0duRUnIXIMeeinUQSXV02_Dks63G1KePl67nbOs44QbmOlifHWgL1eHIqTVKg/w640-h296/Hogwarts_Spiderman.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;While the frametime graph (not shown) of Spider-man did not display an outsized number of frametime spikes due to VRAM issues, the game is likely using lower-quality stand-ins during this testing&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;Delving deeper into the data monitoring, we can see that when disabling ray tracing, we immedately gain back around 1 GB of VRAM! 
This is then quickly re-filled by all the assets that the game actually wants in VRAM, though, given that we routinely see 7.3+ GB used on the 7600 XT for the process, with 10 GB total VRAM usage for the system, we know we're actually missing out on around 3 GB of data that's being shuffled back and forth to system memory through the PCIe bus when using the RX 6650 XT.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Given this additional data - we have to discount the large increase I've recorded above for the 7600 XT over the 6650 XT when using RT effects at both resolutions in Spider-man and instead focus on the raster performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9utl272an5UtEwBnasdBbwsjFJJqzEW1xmEIxapSSe6ElihSQGmfVybc10KkwWyHyD1PIYbzQZ6BhKtfKNuSDO0qUhRDL6b0EhftWq50bzCUYrjXZmuFNSZyWgkuJYiso_ELnH-WagyLsZlVR66i-sH3Q3L3ZC68gSoromCtoA9eBHFfchGw7mPB7kyQ/s671/Spider-man%20VRAM%20use%201080p.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="355" data-original-width="671" height="338" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9utl272an5UtEwBnasdBbwsjFJJqzEW1xmEIxapSSe6ElihSQGmfVybc10KkwWyHyD1PIYbzQZ6BhKtfKNuSDO0qUhRDL6b0EhftWq50bzCUYrjXZmuFNSZyWgkuJYiso_ELnH-WagyLsZlVR66i-sH3Q3L3ZC68gSoromCtoA9eBHFfchGw7mPB7kyQ/w640-h338/Spider-man%20VRAM%20use%201080p.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Switching off RT shows that the game is VRAM-starved, even at 1080p...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Hogwart's is another graphically demanding title (it's also demanding on the processor as well, but we're not hitting a bottleneck in this test for either the CPU or VRAM). 
Due to the amount of performance loss from non-RT to RT settings for the 7800 XT and RX 6800, I decided not to bother to test the Navi 23 and 33 parts - the numbers would have just been&amp;nbsp;&lt;i&gt;too low&lt;/i&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2_dKdIUZbJJuhEoXRr5ROXQf5O3kGR8F1MeAdrc5ZLSKpphkrprtsZHB0lA_g9bqAyvJgRU-IdLPpxmjhK1dBOdwoIe3s2wuSNNdwqjYnbr-8aFh_1Uot3wRnlymsC0TEAdcnIscX_8Vo-MTRlTO8B1vHv3oh7C0IZ43EHz-JH8aR4k_5pju3ebYMu60/s1457/Starfield_Alan%20Wake%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="671" data-original-width="1457" height="294" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2_dKdIUZbJJuhEoXRr5ROXQf5O3kGR8F1MeAdrc5ZLSKpphkrprtsZHB0lA_g9bqAyvJgRU-IdLPpxmjhK1dBOdwoIe3s2wuSNNdwqjYnbr-8aFh_1Uot3wRnlymsC0TEAdcnIscX_8Vo-MTRlTO8B1vHv3oh7C0IZ43EHz-JH8aR4k_5pju3ebYMu60/w640-h294/Starfield_Alan%20Wake%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;It's hard to squre the Starfield performance at 1440p, maybe someone in the comments can...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Performance in Starfield is very similar for both sets of GPU comparisons. At 1440p, both RDNA 3 cards perform a lot better than their RDNA 2 counterparts at 1080p. The reason is not really clear to me. It could be the extra FP32 throughput in the improved Compute Unit design, though.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On the contrary, Alan Wake 2 tells a similar story to that of Spider-man - a VRAM limit and PCIe traffic when testing at 1080p, High settings creates the impression that the 7600 XT is relatively more performant than the 7800 XT, gen-on-gen. 
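&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As an aside, this is the rough sanity check I apply before trusting any result from the 8 GB card: take the per-process VRAM figure logged on the 16 GB card (like the ~7.3 GB seen in Spider-man above) and ask whether it would plausibly fit in the smaller framebuffer once some reserve for the OS and other processes is set aside. The reserve value and the second example figure below are assumptions, not measurements.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Sanity check before trusting an 8 GB result - reserve figure is an assumption.

def fits_in_framebuffer(process_vram_gb, card_vram_gb, reserve_gb=1.0):
    """True if the observed working set should fit without spilling over PCIe."""
    return process_vram_gb &amp;lt;= card_vram_gb - reserve_gb

print(fits_in_framebuffer(7.3, 8.0))   # False - discount the 8 GB RT result
print(fits_in_framebuffer(5.8, 8.0))   # True  - the raster result looks usable
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;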
However, if we drop the quality down to the Low setting, we see the gap between N33 and N23 narrow to match the higher tier cards at around +18%.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD9ZadSMSv1hsPBw3RJZDYt97BSmllQxol9q7P0OfV7mpp7gxqw9x3fwsdkbkLsZvHkfm2bzb30_aPu0oNPfksRV-g7QdMIVJRiFlnxNMq2eo9YcWl-eBElP1JL7446TNnwOVGkF2GqVpd0L06D9RPjEf1TeKcRW9si6t9aG_MCplOVkW26XakmY1kuy8/s631/Alan%20Wake%202_low.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="587" data-original-width="631" height="373" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD9ZadSMSv1hsPBw3RJZDYt97BSmllQxol9q7P0OfV7mpp7gxqw9x3fwsdkbkLsZvHkfm2bzb30_aPu0oNPfksRV-g7QdMIVJRiFlnxNMq2eo9YcWl-eBElP1JL7446TNnwOVGkF2GqVpd0L06D9RPjEf1TeKcRW9si6t9aG_MCplOVkW26XakmY1kuy8/w400-h373/Alan%20Wake%202_low.PNG" width="400" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Low settings are "fairer" on the 8 GB RX 6650 XT...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Last, but not least, Metro Exodus takes a break from helping to monitor power and memory frequency scaling to be used as an actual benchmark.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In this ray tracing-focussed title, we observe &lt;i&gt;very&lt;/i&gt;&amp;nbsp;strong scaling based both on number of CUs as well as architectural improvements. In fact, the 7600 XT outperforms the relative increase in performance compared to the 7800 XT - though there is a very real effect from the small numbers involved, I think it's not enough to explain the actual relative results. 
N33 appears stronger vs 23 than N32 vs N22...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I have also checked the VRAM usage for the process as I did with Spider-man but Metro: Exodus is a relatively lightweight application and, even if we decided (for whatever reason) that the "Extreme" setting was running over the framebuffer limit for the 6650 XT, it most certainly isn't for the "Ultra" setting - which not only drops the process VRAM amount but also the total used for the card!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Since we see a similar uplift at the Ultra setting, this appears to be a title for which the Navi 33 performs as well, if not better than Navi 32.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigUjWVacRtX4098xbo7oKxe1p8hbNd4qa6SNbxgSdj4Frs0oEfSXhTOFJVU_MXvcXhhEQeA_iDlB9PMdlitY4SBbog4CVs0s3NsbK0dNU6MEEFUt5040qDRBOG9fsbdpot5eCwznv50BVcrBFKpHwbrlTTEcQM6nKU-xWwyDKmJ_6fY1_SQ1cuRkHy0lc/s725/Metro%20exodus.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="675" data-original-width="725" height="596" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigUjWVacRtX4098xbo7oKxe1p8hbNd4qa6SNbxgSdj4Frs0oEfSXhTOFJVU_MXvcXhhEQeA_iDlB9PMdlitY4SBbog4CVs0s3NsbK0dNU6MEEFUt5040qDRBOG9fsbdpot5eCwznv50BVcrBFKpHwbrlTTEcQM6nKU-xWwyDKmJ_6fY1_SQ1cuRkHy0lc/w640-h596/Metro%20exodus.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Unfortunately the benchmark at high settings broke for the 7600 XT but the other results are valid...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;So wrapping up, in game applications we have some titles where there is no performance improvement for Navi 33, some where there is, but less than observed for Navi 32, and one title (Metro) where the improvement is larger than Navi 32.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Architectural Measurements...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Once again, I'll be taking advantage of the wonderful tests &lt;a href="https://nemez.net/projects/gpuperftests/"&gt;authored by Nemez&lt;/a&gt;&amp;nbsp;to perform these microbenchmarks.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7yJp1I4H0MzjduRro0lRlpoIyxse86sy74BerP1cuLUB2JYtuP0JQslhDK5LnOQ9Tfc1reNcf0oO0M7SiUDw8YHgMFT4zGjISMrwIvHh7gQz6E-TeFCZW12Sket1A6cNXQP4zePWNRt6te7nUD-1pmACGAVY6tcLsEeixUL7AwTn5GAMrby1eMXNdQso/s1145/Nemez_1.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="685" data-original-width="1145" height="382" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7yJp1I4H0MzjduRro0lRlpoIyxse86sy74BerP1cuLUB2JYtuP0JQslhDK5LnOQ9Tfc1reNcf0oO0M7SiUDw8YHgMFT4zGjISMrwIvHh7gQz6E-TeFCZW12Sket1A6cNXQP4zePWNRt6te7nUD-1pmACGAVY6tcLsEeixUL7AwTn5GAMrby1eMXNdQso/w640-h382/Nemez_1.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;These tests measure the competencies of the core architecture, so we'd expect similar results gen on gen...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Similar to the7800 XT, the 7600 XT also displays the same competencies over its generational predecessor. The 6650 XT wins-out in FP16 Addition and Subtraction, just as the RX 6800 did. There really are no surprises, here, and we shouldn't expect any! These tests do not stress anything in the design of the chip, only assess the CU core's ability to perform work.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The RDNA2 parts consistently perform strongly in Division operations, with the only &lt;i&gt;very&lt;/i&gt;&amp;nbsp;slight difference coming out in the Inverse Square Root test - with the RX 6650 XT drawing equal with the RX 7600 XT, whereas the RX 6800 won-out against the RX 7800 XT in FP16...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF3SyobjQbVf5ztoqlmxhG0jS6b7l8GGNbjvxRYI9Y5f4RA4GHHQd3s7tKmgJ-ESXERF7atuyOJhNKnv1H19uzv1lxjhn-sXQG72jQtq-ur6dG4Gr_TL3xGCu8oq60yaVAJvHPYp1SLFpLVUNbYrJaks2YmjZNf3dDCQIODlZauTZyqpnmAlmSsCP8YJs/s1145/Nemez_2.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="685" data-original-width="1145" height="382" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF3SyobjQbVf5ztoqlmxhG0jS6b7l8GGNbjvxRYI9Y5f4RA4GHHQd3s7tKmgJ-ESXERF7atuyOJhNKnv1H19uzv1lxjhn-sXQG72jQtq-ur6dG4Gr_TL3xGCu8oq60yaVAJvHPYp1SLFpLVUNbYrJaks2YmjZNf3dDCQIODlZauTZyqpnmAlmSsCP8YJs/w640-h382/Nemez_2.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Finally! A difference!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Cache and memory bandwidths have the same trends as for the higher tier parts. 
The larger L0 cache, and L1 cache in the RDNA3 architecture show their hands in both the N32 and N33 parts, and this time, since both N23 and N33 have the same L2 and L3 cache size.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What's interesting, though, is that contrary to N32 vs N22, past 6 MiB sizes, up to 64 MiB, there is nowhere near as large of a split in performance between N23 and N33. Despite having a smaller L3 cache (which is nowhere near being saturated at this point in the test) the bandwidth available to the RX 6800 trails off at around 8 MiB, which corresponds to 2 MiB into the L3. Meanwhile, the greater bandwidth of the RX 7800 XT* chugs through until it suddenly hits a wall at 64 MiB - i.e. the L3 cache size. Whereas the 128 MiB of L3 on the RX 6800 don't manage as well when even partially saturated. AMD really made a big improvement in their interconnect technology and the "fan-out links"...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*AMD likes to espouse the "&lt;a href="https://www.amd.com/en/products/graphics/desktops/radeon/7000-series/amd-radeon-rx-7800-xt.html"&gt;Up to 2708 GB/s&lt;/a&gt;" of the improved Infinity cache...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Thus, the monolithic N33 suffers the same lack of bandwidth with higher data requirements as the prior generation.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjELBhJTecHx-b_jPSnfj9xBka4qw-sl7eR-ZopEysipobFyfG9F0US4BFtL1UqOk7MZodSf_R2mHPQho-DRHQAjiJZdYFiq6M4h5LF6UG7UMcEQDZDb1VFXbdZIE4V-8316H7YtbJI6f1lC5r7ecwV4i7FZKall_XXlDyp-2vXmfZyg6jp5JljqoG5c_M/s1141/Nemez_3.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="581" data-original-width="1141" height="326" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjELBhJTecHx-b_jPSnfj9xBka4qw-sl7eR-ZopEysipobFyfG9F0US4BFtL1UqOk7MZodSf_R2mHPQho-DRHQAjiJZdYFiq6M4h5LF6UG7UMcEQDZDb1VFXbdZIE4V-8316H7YtbJI6f1lC5r7ecwV4i7FZKall_XXlDyp-2vXmfZyg6jp5JljqoG5c_M/w640-h326/Nemez_3.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The larger L0 and L1 sizes and Infinity Fanout show their mettle in this chart...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Moving onto the operating frequencies, we reconfirm that the monolithic N33 does not work in the same manner as N32 - the front-end clock frequency appears to still be decoupled from the core frequency* but it actually has a slight deficit! We're looking at a ~1% drop in front-end (2050 MHz) versus the core (2080 MHz)... 
This is in-line with the results obtained by Chips and Cheese in their testing of Navi 32 vs Navi 33.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*It is at least measured separately - whether this is a real difference between RDNA2 and 3 or not is another question!&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;With my limited knowledge of GPU design and dependencies, I do have to wonder whether the increased front-end clocks are actually tied to that Infinity Fanout interconnect? Is it just one of the supporting functions which needs to exist to pump the data back and forth?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoqX2prq9FjcIJPtNKCCreri0NM-V4lAq2n1pzmYd2jV9bA66zosp6wTMcWmd_K_dOpRJIzFpoqgBvzrurjJXLJcWy-wJQkbSIlj4SvQTFSAhUwYBWoEP8Gr5wSFCXCbRG_QEcHz0oc-s2EwLLh1RveLZqoTJiakF77d8X2LC1zX2yPzsvVs_KbaC5ivA/s1123/front-end_core%20frequency.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="341" data-original-width="1123" height="194" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoqX2prq9FjcIJPtNKCCreri0NM-V4lAq2n1pzmYd2jV9bA66zosp6wTMcWmd_K_dOpRJIzFpoqgBvzrurjJXLJcWy-wJQkbSIlj4SvQTFSAhUwYBWoEP8Gr5wSFCXCbRG_QEcHz0oc-s2EwLLh1RveLZqoTJiakF77d8X2LC1zX2yPzsvVs_KbaC5ivA/w640-h194/front-end_core%20frequency.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Core clocks for N23 vs Front-end/Core on N33... Taken in Metro Exodus Enhanced edition...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi35MBJWO1HZEAsWhMxBBGtTMXm92ApmygQvhRatpEq-JSsiwTlmuA4UUykj-FC9MlKfOshfWbtIXiuoK9rl3yz9BK60XxbnFLgd0Wc66gU6Obcl0nEWCKul7fpXTaf6k_9LngLHWTTe77pE6wqccB6XRAQDE683XboSOslBF8O5xdt8w92LrJa_PjY5co/s1063/front-end_core%20frequency.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="326" data-original-width="1063" height="196" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi35MBJWO1HZEAsWhMxBBGtTMXm92ApmygQvhRatpEq-JSsiwTlmuA4UUykj-FC9MlKfOshfWbtIXiuoK9rl3yz9BK60XxbnFLgd0Wc66gU6Obcl0nEWCKul7fpXTaf6k_9LngLHWTTe77pE6wqccB6XRAQDE683XboSOslBF8O5xdt8w92LrJa_PjY5co/w640-h196/front-end_core%20frequency.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Core clocks for N22 vs Front-end/Core on N32... 
Taken in Metro Exodus Enhanced edition...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJyi3dVH3C6swTQ4GahUOW56G1X-wqA56arCsTSCyrh8kJiu_Wr77euyHgxQyLpm3lm27-tw4lCRv9nqkPMyCe8-86CAJRMXkTo7ukFuWYAEbD3OcRqdbCLaiE_AkPVsfDxtFctjdpawzWzrG_nrArj2sulmckVciuATZcUNpYIKqJu96rbk2ddSSwtic/s1129/Frequency%20ratio.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="343" data-original-width="1129" height="194" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJyi3dVH3C6swTQ4GahUOW56G1X-wqA56arCsTSCyrh8kJiu_Wr77euyHgxQyLpm3lm27-tw4lCRv9nqkPMyCe8-86CAJRMXkTo7ukFuWYAEbD3OcRqdbCLaiE_AkPVsfDxtFctjdpawzWzrG_nrArj2sulmckVciuATZcUNpYIKqJu96rbk2ddSSwtic/w640-h194/Frequency%20ratio.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;N33 has a slight deficit in front-end frequency compared to core frequency, whereas N32 has a big uplift in the front-end...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Taking stock of all these data points is a bit daunting. The added complication of working out precisely when and where the 6650 XT is being limited by its 8 GB VRAM is also a bit of a challenge - though I wasn't going to buy myself an 8 GB RX 7600 just to make the job easier. Screw that!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The microbenchmarks show that the main differences between N22/N32 vs N23/N33 are the improved L3 bandwidth and the front-end frequency in the N32 design. 
All other architectural differences are common between the two RDNA3 cards.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW6UWXn4E48hH4afs1JohREnJMDv_Duygy2pGZSSIVbN6AWkMyAy4jYubIOJwIepDyRJWRNfxSL3d2q7t5cwmCC8qfLPARdY9Q-BlYdRfLehtnoLfH0CvIe5PPNIp1mk0FbIZoYfZUDYVjBW3EHF9P7vIXufs0J2vPUvOCy8b8svB5fmMEecg0AxC7NqA/s901/Relative%20performance_updated%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="715" data-original-width="901" height="508" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW6UWXn4E48hH4afs1JohREnJMDv_Duygy2pGZSSIVbN6AWkMyAy4jYubIOJwIepDyRJWRNfxSL3d2q7t5cwmCC8qfLPARdY9Q-BlYdRfLehtnoLfH0CvIe5PPNIp1mk0FbIZoYfZUDYVjBW3EHF9P7vIXufs0J2vPUvOCy8b8svB5fmMEecg0AxC7NqA/w640-h508/Relative%20performance_updated%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Discarding tests was painful but had to be done once the limitations became clear...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Looking at the overall comparison of improvement in gaming, Navi 32 has an overall higher uplift across the majority of titles (when the aforementioned 8 GB isn't getting in the way of testing!). This is typically on the order of around 3-5%, though we do see titles like Hogwarts which benefit to the tune of 10%. In contrast, Metro Exodus doesn't perform like any other title and it's possible that the increases in L0 and L1 cache size has an outsized effect in the N23/N33 comparison...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Going back to the reason for this blogpost, does Navi 33 have less of a performance improvement than Navi 32? I can safely say that the answer is "&lt;u&gt;Yes, in the majority of cases&lt;/u&gt;".&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Was I correct in thinking that Navi 33 didn't have a performance uplift over Navi 23? "&lt;u&gt;No, but with some caveats&lt;/u&gt;".&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I was correct in my prediction that the RX 7600 was power limited, like the RX 6600 XT was and that Computerbase was not testing in a correct environment. Additionally, in properly (or as close as is possible) controlled testing conditions, we cannot observe the performance uplift noted in their subsequent testing of the RX 7600 XT of around 17% in raster performance. Therefore, the uplifts observed are not architectural in nature!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If we take an average of the valid performance uplift numbers above, we get an average of 1.13x uplift on N32 and 1.08x uplift on N33 - a 5 % difference between the two RDNA3 designs. 
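&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For transparency, the "averaging" above is nothing fancier than a mean of the per-title uplift ratios that survived the VRAM/CPU-limit filtering - something like the sketch below, where the per-title ratios are placeholders (the real numbers are in the table above).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;
# Summary arithmetic sketch - the per-title ratios below are placeholders.

def mean(values):
    return sum(values) / float(len(values))

n32_uplifts = [1.10, 1.12, 1.14, 1.16]   # RX 7800 XT vs RX 6800 (placeholders)
n33_uplifts = [1.04, 1.06, 1.09, 1.13]   # RX 7600 XT vs RX 6650 XT (placeholders)

avg_n32 = mean(n32_uplifts)   # ~1.13
avg_n33 = mean(n33_uplifts)   # ~1.08
print("N32 average uplift: %.2fx" % avg_n32)
print("N33 average uplift: %.2fx" % avg_n33)
print("Gap between the designs: %.1f %%" % ((avg_n32 / avg_n33 - 1.0) * 100.0))
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;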
It appears that there &lt;i style="font-weight: bold;"&gt;is&lt;/i&gt;&amp;nbsp;a real difference in the performance, though less than I thought. What is quite surprising (at least to me) is that N33 is also very power- and bandwidth-limited, despite being monolithic in design and already having the ceiling of power and memory bandwidth increased substantially for the RX 7600 XT variant of the chip!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, taking those considerations mostly out of the picture, the performance uplift due to architecture is quite weak - &lt;a href="https://hole-in-my-head.blogspot.com/2023/07/the-performance-uplift-of-ada-lovelace.html"&gt;though not as weak as for the RTX 40 series&lt;/a&gt;!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;RDNA3 feels like a side-step in the architecture's design and I'm struggling to see where AMD are heading for RDNA4. What other levers do they have to pull to wring more performance from the dual compute unit setup? They're already seeing poor power scaling, poor clock scaling, and poor FP32 scaling in this architecture, so what's left? Well, &lt;a href="https://hole-in-my-head.blogspot.com/2024/09/amds-new-reality.html"&gt;I guess there's a plan for that&lt;/a&gt;!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I am also a bit pessimistic for Nvidia's next architecture. It feels like we're left waiting for ancillary technologies to catch up to the GPU cores we already have available in order to advance some more. Packaging, memory, and dedicated silicon seem to be the future for consumer GPUs but AMD are dragging their feet on that last point, while the memory makers have been dropping the ball for some time now, leaving both Nvidia and AMD struggling to outfit their lower tier GPUs with more memory and higher bandwidth. Finally, the advanced packaging that can bring about scaling efficiencies is too power hungry and too expensive to bring to a whole GPU product stack, meaning that - once more - low end and mid-range products are likely to be left out in the cold when it comes to advancements.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;To wrap up: RDNA 3's not a bad architecture - there clearly were gains to be had but RDNA 2 was so good that there just wasn't much room for the new architecture to really shine. I, personally, really like both the RX 7600 XT and 7800 XT - though the former is not the best value for money (much like its competition in the same price range).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I was wrong in thinking that there was no performance uplift on the monolithic RDNA3, but there &lt;i&gt;is&lt;/i&gt;&amp;nbsp;a difference to be seen between it and the chiplet-based parts, and that gap seems most likely linked to the higher front-end frequency and memory bandwidth of the chiplet design.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;BUT!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's it for this series. I'll bring it back for RDNA 4. 
See you then!&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/10/the-performance-uplift-of-rdna-3-part-2.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvkMcPI8DNizYilMgIo5oUGopOkIFjsgfVm6PvJ3Pg_WaIkGkzXS08CqBlrIqtB33A8O8BcBq9dCUAb6efqe7_jfDeT-LMa3GlffzFI0BsognfNW6JHt-Zyj2_FisDn13mg28jD6AfL8Px65TBOK_Zlw5P-PvqbFy0iHY6z4-Yce6G1_RWhF_CVCT5vkM/s72-w640-h360-c/Title.jpg" width="72"/><thr:total>2</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-6087990939023126281</guid><pubDate>Sun, 29 Sep 2024 14:30:00 +0000</pubDate><atom:updated>2024-10-19T03:04:58.702+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analyse this</category><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">screenestate</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Analyse This: Simulating the PS5 Pro... (Part 2)</title><description>&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhApoUQc6XQjvYABx1oJhTKQXp64nE5EEs9QrURiSR-jn3mtxge51RdgIGd-Cs0nJUpe1JkTOof09fJgbEOXR-4t7ZSRud9eLynI8Oe2eQEy9s_Hlp6ixJonxZ-tV-3SdGC7gZSm0G6jlo8Pyyc3-XkwY8yc-suncCTMVYuQZjw26x7IbrnNTwB1dOFVNQ/s1920/Title.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhApoUQc6XQjvYABx1oJhTKQXp64nE5EEs9QrURiSR-jn3mtxge51RdgIGd-Cs0nJUpe1JkTOof09fJgbEOXR-4t7ZSRud9eLynI8Oe2eQEy9s_Hlp6ixJonxZ-tV-3SdGC7gZSm0G6jlo8Pyyc3-XkwY8yc-suncCTMVYuQZjw26x7IbrnNTwB1dOFVNQ/w640-h360/Title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;It's the power of...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The &lt;a href="https://hole-in-my-head.blogspot.com/2024/09/how-powerful-is-ps5-pro.html"&gt;Playstation 5 Pro talk&lt;/a&gt; is really hogging the headlines but I really wasn't 100% happy with &lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-simulating-ps5-pro.html"&gt;my prior look/simulation of the device&lt;/a&gt;. 
So, I'm back here, today, putting together a quick look at a simulated PS5 base versus a simulated PS5 Pro - with PC parts!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As a side benefit of this, we also get a chance to look at the effect of changing GPU architecture and a sly preview of an upcoming blogpost where I look at the differences between monolithic RDNA 3 and RDNA 2.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, let's jump in...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Setting Up...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As with last time, I feel it's necessary to point out that this study has many flaws and inaccuracies. There are no &lt;i&gt;equivalent&lt;/i&gt; pieces of hardware between the console and PC space. &lt;a href="https://hole-in-my-head.blogspot.com/2024/09/how-powerful-is-ps5-pro.html"&gt;Many have tried&lt;/a&gt; to make this comparison and &lt;a href="https://youtu.be/nJUygUUKkgM?si=qy9Gye9yzzYV1ayQ&amp;amp;t=958"&gt;many have failed&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The long of it is that the console is an APU with severe power limits. The chip is monolithic, so it has some gains in intra-die latency, but the chip has far less cache than almost any desktop part - both for the CPU and GPU. On the CPU side, we can replicate the CPU cache hierarchy and split CCX design because we have desktop-equivalent APUs.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, what we &lt;i style="font-weight: bold;"&gt;can't&lt;/i&gt;&amp;nbsp;do is replicate the shared GDDR6 memory system and the custom cache scrubbing and data management silicon within the APU, which allows better management of the data between CPU and GPU - we can see the potential effect of this later. GDDR6 also has a latency &lt;i&gt;disadvantage&lt;/i&gt;&amp;nbsp;compared to desktop DDR4 memory, but an overall bandwidth "win".&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On the GPU side, it lacks &lt;u style="font-weight: bold;"&gt;the defining&lt;/u&gt;&lt;b&gt;&amp;nbsp;&lt;/b&gt;feature of RDNA 2 - the L3 "Infinity Cache" - while avoiding the &lt;a href="https://hole-in-my-head.blogspot.com/2023/08/what-went-wrong-with-rdna-3.html"&gt;pitfalls of the chiplet architecture&lt;/a&gt; introduced to mid-range and high-end RDNA3. However, this APU also lacks one of the benefits that those mid-range and high-end designs brought to RDNA3 over RDNA2 - &lt;a href="https://hole-in-my-head.blogspot.com/2023/11/the-performance-uplift-of-rdna-3-over.html"&gt;i.e. 
a higher front-end clock&lt;/a&gt;.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The APU also has less memory bandwidth than the desktop RX 7800 XT but a bit more than the RX 6800 non-XT (which are the compute unit comparable parts on the desktop) and, as noted above, this is shared between all APU functions.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Both CPU and GPU on the APU operate at lower frequencies than their desktop counterparts, which will hold them back, somewhat.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTRdCzBgfosrnBmjS22lV17d9o3TKnJIVy-bTUUGB6oO3pFr36BPl41jN2lJ7ajbugL0qwhyxvUkXo8jvvlEmInCu0Xs6DxrwbKjYcZKZPfe7Vk6bNCyVSedbBpQCQmuWf5s6sEE4TfPM2T8cQ6dBREWuQklLkpNdTZhtDbw04_6PT-la-bRW4rR8lzmc/s1866/New%20DCU.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1050" data-original-width="1866" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTRdCzBgfosrnBmjS22lV17d9o3TKnJIVy-bTUUGB6oO3pFr36BPl41jN2lJ7ajbugL0qwhyxvUkXo8jvvlEmInCu0Xs6DxrwbKjYcZKZPfe7Vk6bNCyVSedbBpQCQmuWf5s6sEE4TfPM2T8cQ6dBREWuQklLkpNdTZhtDbw04_6PT-la-bRW4rR8lzmc/w640-h360/New%20DCU.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RDNA architecture has altered over time but mostly in the manner in which the code instructions are able to be serviced by it...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;If we take a look on the PC side of things, we get the inverse of everything noted above, with some caveats!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Data from the SSD has to be transported to the CPU, then to the memory, then operated on to be decoded/unencrypted/decompiled (depending on storage state), sent &lt;i&gt;back&lt;/i&gt;&amp;nbsp;to the memory and then forwarded, through the CPU, to the GPU memory for it to be able to work. 
There is additional overhead, per frame, for any data which needs to be shared, monitored, and updated for both devices to work on it over multiple frames - this requires a back-and-forth along the PCIe-to-memory interface.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;In this sense, the APU of the console has a massive advantage as it can more easily work on the same data, as necessary, as well as having less latency-sensitive data being able to be accessed faster!&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The PC parts also require more energy to run with the same performance: the distance between data and where it needs to be incurs a &lt;u&gt;HUGE&lt;/u&gt;&amp;nbsp;energy cost, and having everything on a monolithic chip, as well as having soldered memory and storage with &lt;a href="https://hpc.pnl.gov/modsim/2014/Presentations/Kestor.pdf"&gt;shortened distances, helps with reducing &lt;/a&gt;&lt;a href="https://www.nvidia.com/en-us/geforce/news/rtx-40-series-vram-video-memory-explained/"&gt;the energy required to push that data around&lt;/a&gt;! i.e. The chiplet design for mid-range and high-end RDNA desktop parts is detrimental to their energy efficiency.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;u&gt;Moving back to the overview:&lt;/u&gt;&lt;/b&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The short of it is that we can approximate the performance of a console with dedicated PC hardware but never be 100% accurate - there will always be specific optimisations in code that the developers can lean on in the console space to push performance above where they will be with otherwise &lt;i&gt;architecturally&amp;nbsp;exact&lt;/i&gt;&amp;nbsp;PC parts.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Moving Out...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;With all that futzing around out of the way, we can address what I actually intend to analyse in this post. &lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-simulating-ps5-pro.html"&gt;Like last time&lt;/a&gt;, I'm going to look at a Ryzen 5 4600G system, with 16 GB DDR4 3200 and various GPUs. The CPU is clock-locked to that of the CPU in the PS5 - 3.5 GHz.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For the GPUs, things are a little more complicated...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I couldn't get ahold of an RX 6700 non-XT &lt;a href="https://www.eurogamer.net/digitalfoundry-2024-amds-radeon-rx-6700-is-a-ringer-for-the-ps5-gpu-but-which-is-faster"&gt;which would represent a base PS5 quite nicely&lt;/a&gt;. 
Or, at least, not for &lt;a href="https://www.amazon.de/-/en/PowerColor-6700-Fighter-10GB-DDR6/dp/B0B7C1NL3D/ref=sr_1_13?crid=1CF671Y2OJBN2&amp;amp;dib=eyJ2IjoiMSJ9.yXp7Mhkats1-Y61_Svh061FqSqADG5up0MLjHAZz5Iln4BLeHTLF1sFEButTMVDdRW0qypIZ1rkJ6ZKWgMXb-sk3jLxV972FJxRWQGpdv1nBQvhJxqmkudie6Z2WC0vNe6XwnLVMDlRxFp_LqTR8-Is5qexM71GvUWnxsNiT3lHalnVrcaMf629vpbZoUz8lqXKBuA4Vn745yn4rb-syilkCv0if63lC-iZ_CjbCLK4.HjWqrczhEs3O10aDEvg7P8SAzx06Wn1JQAygKRwfcik&amp;amp;dib_tag=se&amp;amp;keywords=RX+6700&amp;amp;qid=1727531799&amp;amp;sprefix=rx+6700%2Caps%2C151&amp;amp;sr=8-13"&gt;a reasonable price and quality&lt;/a&gt;! However, the RX 6650 XT is very close in performance if we leave it at stock settings... So, I'm planning to use that.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEgvuDbHve7Wo5E9PrLS8thnaaHNtqXUG427xp2jdBB_PmhF0WYSKNkFT6DizG6uOyRiJWnX121RD2EsqXvpw2RNlfFvR9p1mMmPEHzsYhSX7Yc2o5HEZLf60z5YuMOU6PjC5-5tfkJibykoRzpf-y4Za_pE2gJW8Wqk_wnODFrkXIVG_v2ORR5AsOQf35c" style="margin-left: auto; margin-right: auto;"&gt;&lt;img alt="" data-original-height="391" data-original-width="553" src="https://blogger.googleusercontent.com/img/a/AVvXsEgvuDbHve7Wo5E9PrLS8thnaaHNtqXUG427xp2jdBB_PmhF0WYSKNkFT6DizG6uOyRiJWnX121RD2EsqXvpw2RNlfFvR9p1mMmPEHzsYhSX7Yc2o5HEZLf60z5YuMOU6PjC5-5tfkJibykoRzpf-y4Za_pE2gJW8Wqk_wnODFrkXIVG_v2ORR5AsOQf35c=s16000" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;It's not perfect, the 8GB of VRAM may hold it back in some titles, but it's enough...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Yes, the 6650 XT will run at higher clock speeds than the actual PS5 GPU but this will make up for the lower Compute Unit count.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I've already pegged the &lt;a href="https://hole-in-my-head.blogspot.com/2024/09/how-powerful-is-ps5-pro.html"&gt;RX 7800 XT and RX 6800 non-XT&lt;/a&gt; as being potenial PS5 Pro GPU equivalents - downclocked, of course... but we have another aspect to explore here...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The rumours and talk about the Playstation 5 Pro being RDNA 3 or 3.5 in architecture for the GPU are rife in the tech community. So, why not introduce a PS5 base equivalent with "RDNA3"? Yes, the RX 6650 XT is already an approximation of the GPU performance of the desktop equivalent part (RX 6700) but there is also another part which was &lt;a href="https://www.rockpapershotgun.com/amd-radeon-rx-7600-review"&gt;roundly decried&lt;/a&gt; &lt;a href="https://www.tomshardware.com/reviews/amd-radeon-rx-7600-review"&gt;as being&lt;/a&gt;&amp;nbsp;a pointless upgrade: the RX 7600!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For this testing, I've chosen an RX 7600 XT 16GB. 
(A), to avoid issues with memory usage and (B) to match the Compute Unit count of the RX 6650 XT - which, I am going to use to my advantage by allowing the card to run at default settings.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This should allow us to see some differences between RDNA 2 and RDNA 3 architectures both at the current GPU performance of the Playstation 5 but also at the potential Playstation 5 Pro.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Finally, I've chosen games which have a counterpart on the PS5 and have had console equivalent settings defined by outlets in the industry:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="https://www.eurogamer.net/digitalfoundry-2023-alan-wake-2-rt-deep-dive#:~:text=Alan%20Wake%202%20is%20sublime,and%20DLSS%203.5%20ray%20reconstruction."&gt;Alan Wake 2&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href="https://youtu.be/SKQhM34G33c?si=V_MUu5U5YWXLSuqI&amp;amp;t=442"&gt;Avatar: Frontiers of Pandora&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href="https://youtu.be/IINkzUerDHU?si=7Qer_SRBGaFAe4z5&amp;amp;t=791"&gt;Hogwart's Legacy&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href="https://www.eurogamer.net/digitalfoundry-2023-ratchet-and-clank-rift-apart-pc-tech-analysis"&gt;Ratchet and Clank&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href="https://youtu.be/xI2VUQsPqJo?si=QIRkpe0NUTWfFYcc&amp;amp;t=1266"&gt;Spider-man: Remastered&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I've also taken some of these titles using these same settings and run them either with RT enabled (when not previously enabled) or at a different resolution to that used on the PS5 to see the effects on and capabilites of the GPUs in this analysis.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Aaand, with that - let's get into the testing... All results can be &lt;a href="https://docs.google.com/spreadsheets/d/1dzNqJmCjlV8dGF8c2gMxjZeO9wC6-Yzua0kP2sBul6A/edit?usp=sharing"&gt;found here&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Swing Away, Swing Away...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Alan Wake 2...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Back when Alan Wake 2 released &lt;a href="https://hole-in-my-head.blogspot.com/2023/11/alan-wake-2-performance-analysis.html"&gt;I did a performance analysis&lt;/a&gt; using the GPUs I had on hand at the time. 
While I didn't test the specific settings combinations used below, I did note that there wasn't a lot of performance uplift from resolution changes or from the settings changes at the same resolution.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What I'm seeing in today's results is that - even though we're using a much weaker CPU (I used the i5-12400 in my original testing) and the GPU is clock-limited, both the RX 6800 and RX 7800 XT are performing WAY better! So, Remedy/AMD have done quite a lot of optimising on this title and perhaps on the graphics driver side as well. I had previously noticed the improvements in upscaling visual quality - especially when using ray tracing - but this is on another level!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The same benchmark run back at release gave me 95 fps and 73 fps for the 7800 XT and 6800, respectively, on the low settings, without RT enabled. Now, we're getting 104 and 80 fps on a much weaker setup with the PS5 Performance settings - which are essentially the low settings with an internal 847p resolution (compared to 1080p native). Given that we know the increase in performance from increasing scaling factors has pretty harsh diminishing returns in this title, I believe that we're looking at the performance difference between the Low and High presets on "stock" PC hardware.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I think that's pretty impressive and worthy of a shout-out!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHUxhxBQ_sgUwtwP8C0f6manvCmushyGsacPWVAy_9DIl2WqrkzIS1HLVMRVKluzPLJvhabNqj4iXsaGmgwdZnh2dwNURzMhj76NFiTr1bzYXN6PL0P1L4ny1qIBslvHKu2UpYf010yQCDc3Ts4DoBb06qycwXXh-S_eV-OGEo2MU2PQGxyWfkuY1n42Q/s1357/Alan%20Wake%202%23.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="723" data-original-width="1357" height="340" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHUxhxBQ_sgUwtwP8C0f6manvCmushyGsacPWVAy_9DIl2WqrkzIS1HLVMRVKluzPLJvhabNqj4iXsaGmgwdZnh2dwNURzMhj76NFiTr1bzYXN6PL0P1L4ny1qIBslvHKu2UpYf010yQCDc3Ts4DoBb06qycwXXh-S_eV-OGEo2MU2PQGxyWfkuY1n42Q/w640-h340/Alan%20Wake%202%23.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Alan Wake 2 is very heavily reliant on GPU performance and throughput... though as a write this, I realised Quality mode should have been tested at 2160p FSR Balanced and not 1440p... Still, similar scaling will apply...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Considering that Alan Wake 2 on the console does not have ray tracing enabled, what we see in the charts above is that the PS5 Performance mode running on the RX 6650 XT matches the approximate performance of that mode running on the console. 
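&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As a quick sanity check on that optimisation claim, here's the back-of-the-envelope maths on the release-day versus current numbers quoted above (a rough sketch only - the settings and internal resolutions aren't perfectly like-for-like, as noted):&lt;/div&gt;&lt;pre&gt;
# Release-day results (i5-12400, stock GPU clocks) versus today's runs
# (Ryzen 5 4600G locked to 3.5 GHz, core-clock-limited GPUs), using the
# average fps figures quoted in the text above.
release = {"RX 7800 XT": 95, "RX 6800": 73}
today   = {"RX 7800 XT": 104, "RX 6800": 80}

for card in release:
    uplift = (today[card] / release[card] - 1) * 100
    print(card, "went from", release[card], "to", today[card],
          "fps, an uplift of roughly", round(uplift, 1), "percent")
# RX 7800 XT: roughly 9.5 percent; RX 6800: roughly 9.6 percent
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;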
However, in quality mode, if the output resolution had been kept at 1440p, instead of being raised to 4K, the PS5 GPU equivalent RX 6650 XT would have been performing better than the ~30 fps the game is locked to on the console. At upscaled 4K, both the 6650 XT and 7600 XT are managing to keep a slightly sub-30 fps presentation...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I understand that Remedy wouldn't want to present that in a 60 Hz container on a TV or monitor as it would provide a poor experience for the user. However, the less consistent actual 30 fps quality mode on the PS5, with dips into the twenties, is just as bad, in my opinion. The message is clear, though - there's extra performance headroom available there if the resolution had been dropped slightly from 4K with FSR without notably affecting the visual quality on a per-frame basis.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In my opinion, this decision was a poor one, on Remedy's part.&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;There is another aspect of the performance of this game which is on RDNA2 parts there are numerous, repeated frametime stutters. The cause of which is not readily understandable. Upon release, I did not observe them but &lt;a href="https://hardwaretimes.com/alan-wake-2-pc-performance-optimization/"&gt;other outlets did&lt;/a&gt;. However, now I am experiencing these issues on both the RX 6650 XT and RX 6800 but neither RDNA 3 card. This is very strange and, given certain &lt;a href="https://youtu.be/D1INvx9ca9M?si=1g31vS6P0ixUE9Yt"&gt;recent events&lt;/a&gt;, I'm wondering if this is a Windows issue (though I'm on Windows 10!)...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;Moving over to the RDNA3 equivalent part, the RX 7600 XT we see no real improvement in these two quality modes - perhaps a couple of fps improvement but this is just one small section of the game and that is most likely not a consistent difference. When we switch to the addition of ray tracing effects (low preset, of course!) 
the RDNA3 architecture actually does help &lt;i&gt;a little&lt;/i&gt; in that situation - &lt;a href="https://hole-in-my-head.blogspot.com/2023/11/the-performance-uplift-of-rdna-3-over.html"&gt;as I have previously noted&lt;/a&gt;&amp;nbsp;- but the performance is still sub-30 fps and I wouldn't want that experience on the PS5 base console.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgD6oEQCkJTMojzUMCbQFDFYVQlxH0S4IovhYgSsZuahyauQI9wfuWqdw_SMfDG3sCJQm9ZKjo28klpm_7mkXLnqRJkDL2Qo87PemvM1GBj9x3xMJM05bi1RbZWd_9j69oC4olvy6EQUoBpfv2VgSNodRGOm4OuZ6a2ZiUr1POS7hzmr4oZt8itIVN71iM/s1361/Alan%20Wake%202_2.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="355" data-original-width="1361" height="166" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgD6oEQCkJTMojzUMCbQFDFYVQlxH0S4IovhYgSsZuahyauQI9wfuWqdw_SMfDG3sCJQm9ZKjo28klpm_7mkXLnqRJkDL2Qo87PemvM1GBj9x3xMJM05bi1RbZWd_9j69oC4olvy6EQUoBpfv2VgSNodRGOm4OuZ6a2ZiUr1POS7hzmr4oZt8itIVN71iM/w640-h166/Alan%20Wake%202_2.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Having re-inserted all the cards to test the true PS5 Quality mode equivalent, we see the limitations of the "equivalent PS5"...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The PS5 Pro equivalent setups are more interesting. First up, there's essentially a guarantee of being able to push 60 fps, even in the quality mode, and there is the suggestion that a Pro-enabled "Performance RT" mode at 30 fps could be on the cards with the &lt;a href="https://hole-in-my-head.blogspot.com/2024/09/how-powerful-is-ps5-pro.html"&gt;RX 6800-class hardware available on the PS5 Pro&lt;/a&gt;, with a potentially small increase in ray tracing performance if the RDNA3-style dual-issue FP32 upgrades to the compute units are in place.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The second interesting thing, to my mind, is that the difference between the monolithic RDNA2 &amp;gt; 3 jump and the monolithic RDNA2 to chiplet RDNA3 jump is truly huge: a relatively minuscule 6% difference on the former but a 20-30% difference on the latter. While some of this performance may be explained through the memory frequency difference (17.2 Gbps on the overclocked RX 6800, versus the stock 19.5 Gbps of the RX 7800 XT), I'm still placing my bets that the&amp;nbsp;&lt;a href="https://hole-in-my-head.blogspot.com/2023/11/the-performance-uplift-of-rdna-3-over.html"&gt;decoupling and increase in front-end clock frequency&lt;/a&gt; on the RX 7800 XT is the real performance enhancer on RDNA3. 
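&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As a rough sketch of why I don't think the memory speed difference explains that gap (assuming both cards are on their standard 256-bit GDDR6 buses):&lt;/div&gt;&lt;pre&gt;
# How much of the 20-30% gap could memory bandwidth alone explain?
bus_width_bits = 256
rx6800_rate_gbps   = 17.2   # overclocked memory on the RX 6800, as tested
rx7800xt_rate_gbps = 19.5   # stock memory on the RX 7800 XT

rx6800_bw   = rx6800_rate_gbps   * bus_width_bits / 8   # about 550 GB/s
rx7800xt_bw = rx7800xt_rate_gbps * bus_width_bits / 8   # about 624 GB/s

delta = (rx7800xt_bw / rx6800_bw - 1) * 100
print("Bandwidth difference is roughly", round(delta, 1), "percent")
# Roughly 13.4 percent - well short of the 20-30 percent seen in the results.
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;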
Especially since I didn't observe any real gains in performance when adjusting memory speed on either card.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;&lt;i&gt;&lt;/i&gt;&lt;/b&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;This is another reason why I specifically purchased the 6650 XT and 7600 XT - I'll be exploring the differences between RDNA 2 and 3 in a future blogpost!&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpd-IK4YbfZcW5_5vXjw8T_T2je-a9agiy6mhwpBVsHYXdnTKJyMk2Iw-6S_J0lBG-_mz_PJFWXkUaGr3ps4REYkoIvh18vh0Yhyq1ZfyfvSN7rgTtVuaNx33hJNOLoiWxzICqUG6RLXhSTDS3yW_oxW8966O8vklip9a064TmMC4D65gX5TI50LcBxaE/s1363/Avatar.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="355" data-original-width="1363" height="166" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpd-IK4YbfZcW5_5vXjw8T_T2je-a9agiy6mhwpBVsHYXdnTKJyMk2Iw-6S_J0lBG-_mz_PJFWXkUaGr3ps4REYkoIvh18vh0Yhyq1ZfyfvSN7rgTtVuaNx33hJNOLoiWxzICqUG6RLXhSTDS3yW_oxW8966O8vklip9a064TmMC4D65gX5TI50LcBxaE/w640-h166/Avatar.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Here we see a different story to that in Alan Wake 2. RDNA3 really shines over RDNA2...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Avatar....&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This game has a huge focus on &lt;a href="https://www.ubisoft.com/en-gb/game/avatar/frontiers-of-pandora/news-updates/6WF5Ud05UCEmLp2R8cjjVn/avatar-frontiers-of-pandora-pc-features-deep-dive"&gt;GPU rendering&lt;/a&gt;. I always thought that Alan Wake 2 had a similar focus but looking at the above tests, it's clear that the engineers over at Massive Entertainment have really optimised based on the GPU's ability to perform. Their engine allows extra headroom in FP32 compute and other GPU-specific factors to really enable better performance and this puts the Snowdrop engine into a position that very few game engines occupy at the low-end.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In this test, all results are ray traced - since Snowdrop doesn't have a fall-back fully rasterised mode. We see a 30% difference in performance between the 6650 XT and the 7600 XT, most likely due to the extra compute performance. 
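&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;To put that compute difference in context, here's the paper FP32 maths for the two 32 CU cards (a sketch only - the boost clocks are approximate board specifications, not my logged in-game clocks):&lt;/div&gt;&lt;pre&gt;
# Theoretical FP32 throughput: CUs x 64 shaders x 2 ops (FMA), doubled
# again on RDNA3 for instructions that can be dual-issued.
def fp32_tflops(cus, clock_ghz, dual_issue):
    ops_per_cu_per_clock = 64 * 2
    if dual_issue:
        ops_per_cu_per_clock = ops_per_cu_per_clock * 2
    return cus * ops_per_cu_per_clock * clock_ghz / 1000

print("RX 6650 XT:", round(fp32_tflops(32, 2.64, False), 1), "TFLOPS")  # ~10.8
print("RX 7600 XT:", round(fp32_tflops(32, 2.76, True), 1), "TFLOPS")   # ~22.6
# The paper number doubles, but only a fraction of real shader instructions
# can actually be dual-issued - hence a ~30 percent gain here, not a 2x gain.
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;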
However, at the high-end, the 7800 XT performs only 6% better than the RX 6800 - exposing a potential CPU or data bandwidth* bottleneck.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*from storage/system memory...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What's really interesting, here, is that for this game, there was the possibility of quite large gains &lt;i&gt;just from&lt;/i&gt;&amp;nbsp;the upgrade to the RDNA3 architecture! There was no need for the expanded CU count of the PS5 Pro and it speaks to the fact that not all games will be able to benefit equally between the two versions of the console...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, what's clear from the provided benchmark in the PC version is that the base console is not up to snuff in being able to provide a clean 60 fps in the performance mode, with these results confirming &lt;a href="https://www.eurogamer.net/digitalfoundry-2023-avatar-frontiers-of-pandora-ps5-series-x-s-tech-review"&gt;the problematic presentation on the base console&lt;/a&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What is nice, though, is the ability of either hardware version of the "Pro" to reach 60 fps in both modes - which means the CPU is capable of this feat.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, for this game, I wouldn't be expecting any resolution upgrades for a Pro-enhanced version, only graphics settings updates.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhY3aq0qdUNFdboZz5e8uq0Rj9DFbsolZOkRyUtcbN9VjRlWfEQZD8tVyOszYry6CCcH_q-GvsB9EHzj9RmgIjOkHV32T_PedFRjFaBOv6Ek_gXUJkfRhyHosP9L1Em7SuAKPxK4cIISdjJ6TXADXgAe-zISmsqQeE3mW7I9kos7iOKOaJcOcfDjUH3RAU/s1357/Hogwarts.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="357" data-original-width="1357" height="168" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhY3aq0qdUNFdboZz5e8uq0Rj9DFbsolZOkRyUtcbN9VjRlWfEQZD8tVyOszYry6CCcH_q-GvsB9EHzj9RmgIjOkHV32T_PedFRjFaBOv6Ek_gXUJkfRhyHosP9L1Em7SuAKPxK4cIISdjJ6TXADXgAe-zISmsqQeE3mW7I9kos7iOKOaJcOcfDjUH3RAU/w640-h168/Hogwarts.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Hogwart's Legacy is a deeply CPU-limited title...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Hogwart's...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Hogwart's Legacy is not a first party title for Sony, so I doubt it was included in their &lt;a href="https://www.theverge.com/2024/9/10/24167932/ps5-pro-sony-specs-announcement"&gt;+45% rendering performance&lt;/a&gt; 
calculation. However, it's an important game to test because it exhibits limitations from CPU, memory bandwidth, and GPU compute - it's one of those rare games that straddles the line of PC infrastructure design when choosing which parts you want to build with!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What we can observe, here, is the difference between RDNA2 and 3 -with the extra FP32 compute of RDNA3 giving an advantage over RDNA2 when it comes to the 100% rasterised testing. For ray tracing, things are more complicated, with only the RX 7800 XT showing any sort of gains once enabled. This may be another reflection of the quickened frontend clock frequency this part has...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I haven't tested the "Performance" mode settings, here, since I didn't have them specified but since we're observing fps values which are the same as those defined for the Fidelity and Fidelity RT modes with the base PS5 hardware, I'm fairly confident in saying that the CPU is the primary limiting factor for these high-end settings. It's unlikely that HL will improve on these two settings and would likely leave them to a 30 fps rate, but perhaps improve the rendering resolution or the performance mode, which could have more demanding graphical options at 60 fps...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfCJ_xS8uYTCSe1fjimgolgzYJcvaDgwtLAtb8FQNwAC9oKB5tgm6XGnYCwNQAUNC0eQYu3aPXqLUOhZk5h2yVz9lK4HbQXGjRBxON45LywfJM_bpT3tsZADWQEXNAsajktY6pCa7kB4Eq3EGMyEHQES3IHatE5oi10viVf9EiRTqcKBaa7zgCo3eHHGM/s1357/Ratchet.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="355" data-original-width="1357" height="168" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfCJ_xS8uYTCSe1fjimgolgzYJcvaDgwtLAtb8FQNwAC9oKB5tgm6XGnYCwNQAUNC0eQYu3aPXqLUOhZk5h2yVz9lK4HbQXGjRBxON45LywfJM_bpT3tsZADWQEXNAsajktY6pCa7kB4Eq3EGMyEHQES3IHatE5oi10viVf9EiRTqcKBaa7zgCo3eHHGM/w640-h168/Ratchet.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Ratchet &amp;amp; Clank displays two issues but by far the most egregious is the VRAM limit on the 6650 XT...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h4 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Ratchet &amp;amp; Clank...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This title is one of those where the developers have easily observable optimisations which work specifically on the console hardware. The PS5 Performance mode achieves a locked 60 fps on the console but, here, we're VRAM limited* on the RX 6650 XT. This is multiplied multi-fold when switching to 4K resolution... 
and the 16 GB framebuffer of the RX 7600 XT handles both situations with ease.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;/span&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*The reason is that with the monitoring software I used (HWInfo64, MSI AfterBurner, RTSS) we never approached 6 GB application usage, let alone near 7 GB, at 1440p. So, this result is a little inexplicable to me... Memory bandwidth is very similar between this card and the RX 7600 XT, so that doesn't explain the issue we're observing.&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Other than that, we see that both the RX 6800 and RX 7800 XT perform identically, with the game logic apparently running into a CPU bottleneck - with barely any performance difference between the two tested resolutions - meaning that any PS5 Pro port of the game will be able to push the graphical quality very heavily, no matter the internal rendering resolution, because the CPU limitation is the primary factor for performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin1tR7nHTsO86VzQ36UVyDwwJkPBzyvqZRph0gIyeZj-ZRRgty8xd0UvlwYoLNvYlVgeSozx_LzR7iEc3Rik-svPfCBCsBs72u3O1hODn6J-LO1feAMCEYvhzfs0OBlEJC7Pqs90NVb5a-nWQSprCzP2Ir2vQB2LHdtW4PAxQOcZ_cr4fGRgOBdihH_oc/s1357/Spiderman%201.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="357" data-original-width="1357" height="168" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin1tR7nHTsO86VzQ36UVyDwwJkPBzyvqZRph0gIyeZj-ZRRgty8xd0UvlwYoLNvYlVgeSozx_LzR7iEc3Rik-svPfCBCsBs72u3O1hODn6J-LO1feAMCEYvhzfs0OBlEJC7Pqs90NVb5a-nWQSprCzP2Ir2vQB2LHdtW4PAxQOcZ_cr4fGRgOBdihH_oc/w640-h168/Spiderman%201.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Interestingly, the RX 7600 XT draws equal with much more powerful GPUs...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;h4 style="text-align: left;"&gt;&lt;span style="color: #274e13;"&gt;Spider-Man...&lt;/span&gt;&lt;/h4&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Finally, we take a look at Spider-man: Remastered. This game is also heavily CPU-limited - to the point where the improvement provided by the RX 7600 XT over the RX 6650 XT matches that of the 6800 and 7800 XT parts.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, this PC comparison falls apart slightly because it isn't entirely able to keep up with the console port - meaning that those console-specific adaptations have been applied, once again. 
None of the CPU+GPU combinations are able to achieve a 60 fps minimum in any of the performance modes, and we know this is entirely possible on the console port of the game.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Therefore, this data should be taken as an approximation: we should ignore the absolute values (i.e. lower than 60 fps average) and instead focus on the trend. And the trend is telling us that the game is heavily CPU-bound in performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyYPJOB9GiKU9DL_F-gHYSTqQwZycBTfqjU8ihpPeSWe1e9S2GpJ0W1u5xnQL967e_0N2PEn_WGrNI5bUxU57XiV7yQSr9NXEf0ldvIifUgWIVdO7TkETlsHulUnimJGgRsO6V1eVnNEQLttp0kkd_CO1bdXGo_2KhlPBXnO9IUuhWf_NWq4To1JQEXco/s677/Spiderman%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="357" data-original-width="677" height="338" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyYPJOB9GiKU9DL_F-gHYSTqQwZycBTfqjU8ihpPeSWe1e9S2GpJ0W1u5xnQL967e_0N2PEn_WGrNI5bUxU57XiV7yQSr9NXEf0ldvIifUgWIVdO7TkETlsHulUnimJGgRsO6V1eVnNEQLttp0kkd_CO1bdXGo_2KhlPBXnO9IUuhWf_NWq4To1JQEXco/w640-h338/Spiderman%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;It's like it's the same graph!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I think this title is not a good test of the hardware because of how it's interacting with the software. If I hadn't performed the testing, I wouldn't even have included it in the summary of this article!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The end result is that, like Ratchet, we could expect a large headroom in graphical quality upgrades in this title but very little in terms of actual framerate improvements - at least from the limited overlap in terms of performance profiles between the console and PC...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Wrapping Up...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If we take an average of the results of both the core clock frequency limited RX 6800 and RX 7800 XT over the "PS5 equivalent" RX 6650 XT when paired with the 3.5 GHz limited Ryzen 5 4600G, we get +42% and +62%, respectively.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Once again, the performance profile of the RX 6800 comes out closer to that of the claimed performance difference by Sony. While I believe, from the claimed teraflops number, that the RDNA3 architecture is used in the Pro, the gains we observe in the mid-range and high-end are not available to the console. 
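&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For transparency, this is the shape of the averaging calculation behind those +42% and +62% figures - the per-game values below are just placeholders, with the actual figures sitting in the results sheet linked above:&lt;/div&gt;&lt;pre&gt;
# Average per-game uplift of a "Pro equivalent" card over the "PS5
# equivalent" RX 6650 XT. The fps values here are placeholders only.
def average_uplift(baseline_fps, candidate_fps):
    uplifts = []
    for game in baseline_fps:
        uplifts.append(candidate_fps[game] / baseline_fps[game] - 1)
    return sum(uplifts) / len(uplifts) * 100

ps5_equiv = {"Game A": 60, "Game B": 45, "Game C": 30}   # RX 6650 XT
pro_equiv = {"Game A": 85, "Game B": 65, "Game C": 43}   # e.g. RX 6800
print(round(average_uplift(ps5_equiv, pro_equiv), 1), "percent average uplift")
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;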
This points to my position that a large percentage of the gen-on-gen improvements in performance come from that higher clocked front-end design, which is missing in the monolithic RDNA3 designs - including the console.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, despite the (rather large) claims of many commentators on the effect of moving to RDNA3 or 3.5, the actual performance uplift from the architectural makeup alone is going to be rather moderate compared to the uplift from the pure number of FP32 compute units.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Aside from that, it's not clear that any such gains from a change in architecture will guarantee a performance uplift given the CPU bottlenecks which exist in many games and game engines being targeted to the platform.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The PS5 Pro will likely have its work cut out for it in many titles which do not heavily focus on GPU rendering techniques that would avail them of any extra performance if given access to a larger rendering pipeline...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;One final thought I have from doing all this is - why increase the size of the GPU? Simply moving to a 36 CU RDNA 3 design would have brought developers a decent performance increase, with only games which have a GPU-driven rendering engine really benefitting from the increased CU count. I'm not sure how common those examples are, compared to engines which will experience a CPU limit.&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/09/analyse-this-simulating-ps5-pro-part-2.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhApoUQc6XQjvYABx1oJhTKQXp64nE5EEs9QrURiSR-jn3mtxge51RdgIGd-Cs0nJUpe1JkTOof09fJgbEOXR-4t7ZSRud9eLynI8Oe2eQEy9s_Hlp6ixJonxZ-tV-3SdGC7gZSm0G6jlo8Pyyc3-XkwY8yc-suncCTMVYuQZjw26x7IbrnNTwB1dOFVNQ/s72-w640-h360-c/Title.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-4898343263200704828</guid><pubDate>Sun, 15 Sep 2024 20:09:00 +0000</pubDate><atom:updated>2024-09-15T21:14:29.024+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>How Powerful is the PS5 Pro?</title><description>&lt;div style="text-align: justify;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzs2nN3czjHINQK0AhWl-QgJ0J8fLovYf0VX6dSgT9t8zwiMdCMQtskKG0PCR7r8fA5qTGVdPUum6J5XpXwmyO60pn6J0NikRz1bAw5Pdesy0oEbRb2dAzoeImALvZmwp9y9xWFkv3GrKKnjLSXnBpMUrMWFti_LYonlyDXLDpLeSZIHjG4QhyphenhyphenLlhOP6s/s1920/Title.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzs2nN3czjHINQK0AhWl-QgJ0J8fLovYf0VX6dSgT9t8zwiMdCMQtskKG0PCR7r8fA5qTGVdPUum6J5XpXwmyO60pn6J0NikRz1bAw5Pdesy0oEbRb2dAzoeImALvZmwp9y9xWFkv3GrKKnjLSXnBpMUrMWFti_LYonlyDXLDpLeSZIHjG4QhyphenhyphenLlhOP6s/w640-h360/Title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;In part two of my &lt;a href="https://hole-in-my-head.blogspot.com/2024/09/is-ps5-pro-really-equivalent-to-rtx.html?m=1"&gt;Playstation 5 Pro fever&lt;/a&gt; articles, we take another look at the power the console will likely bring to the table. One of the most common discussions that's surrounded &lt;a href="https://blog.playstation.com/2024/09/10/welcome-playstation-5-pro-the-most-visually-impressive-way-to-play-games-on-playstation/"&gt;the announcement of the Playstation 5 Pro&lt;/a&gt; has been the performance of the GPU, with some likening it to an RX 7800 XT or even cards on Nvidia's side of the aisle.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The reality is simultaneously simple and a bit more complicated than just a comparison to a single desktop card.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;So, let's take a look!&lt;/div&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;p&gt;&lt;/p&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;Previously, I've taken a look at the logic of what's happening &lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-lets-look-at-ps5-pro-leaks.html?m=1"&gt;based on the prior leaks&lt;/a&gt;. I also &lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-simulating-ps5-pro.html?m=1"&gt;simulated the performance of the Pro&lt;/a&gt; using a Ryzen 5 4600G and an RX 6800 and compared it with both an RX 7800 XT and using a more powerful CPU architecture like the Ryzen 5 5600X - with all parts locked to the console frequency specifications, or as close as is possible.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;As PlayStation Pro fever ramps up, we have many people speculating about the performance now that the +45% rendering uplift and 2-3x ray tracing performance is confirmed from Sony's side of things.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Actually, people have mostly settled on the rendering performance of an&amp;nbsp;&lt;a href="https://www.tweaktown.com/news/100296/playstation-5-pro-performance-rumors-close-to-amd-radeon-rx-7700-xt-in-raster-faster-rt/index.html"&gt;RX 7700 XT&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href="https://www.ign.com/articles/you-can-build-a-pc-equivalent-to-the-ps5-pro-heres-how"&gt;RX 6800&lt;/a&gt;&amp;nbsp;as a pure rendering equivalent, which is around 45-46% faster than the RX 6700 (the equivalent desktop card to the base PS5) and this is something I can agree on.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNm6Ta7axqMUQa_IbqBDYrmy5iyAjLgAnDuOIaK3toUcD2MfUKTlR7etx52fSCvWsyQrGhcsVFCCc65y2RORjv_eWJrghMLJd6V___piCH4Hd0F2LWxtmT5EIEJrqb6JocsU1CYQliU4egj65EW2LaoAjYDXGrKG3UQkm_JJx7dyOwmB9J9CJFCKxlcFE/s825/TPU_scaling.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="581" data-original-width="825" height="450" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNm6Ta7axqMUQa_IbqBDYrmy5iyAjLgAnDuOIaK3toUcD2MfUKTlR7etx52fSCvWsyQrGhcsVFCCc65y2RORjv_eWJrghMLJd6V___piCH4Hd0F2LWxtmT5EIEJrqb6JocsU1CYQliU4egj65EW2LaoAjYDXGrKG3UQkm_JJx7dyOwmB9J9CJFCKxlcFE/w640-h450/TPU_scaling.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;In raster performance, we're looking at the &lt;a href="https://www.techpowerup.com/gpu-specs/radeon-rx-6700.c3716"&gt;RX 6800, RX 7700 XT and RTX 3070 Ti&lt;/a&gt;... but these results are very dependent on the games tested and analysis of TechPowerUp has shown that these percentages are not as accurate as their &lt;a href="https://www.techpowerup.com/review/sapphire-radeon-rx-7700-xt-pulse/32.html"&gt;actual performance reviews&lt;/a&gt;... I also really like the much recently maligned &lt;a href="https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html"&gt;Tom's Hardware GPU Hierarchy&lt;/a&gt; for this sort of data...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What I'm finding myself not agreeing on is the level of performance uplift associated with the ray tracing.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Some are positing the &lt;a href="https://www.pcguide.com/gpu/ps5-pro-gpu-vs-rtx-3080/"&gt;RTX 3080&lt;/a&gt; and&amp;nbsp;&lt;a href="https://youtu.be/K3zS2aUa3qQ?si=J715sBwQ-qpI-YxX&amp;amp;t=1174"&gt;RTX 4070&lt;/a&gt;&amp;nbsp;as an RT equivalent. Though, there are those that are &lt;a href="https://x.com/Sebasti66855537/status/1833873845599539321"&gt;adding their own dissenting voices&lt;/a&gt; to the discussion.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;What's a Fair Comparison?&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The problem with all of this pontificating is that it's ultimately almost pointless. 
As is the problem with talking about CPU &lt;a href="https://www.reddit.com/r/Amd/comments/j8ib72/noob_question_what_does_a_19_increase_in_ipc_mean/"&gt;IPC improvements&lt;/a&gt;, GPU testing and relative performance will heavily depend on four things:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;The set of games tested.&lt;/li&gt;&lt;li&gt;The settings used to test those games.&lt;/li&gt;&lt;li&gt;The resolution used to test.&lt;/li&gt;&lt;li&gt;&lt;a href="https://www.techpowerup.com/forums/threads/looks-like-pci-e-3-0-with-the-rx-7600-can-have-a-pretty-big-performance-hit-in-some-games.309708/"&gt;The system the GPU is tested in&lt;/a&gt;.&lt;/li&gt;&lt;/ul&gt;As games industry commentators and enthusiasts, we can spend hours justifying our decisions based on the typical &lt;a href="https://www.techpowerup.com/gpu-specs/radeon-rx-6700.c3716"&gt;TechPowerUp database&lt;/a&gt;, &lt;a href="https://www.techspot.com/reviews/graphics-cards/"&gt;individual testing&lt;/a&gt;, or &lt;a href="https://youtu.be/UpgvzH3-w2I?si=qeqdce-TbLHnq4Kc"&gt;random youtube videos&lt;/a&gt;. Ultimately, none of these are 100% perfect and many have known (and oft-ignored) flaws in &lt;a href="https://hole-in-my-head.blogspot.com/2024/02/we-need-to-talk-about-fps-metrics.html"&gt;their and&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2023/03/we-need-to-talk-about-fps-metrics.html"&gt;our methodology&lt;/a&gt; &lt;a href="https://youtu.be/WCIEzrFwKzM?si=11mDZQpJcDqLUScr&amp;amp;t=1250"&gt;for comparisons&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The only truly accurate comparison is by direct capture of the output of a system, such as &lt;a href="https://www.youtube.com/@DigitalFoundry"&gt;Digital Foundry perform&lt;/a&gt;. Even frametime captures from internal software monitoring systems (e.g. RTSS, FrameView, etc) only come close to the &lt;i&gt;actual&lt;/i&gt;&amp;nbsp;end-user experience as they can't take into account the disparity between the system output the display.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, if we had to be perfect, there would be almost no analysis or discussion in the world - and I think that would be very boring and trite. We need new voices in the space and new challengers to actually create a (preferably amicable) discourse. 
So, with that out of the way, let's address the issues with Sony's disclosure...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaaURsSbcDhj3AA5saSk6UpZBWGL83dMtRnhRBQIYZHexPaoigKppt_9YkF9H8oO19jtXTTa4xOU8SnQ6Hv9To0ZMLIyiK1f9K1D-LB1Z6iQI6gZ1TE7XUtdyJ1jfyIsDuQOWmST1gBInvFy9WoSUrPzlenkxEzgG_2jOdqmSidywHinlZfOb75BCbo-k/s621/Toms%20scaling.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="273" data-original-width="621" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaaURsSbcDhj3AA5saSk6UpZBWGL83dMtRnhRBQIYZHexPaoigKppt_9YkF9H8oO19jtXTTa4xOU8SnQ6Hv9To0ZMLIyiK1f9K1D-LB1Z6iQI6gZ1TE7XUtdyJ1jfyIsDuQOWmST1gBInvFy9WoSUrPzlenkxEzgG_2jOdqmSidywHinlZfOb75BCbo-k/s16000/Toms%20scaling.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;&lt;a href="https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html"&gt;Tom's scaling&lt;/a&gt; shows that both the RX 6800 and 7700 XT are around 45% better than the RX 6700 at 1440p Ultra, but are these the resolution and quality settings that Sony is testing against?&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As noted above, the main claims Sony has made that get everyone excited are a &lt;a href="https://www.youtube.com/live/X24BzyzQQ-8?si=ZwtVHrAD5phiapfH&amp;amp;t=203"&gt;45% increase in rendering&lt;/a&gt;, and a &lt;a href="https://www.youtube.com/live/X24BzyzQQ-8?si=BTggmuwBQmmwNdPc&amp;amp;t=215"&gt;2-3x ray tracing calculation&lt;/a&gt;&amp;nbsp;speed increase. Those sound great on paper but have several major caveats in that those same four principles for testing GPUs apply, here, as well!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If we gloss over the first three points, we find that the RX 6800 or RX 7700 XT are approximately 45% faster than the RX 6700, ignoring the fact that the RX 6700 has a faster core clock speed than the base Playstation 5 console does. &lt;a href="https://www.eurogamer.net/digitalfoundry-2024-amds-radeon-rx-6700-is-a-ringer-for-the-ps5-gpu-but-which-is-faster"&gt;Digital Foundry have effectively confirmed&lt;/a&gt; that this GPU is very similar in performance, even with higher clockspeeds, to the console hardware, mitigated by memory configuration and data locality due to the shared memory of the console.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That still leaves the last point in contention.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We know the console APUs of both the Xbox Series X/S and PS5 are limited in both their cache sizes and processing power because they are frequency limited. Looking at the data I gathered during my&amp;nbsp;&lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-simulating-ps5-pro.html"&gt;simulation of the PS5 Pro&lt;/a&gt;, I found that Ratchet and Clank was only performing at 80% of the same components at stock settings. 
Similarly, I found that Starfield was performing at 87% of stock. Locking the frequency of the CPU and GPU and power limiting them has a big effect on the performance!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This isn't the only aspect - practically all the data points we're talking about sourced from the web are performed on systems with DDR5 and Intel's 12th to 14th generations or AMD's 7000 or 8000 X3D parts - CPU, RAM, and PCIe configurations which will &lt;i&gt;not&lt;/i&gt;&amp;nbsp;cause a bottleneck to hinder the performance.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As we can see in Tom's Hardware's testing, summarised in the table above, switching from 1080p Ultra to 1080p Medium settings changes the performance increase from the RX 6700 10GB by a pretty large margin and I'm pretty sure that these tests are not considering ray tracing...&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;And this is one of the primary concerns in all this discussion - our data points are mostly far from the truth of the consoles' performance abilities.&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;And this says nothing of driver versions and windows versions, &lt;a href="https://youtu.be/izqEZmjTfuM?si=Epsuv1RutUGRwZqR"&gt;which we know can have an effect&lt;/a&gt;!&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;My own testing (for an upcoming blogpost) shows the effect of the CPU and subsystem capability on GPU performance and the CPU portion of the APU in the Playstation 5 and Xbox Series X are even more constrained than these desktop parts!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXQ12VOC9kMbfYA01k_FsGVGrDz5cxKvlqdaQZ_HnlLZS4GvqgUrpGxuYuG_R2d73cMEGLh-q-gcHW4lBEmiX6C9-_pxANr21uliMFDwiVDbHwbyOKaeQxrB7qkdBGZrnZQblEDNtx6nE9NKsZIGCRpDmEJ4ed_YQEUJMEBHsgHVO9F5YWpU-L9zpVL7A/s829/CPU%20scaling%201.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="403" data-original-width="829" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXQ12VOC9kMbfYA01k_FsGVGrDz5cxKvlqdaQZ_HnlLZS4GvqgUrpGxuYuG_R2d73cMEGLh-q-gcHW4lBEmiX6C9-_pxANr21uliMFDwiVDbHwbyOKaeQxrB7qkdBGZrnZQblEDNtx6nE9NKsZIGCRpDmEJ4ed_YQEUJMEBHsgHVO9F5YWpU-L9zpVL7A/w640-h312/CPU%20scaling%201.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;These are quite high settings on titles which can show the difference between CPU, GPU and memory system limitations...&amp;nbsp;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHPLBKmjxEx5AyDWdrnn1J5FNC9a6Roh9U0NHGM3u1R8-_WSfxgdqAi4HRSBTXAWnWEaZafQSAB5N0ZJ4i3tM3cATyjUQVwLcttSfsYqOQHKNYLt-hKX-s8-jwHqvvgp854e2kmBlsWpSLuKVkxLbSCoKqbhEiODH50Ez51RSDtxHo2qEyGcjiBhaE3VY/s849/CPU%20scaling%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="455" data-original-width="849" height="342" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHPLBKmjxEx5AyDWdrnn1J5FNC9a6Roh9U0NHGM3u1R8-_WSfxgdqAi4HRSBTXAWnWEaZafQSAB5N0ZJ4i3tM3cATyjUQVwLcttSfsYqOQHKNYLt-hKX-s8-jwHqvvgp854e2kmBlsWpSLuKVkxLbSCoKqbhEiODH50Ez51RSDtxHo2qEyGcjiBhaE3VY/w640-h342/CPU%20scaling%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The RX 7800 XT is really suffering under the yoke of the very limited Ryzen 5 4600G in non-synthetic workloads...&amp;nbsp;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Architectural Differences...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Added to &lt;b style="text-decoration-line: underline;"&gt;ALL OF THIS&lt;/b&gt; are the architectural differences. &lt;a href="https://hole-in-my-head.blogspot.com/2023/11/the-performance-uplift-of-rdna-3-over.html"&gt;I found&lt;/a&gt; that RDNA 3 likely had a lot of its performance uplift from the front-end clock frequency scaling - not found in RDNA 2 or in the N33-based parts. Something I hope to further confirm in the near future! RDNA 3 also has vastly increased FP32 throughput and other bonuses which may or may not contribute to additional performance depending on the game engine and specific game code (some engineers optimise!).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The APU on the PS5 has neither this, nor the large L3 infinity cache, nor the higher core frequency boosting behaviour of the desktop parts.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Plus, the Shader Engine configuration is completely different. The desktop parts of N32 (RX 7700 XT and RX 7800 XT) consist of three shader engines comprised of 2 shader arrays (5 WGP+, 1/2 rasteriser, 2 RB+) but the Playstation 5 Pro appears to be 2 shader engines of 2 shader arrays (8 WGP+, 1 rasteriser, ?? RB+*)&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;N31 and N33 are similar to each other but different from the above two configurations...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*This is an interesting thing... the RB configuration of the PS5 Pro is presumed to be 96 ROPs but this may not be true and could result in lower performance. For sure, the rasteriser config is more optimal than the desktop parts as it was in the base PS5 - I'm assuming they're the same! 
- but if the ROP configuration is still 64, the Pro will be struggling with the output of all those ALU operations...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;div style="text-align: justify;"&gt;So, what are we to do?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Well, my own interpretation of the answer to this question is: Let's use logic!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Logical Argument (IMO)...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;Here's the thing - it doesn't really matter what the CPU and GPU are capable of in relation to desktop parts - we &lt;i&gt;can&lt;/i&gt;&amp;nbsp;reason things out based on our (very detailed) understanding of what those desktop parts are capable of!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, here's my reasoning - see if you agree!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Rasterisation is the pinnacle of performance. Our GPU architectures are primarily optimised to render a requested frame through the rasterisation of triangles and the application of textures and lighting to those triangles. It's as simple as that. This means that the rasterisation rendering performance (and my nomenclature may not be perfect here, so please forgive me!) is the highest potential performance of any given GPU and GPU/memory architecture.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Real-time ray tracing adds further work - ray intersections and other calculations - which must be performed before the frame can continue to be rendered. The important part, here, is that the frame is still rasterised! It's just that the lighting/sound information applied to a particular pixel/triangle will be adjusted based on the output of the ray traced calculations.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In order to do this, the GPU must &lt;i&gt;first&lt;/i&gt;&amp;nbsp;calculate the RT portion of the frame data. It's important to note that this isn't happening in parallel! So, even though Nvidia and Intel GPUs have portions of their GPU dies dedicated to hardware that can improve the calculation speed of the ray tracing, clock cycles of each frame are spent &lt;i&gt;only doing that&lt;/i&gt;, before the GPU's ALUs/shaders can be put to work on the meat and potatoes of the frame the user is waiting to see.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Maybe I've misunderstood all of the above - though I don't believe I have!
But please correct me if I am because it forms the premise of the whole argument here:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Using RT effects means you will always be some percent &lt;i&gt;below&lt;/i&gt;&amp;nbsp;the theoretical 100% rasterised rendered frame time - because the ray tracing operations take some amount of performance away from the GPU's ability to render the frame with traditional rasterising techniques.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What that percentage is depends on the ability of the architecture (dedicated calculation hardware, cache size, memory hierarchy, and memory hierarchy bandwidths and associations) to do that work.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;u&gt;This means that the pure rasterisation frame rendering performance of a GPU is its upper limit - a ray traced frame will only ever be able to achieve a percentage of that 100% frame time.&lt;/u&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Using this logic, let's take a look at the potential RT performance of the Playstation 5 Pro...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwkzGax9FEe3cNuJSyWPp-HDRB-n1Ix94i1CN-N5xQ0Is8_a7xtnH4G0qjSLdATh6cwYx3n_ONt3GaOXkiMb4FRsaxBGCL1rha1ZsrMRdy7VUVXdaMPxHGW2SbpDn8Buf1nEZixwWe3OpADznTUS2w-uz8-dioOlhcr0p4a6uqV26KjVQEXxhWMCkxD2g/s773/Reasoning%201.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="773" data-original-width="743" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwkzGax9FEe3cNuJSyWPp-HDRB-n1Ix94i1CN-N5xQ0Is8_a7xtnH4G0qjSLdATh6cwYx3n_ONt3GaOXkiMb4FRsaxBGCL1rha1ZsrMRdy7VUVXdaMPxHGW2SbpDn8Buf1nEZixwWe3OpADznTUS2w-uz8-dioOlhcr0p4a6uqV26KjVQEXxhWMCkxD2g/w616-h640/Reasoning%201.PNG" width="616" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Here we have the raw fps results from the &lt;a href="https://www.techpowerup.com/review/sapphire-radeon-rx-7700-xt-pulse/6.html"&gt;raster&lt;/a&gt; and &lt;a href="https://www.techpowerup.com/review/sapphire-radeon-rx-7700-xt-pulse/34.html"&gt;ray tracing&lt;/a&gt; performance &lt;a href="https://www.techpowerup.com/review/sapphire-radeon-rx-7700-xt-pulse/32.html"&gt;covered by TechPowerUp&lt;/a&gt;. These are then compared to show a % performance loss due to ray tracing...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Using the data from a review of the RX 7700 XT Sapphire Pulse, we can see the amount of performance loss per title when ray tracing is enabled. From this data, I can calculate the performance difference between RDNA 2 and RDNA 3 (&lt;a href="https://www.pcworld.com/article/1370428/amd-rdna-3-radeon-rx-7900-xtx-reveal.html"&gt;which we know is 50% per CU&lt;/a&gt;).
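&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;To make that per-title calculation explicit, here's a minimal sketch of the method (the game names and fps values are purely illustrative placeholders, not the TechPowerUp figures):&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;# Sketch: from raster-only and RT-enabled fps, work out the percentage
# performance loss due to ray tracing and the per-frame time cost of
# the RT work. Values are placeholders for illustration only.

titles = {
    # name: (raster_fps, rt_fps)
    "Game A": (120.0, 90.0),
    "Game B": (80.0, 40.0),
}

for name, (raster_fps, rt_fps) in titles.items():
    loss = 1.0 - rt_fps / raster_fps                 # share of fps lost to RT
    rt_ms = 1000.0 / rt_fps - 1000.0 / raster_fps    # extra ms per frame
    print(f"{name}: {loss:.0%} fps loss, ~{rt_ms:.1f} ms of RT work per frame")&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;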
Using this, I will make the assumption that Sony are not speaking of 2-3x RT calculation performance uplift of the entire GPU (which would include CU increase) and instead assume that they are &lt;i&gt;per CU&lt;/i&gt;&amp;nbsp;in the design*.&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*This is by far the best case scenario for the Pro's RT performance and goes against Sony's other metrics which are APU to APU comparisons... if we take this metric as a per die comparison figure, then the RT performance of the PS5 Pro is no better than a downclocked RX 6800 - or thereabouts...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMWKVQYTxhoCOJ9lAsudxy0LX08kV2uJOqyxCEAcazWqZR_wAcykufHBnWbWHvtlQK0zAmR56AYvAtaCwMSIl0mANy67i2grEL6FL0df26BYavSxssgrQTxBwihYzBhi1vETs0vvZ6I1PXDwzVVO2-QUFNNbwtEt_YMEoeIU_mEainpFFahMwsYLm5yIM/s779/Reasoning%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="251" data-original-width="779" height="206" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMWKVQYTxhoCOJ9lAsudxy0LX08kV2uJOqyxCEAcazWqZR_wAcykufHBnWbWHvtlQK0zAmR56AYvAtaCwMSIl0mANy67i2grEL6FL0df26BYavSxssgrQTxBwihYzBhi1vETs0vvZ6I1PXDwzVVO2-QUFNNbwtEt_YMEoeIU_mEainpFFahMwsYLm5yIM/w640-h206/Reasoning%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Here, I work out the percent uplift from RDNA2 to RDNA3, based on the known 50% uplift - with almost all other factors constant based on N22 to N32 comparison...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglQKoHo6sJr6BkPhc9-uWNd8JB5IA4QvKd0hTUM47OpdDuzvXg8Ciqmq078ddJb4Cv3qF1Tavp11WZ2kbFbgpjdWjsHXeKMImyUfUaVPx7BdR0Q7vVGddRZyDs1drpTAxVI6xFh13qAho9WKFjDW4x7_3hKxfJulIUfqNXW642OO8jFMOEIeo0-lnwzgs/s421/Reasoning%202.5.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="249" data-original-width="421" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglQKoHo6sJr6BkPhc9-uWNd8JB5IA4QvKd0hTUM47OpdDuzvXg8Ciqmq078ddJb4Cv3qF1Tavp11WZ2kbFbgpjdWjsHXeKMImyUfUaVPx7BdR0Q7vVGddRZyDs1drpTAxVI6xFh13qAho9WKFjDW4x7_3hKxfJulIUfqNXW642OO8jFMOEIeo0-lnwzgs/s16000/Reasoning%202.5.PNG" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Here I calculate the fps per title, taking into account the 2x and 3x RT performance uplift against the loss of performance from pure rasterised frame to the ray tracing-included frame...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We can see that in some titles, the effect of ray tracing is very minimal and 
in others is quite severe! Just one of the reasons this comparison is very difficult! Adding to that, we cannot separate differences in cache size, association, and memory hierarchy bandwidth differences - but this is basically as close as it gets for this size of GPU...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Finally, I take that data and plug it into the real data for the desktop GPUs running on desktop CPUs.(again, not realistic! But humour me!) I decided that I would take the average of this data to present 2.5x as the RT performance of the PS5 Pro.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5qI2OMBCBvFd64NYl5iicYW64Gp87KKlLaSUzD7EUmx7ZmOnYjPlwwd1q_kkoanoxC9eySXQN80dPfD1hedu3IzLEPjZtMsQ3xfWvAzU4K8njZfdzNagPe0FHCudwxS05LRRxp1znaHU3WuO0BTFo-H9yUF8ETqGHIp2x9au3UGcxVrQiCFT6e1JXEIc/s1159/Reasoning%203.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="559" data-original-width="1159" height="308" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5qI2OMBCBvFd64NYl5iicYW64Gp87KKlLaSUzD7EUmx7ZmOnYjPlwwd1q_kkoanoxC9eySXQN80dPfD1hedu3IzLEPjZtMsQ3xfWvAzU4K8njZfdzNagPe0FHCudwxS05LRRxp1znaHU3WuO0BTFo-H9yUF8ETqGHIp2x9au3UGcxVrQiCFT6e1JXEIc/w640-h308/Reasoning%203.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Finally! 
The REAL performance of the PS5 Pro GPU (not)...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxoJF-1jtA5kesm6Xz7BmDKAuDhCphTa2UTLGUyZAPV4uFYCZQcHmoZaChzhYAg_xulZf28rfhpkh9nx0pOQ2hHCM-0eAP7zyV_pv_AUUtObJEQ1B3pVf42bbozpmSEl605IgDMy3jZhZPNP8eH5MjNKlYNA_6UVPRFAbu_SkULHAaxZjJe-7vHpZPjYI/s317/Reasoning%204.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="285" data-original-width="317" height="285" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxoJF-1jtA5kesm6Xz7BmDKAuDhCphTa2UTLGUyZAPV4uFYCZQcHmoZaChzhYAg_xulZf28rfhpkh9nx0pOQ2hHCM-0eAP7zyV_pv_AUUtObJEQ1B3pVf42bbozpmSEl605IgDMy3jZhZPNP8eH5MjNKlYNA_6UVPRFAbu_SkULHAaxZjJe-7vHpZPjYI/s1600/Reasoning%204.PNG" width="317" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;For some reason, I forgot this data in the chart above - I believe it was due to aesthetics...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What we can see from these calculations is that a theoretical PS5 Pro GPU with 2.5x RT calculation performance &lt;i&gt;&lt;u&gt;per CU&lt;/u&gt;&lt;/i&gt; would behave &lt;i&gt;very&lt;/i&gt;&amp;nbsp;similarly to an RX 7700 XT - winning in some scenarios/game engines but drawing equal in others.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's literally impossible for the ray tracing performance of the GPU to be greater than the rasterisation increase of 1.45x - which is approximately the RX 6800. 
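&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Here's a minimal sketch of why the rasterised frame rate acts as that ceiling (again, the fps values are placeholders, not measurements): speeding up only the RT portion of the frame time means the result can approach, but never exceed, the pure-raster figure.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;# Sketch: apply a 2x / 2.5x / 3x speed-up to only the RT-attributable
# portion of the frame time and see how close the result gets to the
# pure rasterisation ceiling. Placeholder values for illustration.

raster_fps = 100.0        # pure rasterisation (the ceiling)
rt_fps = 60.0             # same GPU and settings with RT enabled

raster_ms = 1000.0 / raster_fps
rt_work_ms = 1000.0 / rt_fps - raster_ms    # frame time spent on RT work

for speedup in (2.0, 2.5, 3.0):
    new_frame_ms = raster_ms + rt_work_ms / speedup
    new_fps = 1000.0 / new_frame_ms
    print(f"{speedup}x faster RT: {new_fps:.1f} fps (ceiling {raster_fps:.0f} fps)")&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;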
What &lt;i&gt;is&lt;/i&gt;&amp;nbsp;probable, is that the %loss due to the extra burden of performing the ray tracing calculations is significantly reduced which we can see in the above chart in relation to the RX 6800 RT performance.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;These comparisons and calculations are fraught with errors and issues due to their very nature but, in my honest opinion, these are the most detailed determinations of the potential performance of the console out there at this point in time...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In reality - the GPU in the console will not perform as well as any of these calculations in scenarios were the CPU is the limiting factor as the i9-13900K used by TechPowerUp is a far cry from the Ryzen 5 4700G equivalent found in the console APUs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let me know your thoughts in the comments!&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/09/how-powerful-is-ps5-pro.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzs2nN3czjHINQK0AhWl-QgJ0J8fLovYf0VX6dSgT9t8zwiMdCMQtskKG0PCR7r8fA5qTGVdPUum6J5XpXwmyO60pn6J0NikRz1bAw5Pdesy0oEbRb2dAzoeImALvZmwp9y9xWFkv3GrKKnjLSXnBpMUrMWFti_LYonlyDXLDpLeSZIHjG4QhyphenhyphenLlhOP6s/s72-w640-h360-c/Title.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-6689005423924651947</guid><pubDate>Mon, 09 Sep 2024 11:54:00 +0000</pubDate><atom:updated>2024-09-09T13:02:48.094+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">Roundup</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>AMD's New Reality...</title><description>&lt;div style="text-align: left;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDpjEtKVkgLx1HAPIWq1sLVRV7mDN3cKNJkauRJ9i9MIFryxesINkccYYJHkpk8WBnLHOdmuslbrrLQQMCskaQjMEpVV6ze7Tit_nACEPJ_3aTIfcfS2yfKrpz6Okzo_04HvIT8ri8euKmm1Duw3wuF1Oj6yPyzUQChfeeyb85wsRbDI9O9KtxxcGrkCA/s1920/Title%201.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDpjEtKVkgLx1HAPIWq1sLVRV7mDN3cKNJkauRJ9i9MIFryxesINkccYYJHkpk8WBnLHOdmuslbrrLQQMCskaQjMEpVV6ze7Tit_nACEPJ_3aTIfcfS2yfKrpz6Okzo_04HvIT8ri8euKmm1Duw3wuF1Oj6yPyzUQChfeeyb85wsRbDI9O9KtxxcGrkCA/w640-h360/Title%201.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: 
center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Last time, &lt;a href="https://hole-in-my-head.blogspot.com/2024/08/microsoft-and-intel-are-on-ropes-whats.html"&gt;I pontificated on the current situation&lt;/a&gt; surrounding two of the gaming industry greats. Today, I'm going to address &lt;a href="https://www.tomshardware.com/pc-components/gpus/amd-deprioritizing-flagship-gaming-gpus-jack-hyunh-talks-new-strategy-for-gaming-market?utm_medium=social&amp;amp;utm_source=twitter.com&amp;amp;utm_campaign=socialflow"&gt;&lt;i&gt;very&lt;/i&gt;&amp;nbsp;recent comments&lt;/a&gt;&amp;nbsp;on their current strategies relating to the consumer (client) graphics card market...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It may not be what you think it is...&lt;br /&gt;&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Setup...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Recently, &lt;a href="https://www.tomshardware.com/pc-components/gpus/amd-deprioritizing-flagship-gaming-gpus-jack-hyunh-talks-new-strategy-for-gaming-market?utm_medium=social&amp;amp;utm_source=twitter.com&amp;amp;utm_campaign=socialflow"&gt;Jack Hyunh spoke with Tom's Hardware&lt;/a&gt; and outlined some of AMD's broad-strokes reasoning relating to the consumer GPU market.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The issue, here, is that Jack specifically came out with a couple of choice quotes relating to strategy of client (i.e. "us") devices, going forward:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;"Of course, we have to because that’s performance-per-dollar. Even Microsoft said Chat GPT4 runs the fastest on MI300. Here's the thing: In the server space, when we have absolute leadership, we gain share because it is very TCO-based [Total Cost of Ownership]. In the client space, even when we have a better product, we may or may not gain share because there's a go-to-market side, and a developer side; that's the difference.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;So, yeah, absolutely, we want to be the best [in the data center]. That’s why EPYC now has one-third of the world's market share.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;On the PC side, we've had a better product than Intel for three generations but haven’t gained that much share. So, to me, that means that it's the developers, it's the go-to-market, and that's where I'm focusing now. 
I think building a great product in the client [consumer] market gets us to 20% market share by pure grinding, but to go to 40% is another gear, and that’s the machine I’m trying to build.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;But don’t worry, I love gaming. When I present to the board, I say gaming is a strategic pillar in my strategy. I actually talk about a few things: commercial, PC, and gaming.&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #cc0000;"&gt;[..] Don’t worry. We will have a great strategy for the enthusiasts on the PC side, but we just haven’t disclosed it. We'll be using chiplets, which doesn't impact what I want to do on scale, but it still takes care of enthusiasts. [...] Don't worry, we won’t forget the Threadrippers and the Ryzen 9’s."&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div&gt;&lt;div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Now, this &lt;a href="https://www.techradar.com/computing/gpu/amd-could-be-shifting-to-focus-on-the-mid-range-gpu-market-heres-why-thats-a-good-thing"&gt;was immediately&lt;/a&gt; &lt;a href="https://videocardz.com/newz/amd-strategy-shifts-to-prioritize-market-scale-over-enthusiast-radeon-gpus"&gt;picked up&lt;/a&gt; &lt;a href="https://www.techpowerup.com/312116/amd-retreating-from-enthusiast-graphics-segment-with-rdna4?cp=4"&gt;on&lt;/a&gt; &lt;a href="https://wccftech.com/amd-confirms-focus-on-mainstream-segment-first-rdna-4-gpus-compete-against-nvidia/"&gt;by various&lt;/a&gt; &lt;a href="https://www.guru3d.com/story/amd-shifts-focus-away-from-enthusiast-gpu-segment-in-strategy-to-gain-market-share/#:~:text=Jack%20Huynh%2C%20AMD's%20head%20of,directly%20with%20Nvidia's%20flagship%20offerings."&gt;media outlets&lt;/a&gt; and through Tom's own analysis as being quite the move... Even, I, in my infinite wisdom, was found to be in awe of AMD's apparent turnaround from their prior, previously held conclusion that top-end products drive sales in the lower-end, too!&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The question we're facing is whether this is actually true or not!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I would argue that, on a global scale, marketing works on this entire principle. You see a pro athelete or team wearing the products of a certain brand and you will be more likely enticed to buy &lt;i&gt;something&lt;/i&gt;&amp;nbsp;from that brand in order to "attune" or "align" with your favourite or aspirational vision of reality.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In clothing, fashion, and general societal cachet, these assumptions are met with a general agreement - whereby consumers will attach themselves to ancilliary products attached or aligned to their aspirational models.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In gaming? 
For console gamers this model works well; with influencers depitcting the "image" of "ideal" that the consumer would be able to aim for.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For PC gamers, I would argue that this is still highly prevalent but to a lesser degree when it comes to hardware. My reasoning is that PC gamers, for the most part, have some technical knowledge, meaning that they can be more discerning of the merits of what they wish to buy to persue their hobby.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Conversely, many sports fans do not know how to sew a shoe or shirt. This limits their participation to a level of "support" or "no support". Now, a sports fan's support may arrive or be applied on many different levels - from buying beer and watching events in a sponsored bar, or all the way up to commisioning personalised branded items from an accredited outlet.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Gamers don't have as many levels of "appreciation" available to them and often, "gamers" as a percentage mass of society, do not coalesce into such such a monolithic society of appreciation as sports fans do and are able to...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Gamers want performance and lack of technical issues. They have a more direct input into their own experiences and executive decisions than the typical sports fan and that needs to be captured in any analysis of the market other than a simple "winner affects all purchasing decisions rhetoric"!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiP1q47T0hBFjLgdpcVhbHuWgP0I-0b5JIAKG3O9tasBGIrmvMrD_RnDL4t6iyTGRsRy93FzaeGCJV9l1UM8IuUZpHqRRT9hWGbukuQCus7DlnoX5PKxekyutJRi3cp7-NcPmVKYzg8RK3BZU6_RGryxUir-9Rtvyq51c4Vegzkhz_jhwFpcPQ0g6A8r3k/s1920/Header.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiP1q47T0hBFjLgdpcVhbHuWgP0I-0b5JIAKG3O9tasBGIrmvMrD_RnDL4t6iyTGRsRy93FzaeGCJV9l1UM8IuUZpHqRRT9hWGbukuQCus7DlnoX5PKxekyutJRi3cp7-NcPmVKYzg8RK3BZU6_RGryxUir-9Rtvyq51c4Vegzkhz_jhwFpcPQ0g6A8r3k/w640-h360/Header.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;A buyers market...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Reality...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;These comments may not actually reflect AMD's position on the client GPU market in the way we think they do. Mr. Hyunh interchanges CPU, GPU, client and business throughout his brief blurb. 
It's very difficult to pin down any particulars to the specific market segments other than the fact that "volume" and "marketshare" is the priority in the consumer (client) GPU space.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That doesn't preclude high-end parts. It doesn't preclude any specific products. What it does, or more specifically, what he &lt;i&gt;says&lt;/i&gt;&amp;nbsp;is that he is going to focus on developers to help move products.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's the big takeaway.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;To put it another way, my re-reading of this interview is: &lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;"Relationships with various business segments will drive user adoption."&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In that context, the interview is a big nothing. Who (or what corporate entity) wouldn't state that they plan to grow the business by focussing on the business relationships which create value with which to grow the total addressable market for those same products?!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I think this interview and these quotes are taken completely out of understanding where the speaker is in their mindset.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Hope...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;There is one thing that I see in this interview that I am positive about and I've not seen it mentioned anywhere else: AMD may spend the money that they would have spent on a hypothetical high-end product on developer buy-in and adoption.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;All these years, Nvidia have been spending away on "RTX" implementations to help developers use those technologies and, by and large, AMD has sat on the sidelines ignoring the issues developers face in the short-term, instead focussing on what developers face in the long-term. The problem is that consumers don't care about the difficulties of developers and will never worry about back-end problems.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What they will care about is features added to new games that they can use to get more "performance" (read: usability) out of their hard-earned hardware. Nvidia offers that, AMD doesn't - or at least not to the extent of Nvidia.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;While I disagree with the concept that AMD don't need to compete at the high-end of the performance spectrum to "enable" lower end buy-in from consumers. I do see the need for them to spend more of their resources on enabling developers to push their implementations and hardware to help their public perception and sell more hardware units. 
If this can't be achieved with their current budget considerations, then pulling away from one-upmanship at the premium tiers to focus on this for their architectures is a huge win, in my opinion...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This strategy doesn't preclude them from re-entering the market with &lt;u style="font-style: italic; font-weight: bold;"&gt;earth-shattering performance&lt;/u&gt; at any future time and date. It just tells you that they want to become synonymous with features, like Nvidia are.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;AMD are already comparable in terms of stability and performance. What they lack is that extra mindshare, both in the sellers and consumers.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Finale...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If, all things being equal, Nvidia was no more of a choice over AMD than a monitor with "Freesync with G-sync compatible" hardware then it is conceivable that the hardware share of the sales market* would increase for AMD... if they were cheaper.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Despite any business-orientated hurdles that the company may have to overcome, the final hurdle will be user-mindshare and that, unfortunately, is very much in the minority. To turn that perception around, AMD will &lt;i&gt;need&lt;/i&gt;&amp;nbsp;to compete in the higher-end space. It's simple psychology...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*Because that's what we're talking about here...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/09/amds-new-reality.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDpjEtKVkgLx1HAPIWq1sLVRV7mDN3cKNJkauRJ9i9MIFryxesINkccYYJHkpk8WBnLHOdmuslbrrLQQMCskaQjMEpVV6ze7Tit_nACEPJ_3aTIfcfS2yfKrpz6Okzo_04HvIT8ri8euKmm1Duw3wuF1Oj6yPyzUQChfeeyb85wsRbDI9O9KtxxcGrkCA/s72-w640-h360-c/Title%201.jpg" width="72"/><thr:total>4</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-4635441739500034044</guid><pubDate>Tue, 03 Sep 2024 17:46:00 +0000</pubDate><atom:updated>2024-09-03T18:46:44.819+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Is the PS5 Pro really equivalent to an RTX 4090? 
(No!)</title><description>&lt;div style="text-align: left;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgo1p0vx-idrT6YP9iezVPHo3Jf3OrQc5_WtHAEGVWqyUkVeLsD560Bo4yFsVNLkmXsLimH8tccFTHI9mgCQlPM4nME2DrKD6UrHxHogyGxRnzvLH_HtN44pQSh063mLS2TdDP9x3JWE1wwz_Kf4gBoDFvwNDJ_u64koWGd5XJHRGseEne4Ty_HCrNoPx4/s1920/Title.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgo1p0vx-idrT6YP9iezVPHo3Jf3OrQc5_WtHAEGVWqyUkVeLsD560Bo4yFsVNLkmXsLimH8tccFTHI9mgCQlPM4nME2DrKD6UrHxHogyGxRnzvLH_HtN44pQSh063mLS2TdDP9x3JWE1wwz_Kf4gBoDFvwNDJ_u64koWGd5XJHRGseEne4Ty_HCrNoPx4/w640-h360/Title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Playstation 5 Pro fever is nearing its zenith, with speculations becoming rife and consumers beginning to fantasize about what, exactly, the new console will be capable of. This is a fun look at some of the recent nonsense coming out of the Twittersphere on the subject:&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh84JTm2BWaZKG-UZelOWowoSzRb5aGJY-G0rJ8TrpCgm_5PixGCTJcym7GimE9aPFW6vdr9rQdN7e0zla0jw3cKc-n5MwU1YPeT1zHARS0TvOq3sl8NHX4uEU73Qaa5I-PUOVsPjMXU2i_f_oQ47NZjDXr0fdhEyGH4qPzgkymKLey7zDQwmGDAF8yeJ4/s727/Tweet.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="727" data-original-width="611" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh84JTm2BWaZKG-UZelOWowoSzRb5aGJY-G0rJ8TrpCgm_5PixGCTJcym7GimE9aPFW6vdr9rQdN7e0zla0jw3cKc-n5MwU1YPeT1zHARS0TvOq3sl8NHX4uEU73Qaa5I-PUOVsPjMXU2i_f_oQ47NZjDXr0fdhEyGH4qPzgkymKLey7zDQwmGDAF8yeJ4/w538-h640/Tweet.PNG" width="538" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;&lt;a href="https://x.com/mrpyo1/status/1829183659217387685"&gt;Captured for posterity before "Pyo" comes to his senses...&lt;/a&gt;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;As POWERFUL as the RTX 4090...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As easy as it might be to discredit such a claim or "imasgibation"* where a graphics processing unit is compared to a fully functional Death Star I'm not that sort of commentator. 
Let it never be said that Duoae was a man who minced words but not numbers whilst navel-gazing!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;&lt;b&gt;&lt;i&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*I just invented this word, I hope it becomes a new meme...&lt;/span&gt;&lt;/blockquote&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;Previously, I &lt;a href="https://hole-in-my-head.blogspot.com/2024/03/analyse-this-simulating-ps5-pro.html"&gt;simulated the PS5 Pro&lt;/a&gt;, using a Ryzen 5 4600G, 16GB of DDR4 3200 RAM, an SSD, and an RX 7800 XT. Both the CPU and GPU were clock-locked to the specifications of the PS5 (and rumoured Pro) - which I should note are both &lt;i&gt;below&lt;/i&gt;&amp;nbsp;the stock specifications for the PC parts!! - and the analysis can be found in that link just above.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;What I can see from my testing is that the clock-locked RX 7800 XT performs approximately equivalently to the stock RX 6800 in both Ratchet and Clank and Starfield. So, using some newly-generated data I have at hand, I will use this card as a comparison point...&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unfortunately, I don't have an RX 6700 10GB to compare against (for the base Playstation 5 system), which would be nice... and buying one has become a very expensive proposition - so, given I'm self-funded, that's likely to never happen, now. I also don't have an RTX 4090 to be able to pair with the 4600G to test this out but I &lt;i&gt;DO&lt;/i&gt;&amp;nbsp;have an RTX 4070 Super, which, while not as powerful as an RTX 4090, is more powerful than the RX 7800 XT (at stock settings) in both raster and when using ray tracing.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, let's take a look at what this looks like!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYsPRtvLWL6Kdfwi95UVHH16oDlIMjquZ-uJwCKqqBukhOdjH92abgvmgYZdpQIyh5bTIiO87RF0mAY9jNKG6xKmcdSpDbSRdfW7hmKz9VKKW58m1E4mml-0K7uOfAo8OCemyr7l-2lQyQ3FnIhSYIaZZxNjKZimmMm0Xf9QMAG3Md7L8ymb4mqDNIugQ/s729/4600G.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="455" data-original-width="729" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYsPRtvLWL6Kdfwi95UVHH16oDlIMjquZ-uJwCKqqBukhOdjH92abgvmgYZdpQIyh5bTIiO87RF0mAY9jNKG6xKmcdSpDbSRdfW7hmKz9VKKW58m1E4mml-0K7uOfAo8OCemyr7l-2lQyQ3FnIhSYIaZZxNjKZimmMm0Xf9QMAG3Md7L8ymb4mqDNIugQ/w640-h400/4600G.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Boom!
Tested at 1080p with high/ultra settings in each game (except Alan Wake 2 which used the low RT preset)...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's &lt;i&gt;not great&lt;/i&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In almost all scenarios, even though both GPUs will likely be suffering from some form of CPU bottleneck (see below) the games running on the RTX 4070 outperform the same sections tested on the RX 6800.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Looking at the GPUs in a selection of titles being run on a number of different CPUs, it becomes clear that the RX 6800 actually isn't even that CPU-bound by the lowly 4600G at the high settings tested and is, instead, limited by its graphical abilities!&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCaXlv5FRyF8vP3TCv87bbZ1rIAQbQNF0s1pqtzbPXuOY59w4tHJ2eN1t8SmPs-dWND9KDX6MFR81SJGfw33sQGt1QLclGwqzvUNi3qUcIU7fmCOsGUlD45RkkI3KrpzT-KcwxuPF9yN6JHitRnywcfUbzNgtaLxn56lPebc8mvO81ocHDXTzObjcPOmg/s1145/CPU_scaling.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="343" data-original-width="1145" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCaXlv5FRyF8vP3TCv87bbZ1rIAQbQNF0s1pqtzbPXuOY59w4tHJ2eN1t8SmPs-dWND9KDX6MFR81SJGfw33sQGt1QLclGwqzvUNi3qUcIU7fmCOsGUlD45RkkI3KrpzT-KcwxuPF9yN6JHitRnywcfUbzNgtaLxn56lPebc8mvO81ocHDXTzObjcPOmg/w640-h192/CPU_scaling.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Even here, the RX 6800 is less CPU-bound than the RTX 4070 Super...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Now, sure, you could argue that these settings would never be used on a console - and you'd be right! 
However, that just raises the possibility that the RTX 4070 Super would pull even further ahead, anyway!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;While I don't have that specific data, we can simulate such a situation by comparing the percentage increase for each graphics card when going between the anaemic Ryzen 5 4600G to the relatively powerful i5-12400.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCf_O_seE_tM9e33J1Wki6UMRAkDD3-drJTxFjgn6y9XAJvNo3NOEi4M7E0DJXWo3JPPhOOf64U0NiCdd87WpxKmpXtkHZvBP1WjuF0yn11GSrzx39bvcORZ6zC-3dAWXlcqEiDUth5251MVNdK6Gp8vcIkSW9y8sbT4p4HNf6O1fUNm_EZ7egFmGTREw/s763/Relative%20peformance%202.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="451" data-original-width="763" height="378" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCf_O_seE_tM9e33J1Wki6UMRAkDD3-drJTxFjgn6y9XAJvNo3NOEi4M7E0DJXWo3JPPhOOf64U0NiCdd87WpxKmpXtkHZvBP1WjuF0yn11GSrzx39bvcORZ6zC-3dAWXlcqEiDUth5251MVNdK6Gp8vcIkSW9y8sbT4p4HNf6O1fUNm_EZ7egFmGTREw/w640-h378/Relative%20peformance%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Even a CPU-boost doesn't really help the RX 6800 perform much better in the majority of titles...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We can see that the RTX 4070 Super, on average, benefits much more from the increase in CPU power, meaning it has a lot left in the tank, graphically speaking... 
and these are not clock-limited CPUs, we're talking about - so, scale back that 4600G performance by another small margin!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, no, the PS5 Pro will not have the graphical horsepower of the RTX 4090.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;span style="color: #274e13; font-size: large;"&gt;Imasgibation debunked!&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicz1ePNxHVoYANHGZDvcubxQ5ZMddxomNNQx3kO8GfLDcVF6tcn0-QGh-Wr3er8qyVT5kKIQ-Ggk2_x5_0appoTQV1e3irka70M_FK2nFsiZjPb6HdjKcFrnrng3oxhRQgySTB_valiXUarBfsIGkgyGNnlStizxY-3gf15Kx5fYYwsJlm8mGvb2rhymg/s613/Tweet%202.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="613" data-original-width="593" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicz1ePNxHVoYANHGZDvcubxQ5ZMddxomNNQx3kO8GfLDcVF6tcn0-QGh-Wr3er8qyVT5kKIQ-Ggk2_x5_0appoTQV1e3irka70M_FK2nFsiZjPb6HdjKcFrnrng3oxhRQgySTB_valiXUarBfsIGkgyGNnlStizxY-3gf15Kx5fYYwsJlm8mGvb2rhymg/w620-h640/Tweet%202.PNG" width="620" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;&lt;a href="https://x.com/NotJayzOneCent/status/1830667511446306995"&gt;Captured for posterity...&lt;/a&gt;&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The PC Equivalent to the PS5 Pro...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, it seems clear to me that Hugh is not a sincere person and may, in fact, be having a little fun given his profile is labelled as "parody". However, enough people re-Tweeted his reaction to Wccftech's article that I felt I could include it here.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I think the second thing to note about this PC build is that it assumes a few key things:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ol&gt;&lt;li&gt;You can squeeze an AM4 CPU into an AM5 socket.&lt;/li&gt;&lt;li&gt;You don't know how to budget.&lt;/li&gt;&lt;li&gt;You know nothing about component performance.&lt;/li&gt;&lt;/ol&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Getting those out of the way, you realise that you can do far better than ol' Hugh in terms of cost! &lt;a href="https://pcpartpicker.com/list/D6hPMV"&gt;Here's my 5 minute white build&lt;/a&gt;, which would blow both the PS5 and PS5 Pro out of the water! 
(And also looks better, IMO!!)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;a href="https://pcpartpicker.com/list/fNNnZJ"&gt;Here's another, this time using Intel!&lt;/a&gt;&amp;nbsp;&lt;a href="https://pcpartpicker.com/list/BdXsz6"&gt;Or DDR5!&lt;/a&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;All of these are around $850... so, that's a pretty big saving! (Actually, around the cost of a base PS5!)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;But let's get even more specific! The 5700X is a beast of a CPU compared to the measly 4600G/4700G. It not only has a better, more performant core architecture, it also has a larger L3 cache (important for gaming!) and a faster boost clock frequency. The CPU also operates at a much higher TDP than the silicon in the PS5/Pro will have available to it... so, even ignoring the frequency differences, the APU will be throttling in order to keep itself working within the tight thermal and power limits applied to it!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The RX 7700 XT also has a similar issue, in that its frequency and power/thermal limits are higher than that of the GPU in the PS5/Pro. It also has an infinity cache (which the APU doesn't), larger L0 and L1 caches but a smaller L2 (half the size). The desktop GPU also has a smaller VRAM capacity and lower bandwidth to that VRAM - so, that's a bit of a wash.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The box cooler, given away with the CPU, will be sufficient to cool this beast - so there's no need for an expensive tower cooler - and the same logic applies to the case, RAM, SSD, and power supply: the premium options aren't needed unless you specifically want to make your PC as expensive and small as possible, in which case, there are better options!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Finally, everyone and their mother knows that there is pre-applied thermal paste on all box coolers (or it comes for free with a cooler you purchase).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Really and truly, this was less of an imasgibation and more of an outright troll which got everyone talking.
In which case, well-played!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Fin...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If you've seen any other such PS5 Pro type posts, feel free to link them below and I might do a follow-up post dissecting them!&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/09/is-ps5-pro-really-equivalent-to-rtx.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgo1p0vx-idrT6YP9iezVPHo3Jf3OrQc5_WtHAEGVWqyUkVeLsD560Bo4yFsVNLkmXsLimH8tccFTHI9mgCQlPM4nME2DrKD6UrHxHogyGxRnzvLH_HtN44pQSh063mLS2TdDP9x3JWE1wwz_Kf4gBoDFvwNDJ_u64koWGd5XJHRGseEne4Ty_HCrNoPx4/s72-w640-h360-c/Title.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-7619796412849657164</guid><pubDate>Sat, 31 Aug 2024 02:35:00 +0000</pubDate><atom:updated>2024-09-08T19:19:18.429+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">curmudgeon</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Microsoft and Intel are on the ropes... what's next?</title><description>&lt;div style="text-align: left;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8pIbHDRVS2GgdOIl1TwaZw8tP4Wr436gRi_vyochdEOOhUeA5Q0FI4wC6Isq3AY2mXjrp91E2PHj2-xZOluLghAnJkGEr0A6hT_yzGSlFVLDSwdVC786VAJBWb4Og2WdQtJ_cDYndYrMifaq-XOmR_fM0-cln_orMP3l1CXrx2bOTy6yssCMmMfyMbq4/s1920/Title.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8pIbHDRVS2GgdOIl1TwaZw8tP4Wr436gRi_vyochdEOOhUeA5Q0FI4wC6Isq3AY2mXjrp91E2PHj2-xZOluLghAnJkGEr0A6hT_yzGSlFVLDSwdVC786VAJBWb4Og2WdQtJ_cDYndYrMifaq-XOmR_fM0-cln_orMP3l1CXrx2bOTy6yssCMmMfyMbq4/w640-h360/Title.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;I'm not known for being a positive person when it comes to certain things. I'm generally critical, analytical, sceptical, and distrusting of processes, industries, and companies. However, I do love tech. 
I am critically positive about it and about people in general, and it's in the spirit of these two sides of me that I feel the need to get these thoughts off my chest.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Two of the biggest players in gaming are dying - their literal death throes are all around us, filling the tech news headlines, background blog rants, and entire subreddits, and trending on social media.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I don't even think we need to introduce who I'm talking about but let's get it out of the way:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;Microsoft&lt;/li&gt;&lt;li&gt;Intel&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Their flailing is having a huge impact on gaming and tech in general and I'm a little worried about what fills the hole they leave if and when they do fail or contract away from the segments that I've traditionally engaged with.&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's get this out of the way: neither Microsoft nor Intel is going to go out of business. That's not the kind of death throes I'm talking about here. What I'm thinking about is them "rationalising" their businesses and "focussing on their core strengths" - that's the danger from my point of view as an enthusiast and user.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;h3&gt;&lt;span style="color: #274e13;"&gt;Microsoft...&lt;/span&gt;&lt;/h3&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Back at the start of the Xbox One generation, Microsoft experienced &lt;a href="https://uproxx.com/viral/your-guide-to-the-xbox-one-pr-disaster/"&gt;what we then termed&lt;/a&gt; &lt;a href="https://www.networkworld.com/article/745406/microsoft-subnet-retracing-microsoft-s-missteps-that-sunk-the-xbox-one.html"&gt;a PR disaster&lt;/a&gt;. It was horrid - they were directionless, unsure of their decisions, vague and cagey. Then, when they saw the response from consumers they &lt;a href="https://www.shacknews.com/article/79827/microsoft-reverses-stance-on-drm-used-games-for-xbox-one"&gt;pretty much walked&lt;/a&gt; &lt;a href="https://www.pcmag.com/news/microsoft-xbox-one-wont-require-kinect-to-work"&gt;everything back&lt;/a&gt;... wasting time, energy, and consumer goodwill in the process.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The Xbox One was the console where they introduced the concept of cloud gaming and cloud-enabled gaming, which is still being beaten like the dead horse that it is, despite the concept being roundly decried as a flop... something which &lt;a href="https://hole-in-my-head.blogspot.com/2013/06/xbox-one-will-cloud-appreciably-change.html"&gt;we all saw coming from a mile away&lt;/a&gt;. 
Then there was a leadership change and we all thought there was something better on the horizon...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, however, things &lt;i&gt;feel&lt;/i&gt;&amp;nbsp;a lot worse.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Xbox has recently &lt;a href="https://www.essentiallysports.com/esports-news-microsoft-had-plans-to-spend-sony-out-of-business-to-boost-xbox-reveals-internal-email-from-chief-matt-booty/"&gt;stumbled from&lt;/a&gt; &lt;a href="https://news.microsoft.com/2022/01/18/microsoft-to-acquire-activision-blizzard-to-bring-the-joy-and-community-of-gaming-to-everyone-across-every-device/"&gt;one poorly thought-out&lt;/a&gt; &lt;a href="https://www.pcgamer.com/gaming-industry/a-week-after-microsoft-closed-4-game-studios-microsoft-owned-activision-announces-a-new-game-studio/"&gt;or excecuted situation&lt;/a&gt; &lt;a href="https://www.kotaku.com.au/2024/04/xbox-game-preservation-team/"&gt;to&lt;/a&gt; &lt;a href="https://forza.net/news/forza-horizon-4-delisting"&gt;another&lt;/a&gt;. They've moved into the realm of hardware as a service with their &lt;a href="https://www.ign.com/articles/microsofts-amazon-fire-tv-ad-declares-you-dont-need-an-xbox-to-play-xbox"&gt;cloud gaming push&lt;/a&gt;, trying to imitate what a business to business relationship would look like in the consumer space, &lt;a href="https://techcommunity.microsoft.com/t5/microsoft-learn/what-are-the-advantages-disadvantages-of-the-microsoft-azure/m-p/4064618"&gt;despite knowing the drawbacks&lt;/a&gt; to a subscription model which requires datacentre resources to manage...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Not only that but many of us &lt;a href="https://hole-in-my-head.blogspot.com/2016/11/why-ps4-pro-and-project-scorpio-are.html"&gt;highlighted &lt;/a&gt;the &lt;a href="https://hole-in-my-head.blogspot.com/2016/08/the-end-of-console-generations-or-just.html"&gt;dangers of pushing out&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2016/03/why-phone-model-wont-work-for-games.html"&gt;sequential hardware updates&lt;/a&gt;&amp;nbsp;and tiered hardware generations, and, though I can't find any of my various diatribes* I have been vociferously &lt;a href="https://hole-in-my-head.blogspot.com/2020/10/analyse-this-next-gen-consoles-part-12.html"&gt;against subscriptions&lt;/a&gt; and other forms of locking games behind 'cable TV-like' services.&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;i&gt;&lt;b&gt;&lt;span style="color: #274e13;"&gt;*I guess they were made on social media!&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;Finally, though, the chickens are &lt;a href="https://www.vgchartz.com/article/461694/ps5-vs-xbox-series-xs-vs-switch-launch-sales-comparison-through-month-43/#:~:text=The%20PS5%20is%20ahead%20of,Switch%20is%20in%20the%20lead."&gt;coming home to roost&lt;/a&gt;:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhf_bk1ZsdRjAcLxF8c0RNvAlTXt7lNerSgpdkC-86nF0zOVW4TSbI9iZHk4qKczHn6kJzk-aFpOZpGXAxxI-lZQKENzrCCN6ko0xyCjEizHqMJI6TuJb44OHpP5O30El_SjS1VHsRIaG6EbFk71kLwclakFccJg6u5ptOLhXg97vg66PHP6GSnmx3IOyc/s3840/gamepass.png" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="2160" data-original-width="3840" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhf_bk1ZsdRjAcLxF8c0RNvAlTXt7lNerSgpdkC-86nF0zOVW4TSbI9iZHk4qKczHn6kJzk-aFpOZpGXAxxI-lZQKENzrCCN6ko0xyCjEizHqMJI6TuJb44OHpP5O30El_SjS1VHsRIaG6EbFk71kLwclakFccJg6u5ptOLhXg97vg66PHP6GSnmx3IOyc/w640-h360/gamepass.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Prices are going up, there's no stopping them... running instances in the datacentre will not get cheaper!&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Microsoft repeatedly denied the fact that the purchase of Activision Blizzard King would result in any sort of &lt;a href="https://www.eurogamer.net/supreme-court-denies-gamers-last-ditch-effort-to-block-microsoft-activision-blizzard-deal"&gt;negative effect&lt;/a&gt; on consumers but, now we can see the logical conclusion of that decision combined with several others: &lt;a href="https://www.theverge.com/2024/7/9/24195312/microsoft-xbox-game-pass-ultimate-price-increase-standard-subscription"&gt;prices are going up&lt;/a&gt;, and will continue to go up. This is mainly because growth through acquisition requires ever-expanding gross revenue to sustain it and compare against &lt;a href="https://x.com/Duoae/status/1793848682317856909"&gt;returns of that same monetary value invested in the stock market&lt;/a&gt;*...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*Even if I believe anyone thinking like this is an idiot, it's clear some suits do...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The reason is very simple - games are expensive to make and Microsoft and Sony's current strategies are focussed mainly on producing double A or triple A games which take multiple years to do so - and these numbers have only been going up!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Ironically, the UK's CMA (Competition and Markets Authority) &lt;a href="https://www.gov.uk/government/news/microsoft-activision-deal-prevented-to-protect-innovation-and-choice-in-cloud-gaming"&gt;have been proven right&lt;/a&gt; on the situation... because Microsoft now have a tonne of IP wrapped up tightly in their stable that needs funding to be maintained.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Additionally, despite Europe &lt;i&gt;&lt;a href="https://hole-in-my-head.blogspot.com/2016/05/in-theory-is-it-so-hard-to-understand.html"&gt;never being a priority for Xbox&lt;/a&gt;&lt;/i&gt;, there &lt;a href="https://www.theverge.com/2024/7/11/24196361/microsoft-xbox-no-console-required-notepad"&gt;are now rumours&lt;/a&gt; that Xbox will reduce their focus on Europe and deprioritise the continent going forward... 
but that's not really &lt;a href="https://www.ign.com/articles/xbox-series-x-and-s-sales-have-collapsed-in-europe"&gt;a surprise&lt;/a&gt;, nor is it actually a change of strategy, in my opinion!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What is a change in strategy is choosing to treat different customers differently. Previously, Microsoft had chosen to denigrate the PC consumer in the mid-2000s, stopping PC ports and killing studios.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This was reversed in the 2010s, with an increased balance between the two hardware pools, but now Microsoft appear to be punishing their loyal console enjoyers instead.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Gamepass is a mess: console players not only have to pay for online features but now also have to pay more than PC players for the same day-1 releases and game selection. That's just crazy and very anti-consumer for the people who are most connected to the Xbox brand.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What this will likely mean is that Xbox will piss off those customers, further negatively impacting this generation of console sales and, even worse, any future generation!!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's a short-sighted, sorry state of affairs for which Xbox only has itself to blame. I feel there is a real risk that Xbox quits console hardware and, as a result, shutters a large percentage of the studios it has just recently acquired as they no longer fit into their long-term plans...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's really pathetic - and, unfortunately, they've got a proven history of doing this as well.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;blockquote&gt;[Update: 08/09/2024] &lt;a href="https://x.com/EposVox/status/1832766171394851168"&gt;EposVox has listed even more problems&lt;/a&gt; that I hadn't even considered mentioning here - Microsoft is basically falling apart and becoming completely dysfunctional...&lt;/blockquote&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlNAE3Oqa6JkYD5CS__XRqg3bWIuCYWxyZGx8AZcov3neOW60ck8M8CYb_nInSs850fvXEuBb0HVJXD_Qu6hBhiNz0-go1UsKZa88B_9ToLT7OsEMcqt3alolXGRZInWn4mEq38ZIsDhU1VQUkT53DJOuA4MmztBTTLtzF37DF-Ed_e11lnLb9Ws3qn3o/s773/Closures.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="229" data-original-width="773" height="190" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlNAE3Oqa6JkYD5CS__XRqg3bWIuCYWxyZGx8AZcov3neOW60ck8M8CYb_nInSs850fvXEuBb0HVJXD_Qu6hBhiNz0-go1UsKZa88B_9ToLT7OsEMcqt3alolXGRZInWn4mEq38ZIsDhU1VQUkT53DJOuA4MmztBTTLtzF37DF-Ed_e11lnLb9Ws3qn3o/w640-h190/Closures.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Microsoft went through a period of growth and acquisition in the early 2000s, then closed a lot of studios with storied histories...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Intel...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Moving on to Intel, we had almost ten years of them fleecing us with very little in the way of performance and technology improvements, along with slowly increasing costs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;During that time, they lost process and technology leadership, resulting in hot, power-hungry silicon, whilst AMD focused on their comeback with efficient and performant chips which doubled as server hardware.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Intel even hired some ex-AMD senior staff but their company culture essentially &lt;a href="https://www.reuters.com/article/us-intel-tech/intel-chip-executive-jim-keller-departs-company-idUSKBN23I378/"&gt;killed their products&lt;/a&gt; and &lt;a href="https://www.theverge.com/2023/3/21/23650611/intel-raja-koduri-gpus-amd-nvidia-apple-leave-ai-startup"&gt;pushed them out&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The other problem with Intel is that they wasted their massive earnings and market position on giving dividends back to their investors instead of investing that money into their company and products. 
This happened to such an extent that when things started going badly, each and every one of the dominoes from process, to product, to staff fell one by one and now their stock price is garbage and they don't have enough money to pull themselves back up unless they do something like AMD and &lt;a href="https://www.pcgamer.com/hardware/processors/as-intels-struggles-continue-rumours-are-now-emerging-that-plans-are-afoot-to-flog-its-chip-manufacturing-fabs/"&gt;divest themselves of their manufacturing plants&lt;/a&gt;...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4hjeuVXPinPEsEz05cSvqQoEoEPB2t8-I0xCSHGU7CO5VvCXOAZdo64tTuXrgAMPoyS4LsxM_6NOMayfJLsmlH3RLvsHxUW1TKaWSNoK_6MalLmY4JfyOpENdzBpwekI9MzxOoPdQvsj0GcRWVmLguqTec5X719WkoiMOAOWfiJzNWj0iNXsvm82_1m4/s965/Intel%20stock.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="751" data-original-width="965" height="498" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4hjeuVXPinPEsEz05cSvqQoEoEPB2t8-I0xCSHGU7CO5VvCXOAZdo64tTuXrgAMPoyS4LsxM_6NOMayfJLsmlH3RLvsHxUW1TKaWSNoK_6MalLmY4JfyOpENdzBpwekI9MzxOoPdQvsj0GcRWVmLguqTec5X719WkoiMOAOWfiJzNWj0iNXsvm82_1m4/w640-h498/Intel%20stock.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Intel rode the wave of lack of competition from AMD throughout the 2010s, failed to capitalise on it to invest in their future, and are now paying the price...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, Intel has a third problem - aside from all their technology being expensive and late*, and having to utilise their competitor's manufacturing capabilities (&lt;a href="https://www.pcworld.com/article/2355435/intels-lunar-lake-is-actually-made-at-tsmc.html"&gt;aka TSMC&lt;/a&gt;), they've intentionally** ignored &lt;a href="https://www.lexology.com/library/detail.aspx?g=a3eb42be-fd9a-42aa-af46-52b7f7823e94#:~:text=Late%20Intel%20stated%20in%20a,may%20be%20experiencing%20the%20problem."&gt;architectural issues&lt;/a&gt; with their last generation of products in order to maintain competitiveness with AMD's Ryzen line of consumer products. 
Along with intentionally** ignoring defects such as the &lt;a href="https://wccftech.com/intel-identified-cpu-oxidation-issue-in-late-2022-claims-resolved-but-supply-chain-uncertainty-remains/#:~:text=Reviews%20How%20To-,Intel%20Identified%20CPU%20Oxidation%20Issue%20In%20Late%2D2022%2C%20Claims%20They,Supply%20Chain%20Uncertainty%20Still%20Remains&amp;amp;text=Intel%20has%20addressed%20the%20oxidation,ago%20but%20there's%20still%20doubt."&gt;oxidation issue&lt;/a&gt;&amp;nbsp;which has come to light and which was not disclosed to clients...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This has resulted in products which generate less profit for them both on the manufacturing and returns sides of things but also in products which have damaged Intel's brand image.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;blockquote&gt;&lt;span style="color: #274e13;"&gt;*Intel has &lt;a href="https://hardwaretimes.com/intel-14th-gen-meteor-lake-cpus-delayed-to-the-very-end-of-2023-due-to-production-woes-rumor/"&gt;delayed products&lt;/a&gt;, &lt;a href="https://www.anandtech.com/show/12693/intel-delays-mass-production-of-10-nm-cpus-to-2019"&gt;technical developments&lt;/a&gt; &lt;a href="https://hardwaretimes.com/intel-20a-18a-delayed/"&gt;in process&lt;/a&gt;, &lt;a href="https://www.crn.com/news/components-peripherals/2024/20b-intel-ohio-project-delayed-again-report#:~:text=An%20Intel%20spokesperson%20has%20confirmed,t%20expected%20until%20late%202026."&gt;manufacturing sites&lt;/a&gt;, you name it! This is a sign of a deeply disfunctional company that cannot deliver on anything. (I have first-hand experience of companies heading towards this endpoint)&lt;/span&gt;&lt;/blockquote&gt;&lt;/b&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;&lt;b&gt;**I say intentionally but that's a supposition on my part. Working in a tightly regulated industry, when we have RMAs/failures, we begin root cause investigations right away. Given the stated failure rate of processors from Intel's commercial partners, they would have had these investigations starting at least in early 2023. Which means they were either incompetently investigating the issue for more than a year, not investigating it at all (ignoring RMAs as normal failures), or that they knew about the problem, what it was, and just ignored it... None of the possibilities look good.&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Intel was previously the expensive, boring, reliable company that didn't give a lot of performance uplift, were stingy in platform upgradability and ran hot and power-hungry, with products which launched &lt;a href="https://www.engadget.com/2015-07-16-intel-skylake-chips-delayed.html?guccounter=1&amp;amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvLnVrLw&amp;amp;guce_referrer_sig=AQAAACbpBIUkSgF-AezIa0-XL3J-ARYD_A8scqyagT94k_4o3aUWlEKpTz5b5NUgD_bTCzOhLtnqM784FOEd0v55sYU-caqgi1w7YnAkvZ33eAIscU9jGjAc28FTlZFE-Sy-ziBsjza6FZrYrPENifI8YQMS_G5rlM5hRn8L7ag81b8q"&gt;later than specified in their roadmap&lt;/a&gt;. Now, Intel is &lt;i&gt;&lt;u&gt;still&lt;/u&gt;&lt;/i&gt; all of those things but also unreliable, sneaky, deceptive, etc. 
etc.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Clients were already moving away from Intel in practically every segment but this debacle will only accelerate that, with consequences still to be realised over the coming years. This will mean more losses and less revenue, leading to even further &lt;a href="https://www.theverge.com/2024/8/1/24210656/intel-is-laying-off-over-10000-employees-and-will-cut-10-billion-in-costs"&gt;efficiencies being implemented&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This bodes ill for Intel's already pathetic ability to execute, leaving the door open...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Filling the hole...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;AMD have been closing in on Intel for multiple years now and Sony has been beating Microsoft's ass since the latter's missteps around the Xbox One launch. Now, we're potentially looking at a future where Microsoft really isn't in competition with Sony, instead focussing on publishing games on multiple platforms. This isn't a great situation, though, because the typically capricious corporate overlords within Microsoft are as likely to back the Xbox management as they are to undermine them. We could be seeing a LOT of studio closures over the coming years - resulting in game IP being locked in a dying Microsoft division.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Stepping into such a void, Sony would be able to charge an arm and a leg, not only for the hardware but for access to the software and features, since Nintendo is not really a competitor to the other two console giants. Additionally, it's likely that developers would get a worse deal in such an environment, as well. 
However, there's always the PC sphere for players to jump ship to:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On the other side of things, &lt;a href="https://hole-in-my-head.blogspot.com/2020/07/next-gen-game-pc-hardware-requirements_36.html"&gt;AMD have been shown to have been pushing up the prices&lt;/a&gt; of their products every chance they get - they're no charity - and as Intel continues to flounder and provide a worse competition, we can expect them to push the boundaries on what is acceptable to charge for components...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;All of this would mean higher prices to access and play games as a hobby and that's not a good situation to be heading into.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I really hope I'm seeing things too negatively and that the reality will be less severe but I'm not hopeful, given the current trends and signs, and I cannot see any third party stepping into the gap to bring meaningful competition to either AMD or Sony...&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/08/microsoft-and-intel-are-on-ropes-whats.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8pIbHDRVS2GgdOIl1TwaZw8tP4Wr436gRi_vyochdEOOhUeA5Q0FI4wC6Isq3AY2mXjrp91E2PHj2-xZOluLghAnJkGEr0A6hT_yzGSlFVLDSwdVC786VAJBWb4Og2WdQtJ_cDYndYrMifaq-XOmR_fM0-cln_orMP3l1CXrx2bOTy6yssCMmMfyMbq4/s72-w640-h360-c/Title.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-1964304862475873669</guid><pubDate>Fri, 05 Jul 2024 16:24:00 +0000</pubDate><atom:updated>2024-07-05T18:01:16.143+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">Roundup</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>How CPU-limited IS a modern mid-range PC...?</title><description>&lt;div style="text-align: left;"&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9O3CULfmaWfrYyF3z4kZOwEc9qAU0KQ43PazZOfsU3RhbvbSODDTX1DeOX1vWsdoGea-9tfYU79E038rOX66QxcuaNNiL4wdKpgmCxu03Yg24g0I3nV-R8sPTqGSw7mULwfHnvbNMHXHPfvADab1gYxIeYpoJELHHGMYW0-XH_aZkXOVxuUDvCp_IJkA/s1920/Header.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9O3CULfmaWfrYyF3z4kZOwEc9qAU0KQ43PazZOfsU3RhbvbSODDTX1DeOX1vWsdoGea-9tfYU79E038rOX66QxcuaNNiL4wdKpgmCxu03Yg24g0I3nV-R8sPTqGSw7mULwfHnvbNMHXHPfvADab1gYxIeYpoJELHHGMYW0-XH_aZkXOVxuUDvCp_IJkA/w640-h360/Header.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div 
style="text-align: justify;"&gt;These days, there's a lot of talk about how CPU- or GPU-limited various games are but something I've always wondered is "&lt;i&gt;where is the cut-off point?&lt;/i&gt;" for pairing a GPU with a particular CPU? This is especially important as we reach a point where the Ryzen 5 5600 and i5-12400 reach around €120 - these are really cheap and relatively performant options for the lower mid-range.&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;That's something that you don't tend to see explored too often at any well-rated benchmarking outlets like Hardware Unboxed, eteknix, or GamersNexus, etc. So, while I don't have many configurations to test out, I wanted to explore where the point of diminishing returns might be for games, as gamers would actually play them...&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Premise...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As gamers, we often get told where and how to spend our money from various spheres of the PC gaming space:&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;ul&gt;&lt;li&gt;Spend as much on the GPU as possible.&lt;/li&gt;&lt;li&gt;Buy CPU with as much cache as possible.&lt;/li&gt;&lt;li&gt;Get a CPU with as fast a single core throughput as possible.&lt;/li&gt;&lt;li&gt;The fastest RAM with the lowest latency is the best.&lt;/li&gt;&lt;li&gt;A fast SSD will result in the lowest loading times.&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;We often hear much advice about where to spend our time as well:&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;Reduce the latency on your RAM and you'll get higher FPS and better 1% lows.&lt;/li&gt;&lt;li&gt;Overclock your CPU and you'll get another 10% performance.&lt;/li&gt;&lt;li&gt;Overclock your GPU and VRAM to improve performance.&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The problem with a lot of this advice is that it is often anecdotal, platform specific, and, sometimes, even specific to a particular generation of hardware! Most people are looking at the high-end and extrapolating downward to lower performing hardware.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;So far, I've looked at the &lt;a href="https://hole-in-my-head.blogspot.com/2023/01/analyse-this-does-ram-speed-and-latency.html"&gt;effect of DDR4 RAM speed on mid-range systems&lt;/a&gt; and didn't find much of any improvement - on the order of &lt;a href="https://docs.google.com/spreadsheets/d/1Vsk0DI3SMw9S8me8eO99vezdTYRC1cPhkE5BoJyz2Ts/edit#gid=2115323624"&gt;around 5-10 fps&lt;/a&gt; (though only if you have the best RAM IC!) and that was fairly independent of latency, bandwidth, and RAM speed...&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;I've also noted the slight performance difference between the 12th gen cores and Zen 3 cores - with the &lt;a href="https://hole-in-my-head.blogspot.com/2023/05/mid-range-hardware-rtx-4070-review.html"&gt;12th gen slightly out-performing&lt;/a&gt; &lt;a href="https://hole-in-my-head.blogspot.com/2023/06/mid-range-hardware-rtx-4070-review-part.html"&gt;the Zen 3 part&lt;/a&gt;&amp;nbsp;in CPU-limited scenarios. 
This is &lt;u style="font-style: italic;"&gt;despite&lt;/u&gt;&amp;nbsp;the i5 having less cache than the R5 part, &lt;u style="font-style: italic;"&gt;because&lt;/u&gt;&amp;nbsp;different architectures work differently! Obviously, we know that the cache size is much more integral to Zen 3's performance profile, as evidenced by the effect of adding the 3D cache to the desktop parts...&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;But, here's the thing: if RAM speed and latency really aren't that important on a mid-range CPU (especially when considering the total cost of ownership for your PC), how much should you spend on a GPU?&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;People like to throw out the adage of "you'll be CPU-bottlenecked if you buy such-and-such a GPU, aim lower", and I've personally encountered situations where my advice to someone playing an esport to upgrade the CPU instead of focussing on the GPU was roundly criticised on a certain forum.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;This, then, is a stab at further reducing the number of choices that mid-range gamers should be making.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Remember - what's important here is not the absolute performance but the relative performance scaling for each GPU relative to the most powerful one I have on hand - the RTX 4070 Super. (There's a tiny sketch of how that scaling is calculated just before the system setups, below.) However, we may delve a little into how the absolute performance scales, as well...&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3&gt;&lt;span style="color: #274e13;"&gt;The Setup...&lt;/span&gt;&lt;/h3&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;For this testing, I have my trusty Ryzen 5 5600X and Intel i5-12400. These aren't the totally up-to-date mid-range gaming CPUs that they were a year or two ago, having been superseded by the Ryzen 5 7600 - if not in price, then in performance. However, even these CPUs are still probably more powerful than those a large percentage of gamers are bringing to their gaming table - but you can probably say that about the GPUs I'm going to be speaking about, as well.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Each of these CPUs is paired with five GPUs: the RTX 3070, RTX 4070, RTX 4070 Super, RX 6800 and RX 7800 XT.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;These represent around three tiers of performance of the recent modern mid-range (price notwithstanding) and &lt;i&gt;should&lt;/i&gt;&amp;nbsp;be able to show whether there is any sort of performance bottleneck to be encountered!&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The CPUs are paired with whatever RAM I have in their motherboards - since I have already looked at the effect of DDR4 speed on the 5600X and 12400 (as mentioned above), I know there really isn't a lot of headroom in cranking up the speed and lowering the latency on these bad boys. 
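&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Before getting to the specific setups - and as promised above - here's a minimal sketch of how that relative scaling is calculated. This is just illustrative Python with made-up placeholder fps numbers; it is not my actual spreadsheet or test data:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;pre&gt;# Each GPU's result is expressed as a percentage of the RTX 4070 Super's
# result on the same CPU, in the same game and at the same preset.
# NOTE: these fps values are made-up placeholders, purely for illustration.
avg_fps = {
    "RTX 4070 Super": 142,
    "RTX 4070": 121,
    "RX 7800 XT": 118,
    "RX 6800": 104,
    "RTX 3070": 96,
}

reference = avg_fps["RTX 4070 Super"]

for gpu, fps in avg_fps.items():
    relative = 100 * fps / reference  # per cent of the fastest card on hand
    print(f"{gpu}: {fps} fps = {relative:.1f}% of the 4070 Super")&lt;/pre&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The same normalisation is applied to the minimum fps figures in the charts further down.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;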
So, these are the system setups:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Ryzen 5 5600X&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;2x 8GB Corsair 3200 MT/s CL16 DDR4 (1:1 ratio)&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Intel i5-12400&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;2x 8GB Patriot 3800 MT/s CL18 DDR4 (1:1 ratio)&lt;/li&gt;&lt;li&gt;2x 16GB Corsair 6400 MT/s CL32 DDR5 (1:2 ratio)&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;The sub-timings of the DDR4 kits are listed in the spreadsheet linked above (in green in the case of the 3800 MT/s) but the DDR5 is stock with Trcd/Trp/Tras/Trc/Trfc at 40/40/84/124/510. If you really want to see the potential uplift for the configuration I had vetted on the Ryzen CPU, then add 10 fps to the average result. This will give you the possible increase in performance but not necessarily the real one.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: left;"&gt;&lt;div style="text-align: justify;"&gt;Drivers used are 551.52 on Nvidia's side and 24.2.1 on AMD's. (This testing was actually performed a while ago but I didn't have time to write it up.)&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Resolution is 1080p in all tests, with settings &lt;a href="https://docs.google.com/spreadsheets/d/11uIcgt0Akptn_niaLzCt-bOIpwgl36avp96UQeVGJw8/edit?usp=sharing"&gt;listed in the associated spreadsheet&lt;/a&gt;&amp;nbsp;(though they are typically ultra/highest) and I've selected a number of games which have varying bottlenecks, between CPU, GPU and combinations of both...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Results...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsJg5VjNBO67jXwRUyBM0y6wTIdSiYvuHuTOKgtS-9vGIkufeZB6dcMPBJsVWYvZmRea7OH1p7eANvJeEb0TzHwBtdrHBQRwDV5lbKKr7WvrqKL7iiMFG75BtoiDZ4ikX1SSmp7bWLzSXEsswx4GgP7xEKoaqvj5nb0OLVgC2PQrQV-twpdLNUE6GFHpg/s679/5600X%20relative%20avg%20fps.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="399" data-original-width="679" height="376" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsJg5VjNBO67jXwRUyBM0y6wTIdSiYvuHuTOKgtS-9vGIkufeZB6dcMPBJsVWYvZmRea7OH1p7eANvJeEb0TzHwBtdrHBQRwDV5lbKKr7WvrqKL7iiMFG75BtoiDZ4ikX1SSmp7bWLzSXEsswx4GgP7xEKoaqvj5nb0OLVgC2PQrQV-twpdLNUE6GFHpg/w640-h376/5600X%20relative%20avg%20fps.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;Even at 1080p a lot of games are GPU-bound...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Let's start with the Ryzen 5 5600X.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We can see that there are a 
number of games which are primarily CPU-bound: Hogwart's Legacy, Spider-man, Starfield and Counterstrike. These titles are all underutilising the power of the GPUs have on offer when using the 5600X - there is &lt;i&gt;still&lt;/i&gt;&amp;nbsp;improvement for the majority of them but it's not clear whether there are other effects in play.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Then we have the primarily GPU-bound titles: Avatar, Ratchet and Clank, Returnal, and Alan Wake 2. Here, the games generally scale with compute and RT performance - with the notable exception of Ratchet, which murders the 8 GB VRAM on the RTX 3070...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In comparison, Metro Exodus is a curious case because, for whatever reason, there is a bug which &lt;i&gt;sometimes&lt;/i&gt;&amp;nbsp;occurs with certain hardware on certain windows installations and &lt;i&gt;I have been unable to find a way around it!&lt;/i&gt; Seriously! I haven't seen anyone else mention it and I have been unable to troubleshoot what, exactly, causes it. In this chart, you can see it at the high preset on the RX 6800 and RTX 3070 and, maybe, to a lesser extent, the RX 7800 XT - but I'm not 100% sure about that!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinCAzErbWkAc09ustOvYLHR0-rebjsLonDdN4JHqn8XKxOTeIVCCI6WyjyceGGnbAmysoZnUOxQ5OvXwEzhEM-sPkdJhWdLoa8OSgG0UYTlGmMY4g1zBMua12EEXAmCqYGxTUIuXAFvjpYaOCk-mp22DIVYIeURoQ4vBZKwd_GFUMUqVS27Y9l4EmYqgY/s683/5600X%20relative%20min%20fps.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="399" data-original-width="683" height="374" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinCAzErbWkAc09ustOvYLHR0-rebjsLonDdN4JHqn8XKxOTeIVCCI6WyjyceGGnbAmysoZnUOxQ5OvXwEzhEM-sPkdJhWdLoa8OSgG0UYTlGmMY4g1zBMua12EEXAmCqYGxTUIuXAFvjpYaOCk-mp22DIVYIeURoQ4vBZKwd_GFUMUqVS27Y9l4EmYqgY/w640-h374/5600X%20relative%20min%20fps.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The CPU can have a great effect on the minimum fps experienced in the games...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Moving onto the minimum fps for each of these games, we see what might be more evidence regarding the 8 GB VRAM not being enough for Ratchet; the rest of the GPU-bound games confirming that they are, in fact so. What we do see, though, is evidence of the GPU overhead of running a powerful GPU on a CPU which isn't quite up to the challenge of managing it.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This is most evident for Counterstrike 2 - we saw this on the average fps, where the 4070 was rising slightly above the 4070 Super. 
However, the minimum fps of &lt;i&gt;all&lt;/i&gt;&amp;nbsp;the GPUs rises above that of the 4070 Super, indicating that when the going gets tough, the computational overhead of running that more power GPU weighs the performance down.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We see something similar for Ratchet, where both the RX 7800 XT and RTX 4070 are equal or slightly better in minimum fps than the 4070 Super. A more interesting case is Avatar at the low graphical preset - the more powerful GPUs are clearly running into a CPU bottleneck in the minimum fps but the RX 6800 and 3070 are sitting there in a category of their own in a way I cannot fully explain. For the 3070, we could posit that it is the 8 GB VRAM haunting it again, the RX 6800 being limited by its ray tracing prowess (or lack thereof)... but I do not know for certain.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1WQ_1m3bsOeqJgP-BDtFbjlYoVLG1RMXg4XSOchj2LgPsvwDJMWDtnclfNiYHBcXWS2fI-aWDtchfC9aIxaHbwqB2LeboN7ZYt321J5ftpfcVXRKlXRA4uCZbA-mUoxmiHgQD5l3gvMzE0XC5aOdZeZpqi9jA-FCn65LZhvmarq_wSO_QpoKzxRHClGQ/s679/12400%20DDR4%20relative%20avg%20fps.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="399" data-original-width="679" height="376" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1WQ_1m3bsOeqJgP-BDtFbjlYoVLG1RMXg4XSOchj2LgPsvwDJMWDtnclfNiYHBcXWS2fI-aWDtchfC9aIxaHbwqB2LeboN7ZYt321J5ftpfcVXRKlXRA4uCZbA-mUoxmiHgQD5l3gvMzE0XC5aOdZeZpqi9jA-FCn65LZhvmarq_wSO_QpoKzxRHClGQ/w640-h376/12400%20DDR4%20relative%20avg%20fps.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Moving onto the Intel 12400 (DDR4).&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We see a very similar story - except, because the 12400 is a bit more powerful, we see better separation in performance in a couple of titles. Namely, Starfield, Metro Exodus, and Avatar.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Curiously, we observe a strange situation with Ratchet where the RTX 3070 is now performing better. Was the 8 GB VRAM really to blame for the poor result on the 5600X? Maybe not! However, the root cause is not clear to me. 
We're getting more performance from &lt;i&gt;somewhere&lt;/i&gt;&amp;nbsp;but it's a complicated situation because aside from a slightly faster processor, we're talking about maybe a ~50% increase in system memory bandwidth going from the 3200 MT/s RAM to the 3800 MT/s.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I should test that!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Looking more generally at the data, the 7800 XT is performing relatively*&amp;nbsp;&lt;i&gt;worse&lt;/i&gt;&amp;nbsp;on the Intel platform than the 4070, whereas before it was matching or closer to that card. This is likely due to an actual CPU bottleneck!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*In some games it performs better in absolute terms but in others it's actually worse...&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;However, in Alan Wake, Ratchet and Avatar, the 7800 XT is losing performance while the Nvidia cards are gaining performance. At first I thought that this might be due to AMD's SAM giving an advantage on the 5600X or an Nvidia bias in the games but the RX 6800 gained performance just like the Nvidia cards did. So, could this be down to the RDNA 3 chiplet architecture?&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;It's a possibility, but we'll come back to this result in a moment...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoWgdSGDCo-iTLl75js5etiVAsCMiwtkOCR9pWI66KNeAeRb4iH4_Xq8oEJS60v7SGlD-QVqpfaujzD8lUpIC5tWWSEaCdRL01HcRisUyyPuSRRcrZEN7pFFYz3uEuc2NXuQr9TQCn7qhe9W9zed2SQeT8T8kx77WKdmYAdGDJpRbgHdbtmIRRLMQMt-4/s675/12400%20DDR4%20relative%20min%20fps.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="397" data-original-width="675" height="376" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoWgdSGDCo-iTLl75js5etiVAsCMiwtkOCR9pWI66KNeAeRb4iH4_Xq8oEJS60v7SGlD-QVqpfaujzD8lUpIC5tWWSEaCdRL01HcRisUyyPuSRRcrZEN7pFFYz3uEuc2NXuQr9TQCn7qhe9W9zed2SQeT8T8kx77WKdmYAdGDJpRbgHdbtmIRRLMQMt-4/w640-h376/12400%20DDR4%20relative%20min%20fps.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The 12400 removes the minimum fps CPU bottleneck we previously were experiencing for the 5600X...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The situation with minimum fps is much improved - most titles now have the RTX 4070 Super firmly in the lead and games like Counterstrike and Ratchet are no longer displaying any weird behaviour that would indicate a CPU bottleneck.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;On the other hand, Hogwart's is still displaying the signs of a CPU overhead 
from the RTX 4070 Super as both the 3070 and 4070 match the lows of this card. In contrast the 6800 and 7800 XT are probably GPU-bound in this title with ray tracing enabled, so the CPU-bottleneck isn't in effect, here...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRVcli5lDUAxgMOahgl7ksCT5Avok4BuVlRzlByIP_7xmRWl_PNnTtAW8RgPRouu3u0oHJiMQH0dPUHUoMfErlTFL6BqYY-7XmE_yZotd8y_kaVdORBgUCq3tKqL2Hwd1pm6_hW7WK6lR6__8VpTrLeEX7QZIkpbCf4e-3lAgpcDNXCpc9pxCo6o0fjE8/s677/12400%20DDR5%20relative%20avg%20fps.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="399" data-original-width="677" height="378" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRVcli5lDUAxgMOahgl7ksCT5Avok4BuVlRzlByIP_7xmRWl_PNnTtAW8RgPRouu3u0oHJiMQH0dPUHUoMfErlTFL6BqYY-7XmE_yZotd8y_kaVdORBgUCq3tKqL2Hwd1pm6_hW7WK6lR6__8VpTrLeEX7QZIkpbCf4e-3lAgpcDNXCpc9pxCo6o0fjE8/w640-h378/12400%20DDR5%20relative%20avg%20fps.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Shifting over to the 12400 (DDR5) results, the Metro Exodus bug once again rears its ugly head for the RX 6800. It's such a difficult issue to pin down!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Aside from that immediately obvious issue, the second story coming from the average fps data is &lt;i&gt;how well the RX 7800 XT is performing&lt;/i&gt;! The AMD card has closed the gap with the RTX 4070 again, now sitting in a more logical position, relative to the 4070 Super. This specifically speaks to the point I noted above regarding the 7800 XT's performance on the 12400 (DDR4) platform and, perhaps, gives us a hint as to what is going on: data management.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This has to be the only explanation there is - the bandwidth of the DDR5 memory is ~1.5x that of the 3800 MT/s DDR4 memory, which is ~1.5x the 3200 MT/s DDR4 memory of the 5600X. That's 2.2x that of the 5600X platform... It seems to me that AMD's SAM compensates for issues with data management to the VRAM across the PCIe bus which the faster DDR4 on the Intel system doesn't manage to match but which the DDR5 alleviates. 
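&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;As a rough back-of-the-envelope check (assuming standard dual-channel, 128-bit memory configurations and theoretical peak figures): DDR4-3200 works out to ~51.2 GB/s, DDR4-3800 to ~60.8 GB/s, and DDR5-6400 to ~102.4 GB/s - so the DDR5 setup has roughly double the raw bandwidth available on the 5600X's 3200 MT/s kit.&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;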
I believe that the 6800 isn't affected in the same manner because of it's monolithic design.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;What's interesting here is that the absolute values for all GPUs mostly increase - indicating that on the DDR4 system, the 12400 is constrained and that extra bandwidth provided by the DDR5 gives the chip more room to stretch its legs.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEvUojCaf-HFYCRtIqtURoRWm9DP3XN1zj7XKoU-ZZDkn7IV-rpamv3Q3RHE4X-SXKMFMNN2BAWMeXuyP7wLhvBoX1Rps_WhXTYppJ8XNYCMoO5n4Ker_sJJwcERVjB7tsi1QeYpE3XyWUC1izmd0bp0jkLhE_Rv1OmO8Z93lDrbMNX6Al0f4A22xlHns/s677/12400%20DDR5%20relative%20min%20fps.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="397" data-original-width="677" height="376" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEvUojCaf-HFYCRtIqtURoRWm9DP3XN1zj7XKoU-ZZDkn7IV-rpamv3Q3RHE4X-SXKMFMNN2BAWMeXuyP7wLhvBoX1Rps_WhXTYppJ8XNYCMoO5n4Ker_sJJwcERVjB7tsi1QeYpE3XyWUC1izmd0bp0jkLhE_Rv1OmO8Z93lDrbMNX6Al0f4A22xlHns/w640-h376/12400%20DDR5%20relative%20min%20fps.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;With DDR5, the minimum fps values of the 7800 XT return to nearer where we saw them on the 5600X...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The minimum fps values tell a similar story: the 7800 XT is greatly bolstered by having a fat data pipe feeding it.&amp;nbsp; However, that's not the only GPU which is greatly benefitting.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The RTX 3070 with its 8GB is really pushed upwards in Spider-man and Counterstrike, as is the RX 6800 in Ratchet and Clank. Though, one of the biggest gains is had by the 4070 in Counterstrike - a whopping 57 fps on the minimum!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;The Conclusion...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In certain games, all of these cards will be capable of performing better than we have seen here: being able to take advantage of a faster CPU. However, it's not wasted money to purchase up to an RTX 4070 Super on these two CPUs paired with these three platforms - it's still around 20% faster than both the 4070 non-super and 7800 XT in the majority of titles, with the notable exceptions of the really CPU-constrained titles... 
which matches well with &lt;a href="https://www.techpowerup.com/gpu-specs/geforce-rtx-4070-super.c4186"&gt;Techpowerup's listing&lt;/a&gt; for the 4070 and 5-10% more relatively performant than that &lt;a href="https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html"&gt;described by Tom's Hardware&lt;/a&gt;.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Now, both Tom's and TPU have the 7800 XT closer to the 4070 Super in performance which, likely means that the effect of the CPU is quite high in the game titles tested. My testing appears to indicate that both memory speed/bandwidth is important for some of these results and so the relative value that the mid-range consumer gets from these GPUs will vary with how much they've spent on the CPU, RAM and platform.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Lastly, the CPU overhead is still an issue for running more powerful GPUs. The RTX 4070 Super, although pretty much always top or joint top performance, suffered somewhat with these mid-range generation-old CPUs. Yes, there's still performance there in the tank - especially if you're thinking of upgrading to a 1440p monitor or running at higher graphical quality settings - but you won't get the most value for your money spent...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The upshot of all of this is if you have an AMD platform, you're well served by either the RX 7800 XT or the RTX 4070. However, if you're running an Intel platform, you're probably better off sticking to the RTX 4070 for best bang for your buck. The RTX 4070 Super, while not a waste, will mostly serve you well if you're hungering for more graphically intensive games at higher resolutions...&lt;/div&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/07/how-cpu-limited-is-modern-mid-range-pc.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9O3CULfmaWfrYyF3z4kZOwEc9qAU0KQ43PazZOfsU3RhbvbSODDTX1DeOX1vWsdoGea-9tfYU79E038rOX66QxcuaNNiL4wdKpgmCxu03Yg24g0I3nV-R8sPTqGSw7mULwfHnvbNMHXHPfvADab1gYxIeYpoJELHHGMYW0-XH_aZkXOVxuUDvCp_IJkA/s72-w640-h360-c/Header.jpg" width="72"/><thr:total>0</thr:total><author>noreply@blogger.com (The Easy Button)</author></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7560610393342650347.post-8208612858807329376</guid><pubDate>Tue, 28 May 2024 04:10:00 +0000</pubDate><atom:updated>2024-09-26T16:20:47.825+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">analysis</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">videogames</category><title>Next Gen PC gaming requirements (2023 update)</title><description>&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCQ7ipyDlvvu2hCs6EZnohyphenhyphenFz0dfMyMe_CwM1tlCAe7aCkWbbr2yDbpOWYo__FCt0ZT0wuhmVj98Jzi_TBihzqqk2pG2agLBKIZI3dWlsClQa7FDhwbgk1TPOp2r5bi-4pZ0JTTbFYRNrhRXVvvX0lk8FKxCGyrokxljjv2QFkqVKsSkNpncbipkXbX0w/s1024/Title_2023.jpg" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" 
data-original-height="681" data-original-width="1024" height="426" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCQ7ipyDlvvu2hCs6EZnohyphenhyphenFz0dfMyMe_CwM1tlCAe7aCkWbbr2yDbpOWYo__FCt0ZT0wuhmVj98Jzi_TBihzqqk2pG2agLBKIZI3dWlsClQa7FDhwbgk1TPOp2r5bi-4pZ0JTTbFYRNrhRXVvvX0lk8FKxCGyrokxljjv2QFkqVKsSkNpncbipkXbX0w/w640-h426/Title_2023.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;div&gt;It's that time for a belated look at the yearly trending of recommended system specifications. I'm mostly late because I got side-tracked by other projects, by work, and by various other life events. So, apologies for that if anyone was waiting on this data.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;As always, &lt;a href="https://docs.google.com/spreadsheets/d/1O1_0bsnKrmazhoxtyJzuzXRKGFTIjvnRheF3nlVFDu4/edit?usp=sharing"&gt;the raw data is available here&lt;/a&gt;. And, once again, I'd like to &lt;a href="https://www.shamusyoung.com/twentysidedtale/?p=54513"&gt;pay tribute to Shamus Young&lt;/a&gt;, who inspired this series with his &lt;a href="https://www.shamusyoung.com/twentysidedtale/?p=49109"&gt;Steam data analytics&lt;/a&gt;....&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;However, as they say - on with the show!&lt;span&gt;&lt;a name='more'&gt;&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;h3&gt;&lt;span style="color: #274e13;"&gt;Back to Basics...&lt;/span&gt;&lt;/h3&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Just a few short paragraphs to cover the premise of this study and trending, along with caveats and limitations:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;As with last year, I have refreshed the Passmark data, but the Geekbench data was close enough that I didn't feel it warranted the effort to refresh the CPU data covering 14 years.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The games in this study are picked by me based on what's big, popular, and/or challenging for the hardware to run. There are a LOT of games released all of the time and the vast majority of them are not difficult to run - i.e. you do not need any sort of capable hardware to play them! So, in reality, there's no reason to even trend that sort of game (though there are a few included due to the stipulations above).&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;This yearly tracking of data is meant to serve as a rough barometer for what hardware gamers should be targeting in order to play the most games by a certain year, thus helping with decision-making when purchasing hardware in the here and now. The data may also provide developers with some idea of target hardware profiles for more graphically demanding games.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Ultimately, I'll let you decide what the data means to you.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;One big caveat is that I am not correcting for frequency of required hardware in this data. 
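&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Mechanically, the counting works something like this rough Python stand-in for the spreadsheet - the game names and scores are invented purely for illustration:&lt;/div&gt;&lt;pre&gt;
# Every polled game contributes its recommended GPU, but duplicates are
# collapsed so that one very popular part can't dominate the yearly figure.
polled_2023 = [
    ("Game A", "RX 6700 XT"),
    ("Game B", "RX 6700 XT"),
    ("Game C", "RX 6700 XT"),
    ("Game D", "RX 470"),
]

unique_gpus = sorted({gpu for _, gpu in polled_2023})
print(unique_gpus)    # ['RX 470', 'RX 6700 XT'] - each part counted once

# Illustrative benchmark scores (not the real dataset) - the yearly average
# is taken over the unique parts, not over every game entry.
score = {"RX 6700 XT": 26500, "RX 470": 8200}
print(sum(score[g] for g in unique_gpus) / len(unique_gpus))
&lt;/pre&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;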
If you have ten games requiring an RX 6700 XT and one game requiring an RX 470, they both get counted once. If you wanted to extract that data for yourself, it's all up there in the spreadsheet. The reason I do this is because I want to counter the bias I have in picking the titles and also even out temporary trends in requirements - as well as over-requirements* (which we know sometimes happens!)&lt;/div&gt;&lt;div&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*Sometimes, we see requirements tables and lists with WAY overpowered hardware and struggle to understand why it is there. These results will bias the data towards the higher end and so using only a single instance of each piece of hardware will actually skew things more towards the low end. It's not perfect but it is a balance of sorts...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div&gt;Saying all of that, it does indicate that my trending is of a conservative nature and hopefully will not be overly biased towards graphically intensive titles while still providing a guide.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;This methodology does not extend to memory, where each individual game's requirement is counted.&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;h3&gt;&lt;span style="color: #274e13;"&gt;CPU Trends...&lt;/span&gt;&lt;/h3&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhX7WtSiZYNFPei1mj8RIwWMonIVUO02eI0RSnV54ZgRkgTZjx9PlvJx1H5LyNcgDBvuqos5qEltPJGkVF53FqfBNcbK3UI0u6AEzyD2DS8Ox-Q-iP067OiHeaZfmLU0ZEQksjWqVrsj0qGaOZpSd9RdaQPx3zQq-duNI6jWR0rDdkQKVGeGWuA1dgdwAQ/s751/CPU%20perf%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="427" data-original-width="751" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhX7WtSiZYNFPei1mj8RIwWMonIVUO02eI0RSnV54ZgRkgTZjx9PlvJx1H5LyNcgDBvuqos5qEltPJGkVF53FqfBNcbK3UI0u6AEzyD2DS8Ox-Q-iP067OiHeaZfmLU0ZEQksjWqVrsj0qGaOZpSd9RdaQPx3zQq-duNI6jWR0rDdkQKVGeGWuA1dgdwAQ/w640-h364/CPU%20perf%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;span style="text-align: justify;"&gt;&lt;b&gt;The downward trend for the end-point continues from 2022...&lt;/b&gt;&lt;/span&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Due to some software shenanigans, I was unable to exactly replicate the graphs I was previously using so I haven't overlaid the prior year's graph like I did &lt;a href="https://hole-in-my-head.blogspot.com/2023/01/next-gen-pc-gaming-requirements-2022.html"&gt;last time&lt;/a&gt;. However, the overall trend is clear - a drop from an expected ~14100 points in 2025 to around ~13600 points, continuing from the drop from 2021's expected 2025 result. 
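&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;If you want to play with the trending yourself, even a simple straight-line fit through yearly averages will give you a projection to compare against - a quick sketch with placeholder values (the real dataset is in the linked spreadsheet):&lt;/div&gt;&lt;pre&gt;
import numpy as np

# Placeholder yearly averages of single-core scores (NOT the real data).
years  = np.array([2019, 2020, 2021, 2022, 2023])
scores = np.array([11600, 12100, 12400, 12700, 12900])

slope, intercept = np.polyfit(years, scores, 1)   # straight-line fit
projected_2025 = slope * 2025 + intercept
print(round(projected_2025))    # about 13600 with these made-up inputs
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;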
This still corresponds to a 10600/10700K in terms of single core performance...&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNfwO0Yudj5zLNIMmnMOeO_fji1EEIuoXUPy7ipr9RxI5GjfaV898cCMvvBJurRu5CEKDV0zx9MfvADVojHRlZoLmrIh0iCGD_2-w4eMz5iCwiu57GSLCkhSoMXqkD9nUQGCHYmueqX763wD7H-8kqSAXw7ZC4WVdKeBo14_LZS7YiOaaLuoLmpKRzLhY/s749/CPU%20perf%204.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="424" data-original-width="749" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNfwO0Yudj5zLNIMmnMOeO_fji1EEIuoXUPy7ipr9RxI5GjfaV898cCMvvBJurRu5CEKDV0zx9MfvADVojHRlZoLmrIh0iCGD_2-w4eMz5iCwiu57GSLCkhSoMXqkD9nUQGCHYmueqX763wD7H-8kqSAXw7ZC4WVdKeBo14_LZS7YiOaaLuoLmpKRzLhY/w640-h362/CPU%20perf%204.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Multicore performance also continues to drop from around 7700 points to 7200 points. This time, we're actually shifting down a performance notch from an R7 3700X to a 10600K. This comes as current game titles continue to be unable to really benefit from parallelisation past 6 cores 12 threads.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;In fact, when looking at the most common number of cores/threads and their averages across the polled games, we find that most games are requiring a 6C/12T processor, with the average matching the number of physical cores and the number of logical cores staying at 10. This indicates that a good portion of games are still requiring at least an older Intel CPU that lacks hyperthreading, dragging the average down.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhd97ZfMhAcL9cYU6CCbmJgPTL1k4L-1z2DpDFt65YuVnTyrcXU4bP36x7cRxIm6LsViTL2G2gMXcXBCPfw_jOZYY7PwkDITqKxxZGlID8yrPT32bzuM86kNNkNjkbH40jhodmAZwEVrPXkyaBiLb1ve7m1jTxrWaAgdmk-1nEGe12BIOcmnHbDACrY3UQ/s749/CPU%20cores.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="423" data-original-width="749" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhd97ZfMhAcL9cYU6CCbmJgPTL1k4L-1z2DpDFt65YuVnTyrcXU4bP36x7cRxIm6LsViTL2G2gMXcXBCPfw_jOZYY7PwkDITqKxxZGlID8yrPT32bzuM86kNNkNjkbH40jhodmAZwEVrPXkyaBiLb1ve7m1jTxrWaAgdmk-1nEGe12BIOcmnHbDACrY3UQ/w640-h362/CPU%20cores.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Last year, I was predicting that the most required cores and threads would be 6 and 12, respectively (as they are this year). However, it wasn't the case. I predicted it a year early, it seems. However, this year, the prediction is spot on. 
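&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;For clarity, the core/thread figures are just the most common requirement and the plain average across the polled games - in code terms, something like this, with invented per-game entries:&lt;/div&gt;&lt;pre&gt;
from statistics import mean, mode

# Invented (physical cores, logical threads) requirements per polled game -
# the real values are in the requirements spreadsheet.
requirements = [(6, 12), (6, 12), (4, 4), (4, 8), (6, 12), (8, 16)]

cores   = [c for c, t in requirements]
threads = [t for c, t in requirements]

print(mode(cores), mode(threads))   # most common requirement, e.g. 6C/12T
print(mean(cores), mean(threads))   # averages pulled below the mode by older, non-HT parts
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;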
Let's see if things will increase to 8/16 next year, like I was predicting three years ago!&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;h3&gt;&lt;span style="color: #274e13;"&gt;GPU Trends...&lt;/span&gt;&lt;/h3&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiObcZOl3NZ61-MvpllF7KpfjlTa0xRyE3Xc09UusAsSUWdZ14BfmO3d70GqHlDtVmYe3jcbBTbgdxN-3Rr_L8wmivWDzMrxd5ejCQy8LezYUHyfwu_7uWJoFmsTghGAaoF5DMoNRs2KrdkBAkUBbm36-bXBmAuGAWBqVyc7rrf_F7B4hB9CduaakPQppQ/s749/GPU%20perf%201.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="749" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiObcZOl3NZ61-MvpllF7KpfjlTa0xRyE3Xc09UusAsSUWdZ14BfmO3d70GqHlDtVmYe3jcbBTbgdxN-3Rr_L8wmivWDzMrxd5ejCQy8LezYUHyfwu_7uWJoFmsTghGAaoF5DMoNRs2KrdkBAkUBbm36-bXBmAuGAWBqVyc7rrf_F7B4hB9CduaakPQppQ/w640-h364/GPU%20perf%201.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div&gt;The trend for required GPUs has not really changed from last year. We're looking at a 2080 Super level of performance in 2025 for rasterised games.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;But what about ray tracing titles?&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Well, I've gone through the data and put together a graph covering that but the issue we have here is that the data is still very sparse....&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFbH9z-rVjwMbw-ACC10335Fd5SEBpX3bzXKDx4osSxH5t85rczObbvcLzziwwCsmZ-siIzv7TEY1cWpMIfKTD7sQxqGo7xqHWMHB5EzzFrVHwVz3CtfHt3zAHThkX49PAIMbxJVKtpqxtKJbnhhE6-KvoWgoQnRapoS0KMGXhN6Ee8cMVt10_xqnA82c/s748/GPU%20perf%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="748" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFbH9z-rVjwMbw-ACC10335Fd5SEBpX3bzXKDx4osSxH5t85rczObbvcLzziwwCsmZ-siIzv7TEY1cWpMIfKTD7sQxqGo7xqHWMHB5EzzFrVHwVz3CtfHt3zAHThkX49PAIMbxJVKtpqxtKJbnhhE6-KvoWgoQnRapoS0KMGXhN6Ee8cMVt10_xqnA82c/w640-h364/GPU%20perf%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The main problem we have here is the disconnect between GPU power and memory amount. A 3070/4060 Ti is as powerful as a 2080 Ti but its 8 GB VRAM isn't that great for modern titles.... 
So, we'll get onto that now.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;&lt;span style="color: #274e13;"&gt;Memory Trends...&lt;/span&gt;&lt;/h3&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRo4ERVWz6exlAThMu6lFxBHiHuI63l_GxBBMPwP-J4vEPmM7yI6_NBMBYLSu-rZcO2CwrMO5eLflx_JwvJUTEU-aPyhGH9CVEDdBjjXxvyviFD2a904zslEt1F4WXLwL8UYVGy_obRaPq0vBEr5l2VAABfUK8WUJrKqnQRLIpHt6HwRlj1h8PKB1fGNA/s747/RAM%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="425" data-original-width="747" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRo4ERVWz6exlAThMu6lFxBHiHuI63l_GxBBMPwP-J4vEPmM7yI6_NBMBYLSu-rZcO2CwrMO5eLflx_JwvJUTEU-aPyhGH9CVEDdBjjXxvyviFD2a904zslEt1F4WXLwL8UYVGy_obRaPq0vBEr5l2VAABfUK8WUJrKqnQRLIpHt6HwRlj1h8PKB1fGNA/w640-h364/RAM%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div&gt;I am surprised and disappointed that games aren't utilising system memory more effectively. It might not be "smart management" to front-load data into RAM but I bet it's a hell of a lot better than requiring data to be streamed from storage - especially since we've given up on Optane as a storage format...&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;What is most worrying to me is the level of stagnation on show. We're actually regressing to 8/12 GB, like we're approaching an asymptote... 
At this rate, gaming will no longer be the driving force behind system memory upgrades; "AI" will be!*&lt;/div&gt;&lt;div&gt;&lt;blockquote&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;*Not that I believe in anything "AI" for the consumer at this point in time...&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjByVSbkEjf8O502Br5Dvc95vsB4zugWYwkis-lGjlbijy0VKYf9DV12VsuYlTsa5bSEBQTsXz2aGz7vhhjfTYUkwoF5YUMX0eOJMVGVsGbN9KMlBV-CYa5PRTeG7HEJykdUmxmoZ5-xk1g4tuVWumiH3lKUaYMjdTqxTbKeAQ6NLd7ZFkelBQFyJi-XTs/s747/VRAM%202.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="426" data-original-width="747" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjByVSbkEjf8O502Br5Dvc95vsB4zugWYwkis-lGjlbijy0VKYf9DV12VsuYlTsa5bSEBQTsXz2aGz7vhhjfTYUkwoF5YUMX0eOJMVGVsGbN9KMlBV-CYa5PRTeG7HEJykdUmxmoZ5-xk1g4tuVWumiH3lKUaYMjdTqxTbKeAQ6NLd7ZFkelBQFyJi-XTs/w640-h364/VRAM%202.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;VRAM presents a more logical progression: we're seeing an increase, which is very obviously hamstrung by the choices of Nvidia and AMD, along with the problems of the memory manufacturers getting 4 GB packages to the market.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;I understand the problem - it makes no sense to design and ship a budget GPU with a large memory interface when it needs to be small and cheap to produce and operate (thus reducing board costs).&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;As a consequence, the memory configuration on the board becomes difficult for the manufacturer, because Samsung, Micron, et al. 
have not increased memory module capacity for years now.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;They're stuck between a rock and a hard place.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjr2xmWsR3N7tF92RJWQNeMsj2TK88oEPJ1S4vT1Ok1OGTp52Wz0Zhfsv6ZTddWxnHYIm5IhQSLM-2n6drPD4I6avU2o71q7sVTPxbJ5ZajYIQg0RzZT4rPMURE_cPoD5A7wRolIq9GXSGF-n5kldqEdBylg12oiG_x3iWpe8S2GXwjlB9FsNNVLvUJ5AU/s1812/RAM%20table%201.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="394" data-original-width="1812" height="140" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjr2xmWsR3N7tF92RJWQNeMsj2TK88oEPJ1S4vT1Ok1OGTp52Wz0Zhfsv6ZTddWxnHYIm5IhQSLM-2n6drPD4I6avU2o71q7sVTPxbJ5ZajYIQg0RzZT4rPMURE_cPoD5A7wRolIq9GXSGF-n5kldqEdBylg12oiG_x3iWpe8S2GXwjlB9FsNNVLvUJ5AU/w640-h140/RAM%20table%201.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The percentage of each requirement...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;Looking at the trends for RAM and VRAM in a different format, we see a slightly more nuanced view of things. For system RAM, we are seeing that &lt;i&gt;all&lt;/i&gt;&amp;nbsp;game requirements are essentially 16 GB and the other RAM quantities are almost rounding errors. What is concerning to me is that, unlike other periods where a memory hegemon existed (e.g. 2013 and 2015/2017), there is no forward-looking quantity of RAM being requested by game-makers. In both of those periods, a larger quantity was also rising in requirements in parallel. In 2013, 4 GB was being eclipsed by 8 GB. In 2015, we were seeing the stirrings of the rise of 16 GB, whilst 40% of the market was spread across 4 and 6 GB. By 2017, there was a massive forward-looking jump to 16 GB, with 4 and 6 GB essentially going missing in action in the course of a single year.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;2023 is an oddball.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;We have been through at least two periods since 2020 where memory was &lt;i&gt;SUPER&lt;/i&gt;&amp;nbsp;cheap and modern gaming systems really should not still be outfitted with only 16 GB of memory. Even now, in a period of relatively expensive RAM, a 32 GB kit is only €20-30 more than a high-end dual-stick 16 GB DDR4 kit - and that has historically been the case, too. 
Why aren't developers asking for it?&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;I guess, it's partly due to the focus on streaming of data &lt;a href="https://hole-in-my-head.blogspot.com/2023/01/yearly-directstorage-rant-part-3.html"&gt;directly from bulk storage&lt;/a&gt; (aka SSDs) &lt;a href="https://www.youtube.com/watch?v=EZWfiqRB02w&amp;amp;ab_channel=DFClips"&gt;but that's not going so well&lt;/a&gt;*, &lt;a href="https://www.reddit.com/r/gamedev/comments/1b217o3/why_still_no_one_is_implement_directstorage/"&gt;is it&lt;/a&gt;? In a way, I feel like modern game engine design is trying to be too clever for its own good and in the process is making the experience worse for players. Other than those two reasons, I'm drawing a blank, though.&amp;nbsp; &lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;b&gt;&lt;i&gt;&lt;span style="color: #274e13;"&gt;&lt;blockquote&gt;*You might be thinking to point out that PSO stutter is not related to this, but I'm mostly thinking about loading stutter. Saying that, having data already in system memory, pre-compiled and potentially decompressed well-in advance will reduce both... Then we only have to worry about &lt;a href="https://youtu.be/zPzuDoTz4ss?si=4jBqBFI51gQjmZ0J&amp;amp;t=263"&gt;PCIe bandwidth&lt;/a&gt;.&lt;br /&gt;&lt;/blockquote&gt;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;So, we're either going to continue to stagnate at 16 GB over the next couple of years or we're going to see another strange jump like we did over the 2015-2016 period. &lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwtjBvuNrqrCITdtOxSiiOLGT-XlwHCIondeD6lV6jG_jDKI4yqduZMdD9o9h2EIlqwTWTyirxR1DZz4i6v1L7XosRPPnmwfush4R2Vtb7IFSX-YrC96kFR7lOnboCkzOjo6U4XZJdYiulbloLIaFqIv3eba-q0TvOqAOukDxddWIgBnIsxqltIcSWVMU/s1815/RAM%20table%203.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="387" data-original-width="1815" height="136" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwtjBvuNrqrCITdtOxSiiOLGT-XlwHCIondeD6lV6jG_jDKI4yqduZMdD9o9h2EIlqwTWTyirxR1DZz4i6v1L7XosRPPnmwfush4R2Vtb7IFSX-YrC96kFR7lOnboCkzOjo6U4XZJdYiulbloLIaFqIv3eba-q0TvOqAOukDxddWIgBnIsxqltIcSWVMU/w640-h136/RAM%20table%203.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;This chart shows the number of titles which required the indicated amount...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;VRAM requirements show a similar trend. This time, though, the inability of developers to require higher than 8 GB is fully understandable given the paucity of GPUs below the $500 mark with more than 8 GB... Though, the RTX 3060 (presumably mostly the 12 GB variant) &lt;a href="https://store.steampowered.com/hwsurvey/videocard/"&gt;is relatively popular&lt;/a&gt;, it is vastly outnumbered by any number of 8 GB cards (around 25% of the survey is 8 GB). 
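&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;As a reminder of why bus width and module capacity pin these configurations down, here's the back-of-the-envelope maths - a sketch using today's typical 2 GB GDDR6 modules and a hypothetical 4 GB part:&lt;/div&gt;&lt;pre&gt;
def vram_gb(bus_width_bits, module_gb, clamshell=False):
    # Each GDDR module sits on a 32-bit channel; clamshell mode doubles the
    # modules per channel at extra board cost.
    channels = bus_width_bits // 32
    modules = channels * (2 if clamshell else 1)
    return modules * module_gb

print(vram_gb(128, 2))                   # 8 GB  - typical budget card today
print(vram_gb(128, 2, clamshell=True))   # 16 GB - pricier clamshell layout
print(vram_gb(192, 2))                   # 12 GB - a 192-bit mid-ranger
print(vram_gb(128, 4))                   # 16 GB - what bigger modules would allow
&lt;/pre&gt;&lt;div style="text-align: justify;"&gt;That clamshell doubling is the route the 16 GB 4060 Ti takes, which is part of why it costs more for otherwise the same chip.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;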
We're stuck in a rut until GDDR manufacturers actually release 4 GB modules, which will allow the GPU makers to increase RAM on their lower-end, smaller-memory-bus products (clamshell designs are more expensive to design and manufacture)...&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Console Comparisons...&lt;/span&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEid9QHnLTyrz-EpsfE2qHuSG9PfNk1Yl1WT186Q1iD5T5KAptKSIonhEfG_Xus485s4jHZXTnV80UCr4bOpJbd9Fp0BP3UewgeBiOCXswBUQNFL_aNiQ9XXK4xLga2k4i5-hmceeJk9c3AazUeRxsj3wFsqwaWrgKe95Ae8xYw5Q0crByHe_9GaSF1roN4/s789/CPU%20perf%203.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="431" data-original-width="789" height="350" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEid9QHnLTyrz-EpsfE2qHuSG9PfNk1Yl1WT186Q1iD5T5KAptKSIonhEfG_Xus485s4jHZXTnV80UCr4bOpJbd9Fp0BP3UewgeBiOCXswBUQNFL_aNiQ9XXK4xLga2k4i5-hmceeJk9c3AazUeRxsj3wFsqwaWrgKe95Ae8xYw5Q0crByHe_9GaSF1roN4/w640-h350/CPU%20perf%203.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The averaged yearly CPU requirements relative to the requirements in the year of the last hardware release...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Looking over at console land, we can see that required CPU performance on the PC is not very different from the power of the Xbox Series and PS5 consoles (at the end of 2023, the averaged requirements are 98% of the theoretical performance of the XSX). Four years into the current console generation, we're looking at games not pushing CPU performance beyond what the PS5/XSX can do - with engine technology generally focussing on GPU processing.&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Out of the last three console hardware changes, we observe the lowest level of CPU advancement over the same time period. Both the PS4/XBO and the PS4 Pro/Xbox One X hardware saw around +50% for single-threaded performance in the third year after release. We're currently at +8%.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;You could argue that this is all the CPU performance that we actually need. That may be the reason - and that everything else is just pure optimisation of existing codebases to improve hardware utilisation, reduce bottlenecks, and manage data more efficiently... 
Let's hope that happens!&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCJgjfjZeMjcwPtrV1l99WUmQSa9gYM2WMpwHl79Lj3xpFgxouHoBuU8RcCdzk3jYqjkXdID14h4-WpJb6k4De1YShYmGfLqxtinagsNCB_aw_QVNnm-b4AQ-Vt4p37IGH_9m_xi-IGH-gNZikPwazdg52VSo-6f3SN0m8dYbCdPQ_0NJf6fW2Pgb3Ycw/s749/GPU%20perf%203.PNG" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="424" data-original-width="749" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCJgjfjZeMjcwPtrV1l99WUmQSa9gYM2WMpwHl79Lj3xpFgxouHoBuU8RcCdzk3jYqjkXdID14h4-WpJb6k4De1YShYmGfLqxtinagsNCB_aw_QVNnm-b4AQ-Vt4p37IGH_9m_xi-IGH-gNZikPwazdg52VSo-6f3SN0m8dYbCdPQ_0NJf6fW2Pgb3Ycw/w640-h362/GPU%20perf%203.PNG" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;&lt;b&gt;The averaged yearly GPU requirements relative to the requirements in the year of the last hardware release...&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;GPU requirement advancement tells a similar story: the GPU "power" being asked for is already enough to run almost any game. We're looking at the power of an RX 6600 (non-XT) here in 2023, which is around 80% of the estimated performance of the GPU in the XSX in rasterisation workloads.&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;At this point, it seems clear that VRAM is the bigger limiting factor when ray tracing performance is not taken into account.&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="color: #274e13;"&gt;Wrapping up...&lt;/span&gt;&lt;br /&gt;&lt;/h3&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This console generation continues to refuse to take off, with generationally-exclusive titles still feeling few and far between. Hellblade 2 just released and is a graphically demanding powerhouse that nonetheless scales well. Alan Wake 2 also pushes the boundaries of graphics but, other than those, we're not really seeing very demanding titles.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;In PC land, I think that we are in a good situation with regard to CPU and platform performance. Getting an AMD 3D V-cache Zen 3 part or a cheap Zen 4 part will provide the user with more than sufficient performance to run games for the next few years. RAM and storage are pretty cheap (despite increasing in price over the last half a year).&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&amp;nbsp;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Unfortunately, both motherboards and GPUs spoil the party, price-wise. In a sense, this is understandable. 
Motherboards (at least on AMD's side) have to account for decreased sales due to the longevity of AMD's platforms - thus, a higher buy-in price is logical as it is amortised over a longer period of time. GPUs fall flat mostly due to the cost to achieve &lt;i&gt;&lt;u&gt;good&lt;/u&gt;&lt;/i&gt; ray tracing performance (&amp;gt; $500) and because VRAM is such an issue. However, as I said, this appears to mostly be a problem due to the stagnation from the memory module manufacturers.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;This year, I attempted to trend ray tracing requirements. However, it wasn't possible - there are too few titles that include RT to trend, and even fewer which actually have separate RT recommended requirements. Maybe I will be able to start when reviewing 2024 games.&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Until then...&lt;br /&gt;&lt;/div&gt;</description><link>http://hole-in-my-head.blogspot.com/2024/05/next-gen-pc-gaming-requirements-2023.html</link><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCQ7ipyDlvvu2hCs6EZnohyphenhyphenFz0dfMyMe_CwM1tlCAe7aCkWbbr2yDbpOWYo__FCt0ZT0wuhmVj98Jzi_TBihzqqk2pG2agLBKIZI3dWlsClQa7FDhwbgk1TPOp2r5bi-4pZ0JTTbFYRNrhRXVvvX0lk8FKxCGyrokxljjv2QFkqVKsSkNpncbipkXbX0w/s72-w640-h426-c/Title_2023.jpg" width="72"/><thr:total>1</thr:total><author>noreply@blogger.com (The Easy Button)</author></item></channel></rss>