<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>IEEE Spectrum</title><link>https://spectrum.ieee.org/</link><description>IEEE Spectrum</description><atom:link href="https://spectrum.ieee.org/feeds/topic/computing.rss" rel="self"></atom:link><language>en-us</language><lastBuildDate>Thu, 23 Apr 2026 14:00:45 -0000</lastBuildDate><image><url>https://spectrum.ieee.org/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNjg4NDUyMC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTgyNjE0MzQzOX0.N7fHdky-KEYicEarB5Y-YGrry7baoW61oxUszI23GV4/image.png?width=210</url><link>https://spectrum.ieee.org/</link><title>IEEE Spectrum</title></image><item><title>What Anthropic’s Mythos Means for the Future of Cybersecurity</title><link>https://spectrum.ieee.org/ai-cybersecurity-mythos</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-cgi-image-of-a-translucent-padlock-filled-with-0s-and-1s-one-spot-is-broken-and-the-numbers-are-spraying-out-of-that-spot.jpg?id=65714765&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p>Two weeks ago, Anthropic <a href="https://red.anthropic.com/2026/mythos-preview/" rel="noopener noreferrer" target="_blank">announced</a> that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, enabling attacks that compromise the devices and services we use every day. As a result, <a href="https://spectrum.ieee.org/tag/anthropic" target="_blank">Anthropic</a> is not releasing the model to the general public, but instead to a <a href="https://www.anthropic.com/glasswing" rel="noopener noreferrer" target="_blank">limited number</a> of companies.</p><p>The news rocked the internet security community. There were few details in Anthropic’s announcement, <a href="https://srinstitute.utoronto.ca/news/the-mythos-question-who-decides-when-ai-is-too-dangerous" rel="noopener noreferrer" target="_blank">angering</a> many observers. Some speculate that Anthropic <a href="https://kingy.ai/ai/too-dangerous-to-release-or-just-too-expensive-the-real-reason-anthropic-is-hiding-its-most-powerful-ai/" rel="noopener noreferrer" target="_blank">doesn’t have</a> the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue that Anthropic is holding to its AI safety mission. 
<a href="https://www.nytimes.com/2026/04/07/opinion/anthropic-ai-claude-mythos.html" rel="noopener noreferrer" target="_blank">There’s</a> <a href="https://www.axios.com/2026/04/08/anthropic-mythos-model-ai-cyberattack-warning" rel="noopener noreferrer" target="_blank">hype</a> and <a href="https://www.artificialintelligencemadesimple.com/p/anthropics-claude-mythos-launch-is" rel="noopener noreferrer" target="_blank">counter</a>-<a href="https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier" rel="noopener noreferrer" target="_blank">hype</a>, <a href="https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities" rel="noopener noreferrer" target="_blank">reality</a> and marketing. It’s a lot to sort out, even if you’re an expert.</p><p>We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.</p><h2>How AI Is Changing Cybersecurity</h2><p>We’ve <a href="https://spectrum.ieee.org/online-privacy" target="_self">written about</a> Shifting Baseline Syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.</p><p>The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a <a href="https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/" rel="noopener noreferrer" target="_blank">while</a> that this kind of capability was coming soon. 
The question is how we <a href="https://labs.cloudsecurityalliance.org/mythos-ciso/" rel="noopener noreferrer" target="_blank">adapt to it</a>.</p><p>We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more <a href="https://danielmiessler.com/blog/will-ai-help-moreattackers-defenders" rel="noopener noreferrer" target="_blank">nuanced</a> than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find, but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.</p><p>Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.</p><p>So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also provides guidance for how to protect such systems in an era of powerful AI vulnerability-finding tools.</p><p>Unpatchable or hard-to-verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.</p><p>Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. 
These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.</p><h2>Rethinking Software Security Practices</h2><p>This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to <a href="https://www.secwest.net/ai-triage" rel="noopener noreferrer" target="_blank">test exploits</a> against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of <a href="https://www.csoonline.com/article/4069075/autonomous-ai-hacking-and-the-future-of-cybersecurity.html" rel="noopener noreferrer" target="_blank">VulnOps</a> is likely to become a standard part of the development process.</p><p>Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral <a href="https://www.csoonline.com/article/4152133/cybersecurity-in-the-age-of-instant-software.html" rel="noopener noreferrer" target="_blank">instant software</a>—code that can be generated and deployed on demand.</p><p>Will this favor <a href="https://www.schneier.com/essays/archives/2018/03/artificial_intellige.html" rel="noopener noreferrer" target="_blank">offense or defense</a>? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. 
Legacy banking and airline systems are networked.</p><p>Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.</p>]]></description><pubDate>Thu, 23 Apr 2026 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-cybersecurity-mythos</guid><category>Cybersecurity</category><category>Anthropic</category><category>Agentic-ai</category><category>Hacking</category><dc:creator>Bruce Schneier</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-cgi-image-of-a-translucent-padlock-filled-with-0s-and-1s-one-spot-is-broken-and-the-numbers-are-spraying-out-of-that-spot.jpg?id=65714765&amp;width=980"></media:content></item><item><title>AI Agent Designs a RISC-V CPU Core From Scratch</title><link>https://spectrum.ieee.org/ai-chip-design</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-graphic-design-system-plot-of-a-risc-v-cpu-core-it-resembles-a-square-grid-covered-in-colorful-vertical-and-horizontal-scratc.jpg?id=65519361&width=1245&height=700&coordinates=0%2C469%2C0%2C469"/><br/><br/><p>In 2020, researchers fine-tuned a GPT-2 model to <a href="https://arxiv.org/html/2411.11856v2" rel="noopener noreferrer" target="_blank">design fragments of logic circuits</a>; in 2023, researchers used GPT-4 <a href="https://arxiv.org/abs/2305.13243" rel="noopener noreferrer" target="_blank">to help design an 8-bit processor</a> with a novel instruction set; by 2024, a variety of LLMs could <a href="https://arxiv.org/pdf/2405.02326" rel="noopener noreferrer" target="_blank">design and test chips</a> with basic functionality, like dice rolls (though these designs were often flawed).</p><p>Now Verkor.io, an <a href="https://spectrum.ieee.org/chip-design-ai" target="_blank">AI chip design</a> startup, claims a bigger milestone: a <a href="https://spectrum.ieee.org/risc-v-laptops" target="_blank">RISC-V </a>CPU core designed entirely by an agentic AI system. The CPU, dubbed VerCore, has a clock speed of 1.5 gigahertz and performance similar to a 2011-era laptop CPU. </p><p><a href="https://www.linkedin.com/in/suresh-krishna-793506158" rel="noopener noreferrer" target="_blank">Suresh Krishna</a>, cofounder at <a href="https://verkor.io/" rel="noopener noreferrer" target="_blank">Verkor.io</a>, says the team’s key claim is that this approach is more effective than using only specialized AI systems for specialized tasks within the overall design process. “What we learned is that the better approach is to let the AI agent solve the whole problem,” he says.</p><h2>Bringing Human Workflows to Agentic AI</h2><p>Verkor.io’s agentic system is called <a href="https://arxiv.org/pdf/2603.08716" rel="noopener noreferrer" target="_blank">Design Conductor</a>, and it’s not itself an AI model. 
It’s a harness for large language models (LLMs). A harness is software that forces an AI agent to proceed through structured steps. In this case, the steps are like those a team of human chip architects would follow: design, implementation, testing, and so on. The harness also manages subagents and a database of related files.</p><p>That means it can work autonomously with only an initial prompt—in this case a 219-word design specification—from the user. (<a href="https://arxiv.org/pdf/2603.08716" target="_blank">The prompt is published in the Design Conductor paper</a>.) It outputs <a href="https://en.wikipedia.org/wiki/GDSII" rel="noopener noreferrer" target="_blank">a Graphic Design System II (GDSII) file</a>, which can be used in existing electronic design automation (EDA) software.</p><p><a href="https://www.synopsys.com/ai/agentic-ai.html" rel="noopener noreferrer" target="_blank">Synopsys</a> and <a href="https://www.cadence.com/en_US/home/ai/ai-for-design.html" rel="noopener noreferrer" target="_blank">Cadence</a>, two major players in EDA software, also have agentic AI tools. These allow chip architects to automate some tasks with AI agents. Design Conductor is different because it’s built to handle chip design from spec to completion with full autonomy, something major EDA companies have not yet touted.</p><p><a href="https://www.linkedin.com/in/ravi-k-a10287122/" target="_blank">Ravi Krishna</a>, founding engineer at Verkor.io, says Design Conductor’s workflow is “mirrored after the traditional process a human engineer might use.” It analyzes the specification, then writes and debugs a register-transfer level, or RTL, file (an abstraction of the CPU’s data flow) before iterating through subtasks like power delivery, signal timings, and layout, which are again checked against the specification. Some tasks, like layout, <a href="https://theopenroadproject.org/" target="_blank">call tools</a> to assist the agent. 
“It’s an iterative system.”</p><p>The system took 12 hours to create the VerCore design. That’s not long, and because the system uses AI agents, you might imagine the design time shrinking or growing with the number of agents thrown at it. However, Ravi Krishna says it’s not that simple, because some design tasks aren’t easily parallelized. </p><p>Still, the general improvement of AI models over time has proven essential. “I remember that around the middle of last year, we tried to build a floating-point multiplier with the models of that time. It was slightly beyond what they could do,” says Ravi Krishna. VerCore—designed in December 2025—represents an increase in capability since then. “If it can’t do it today, it’ll do it in six months,” he says. “I don’t know if that’s a scary thing or a good thing.”</p><h2>A First for AI Chip Design</h2><p>VerCore uses the RISC-V instruction set architecture (ISA), a popular open standard that’s beginning to break out of niche applications, like storage controllers, into systems on a chip (SoCs) that can power <a href="https://spectrum.ieee.org/risc-v-laptops" target="_self">laptops or smartphones</a>. The CPU’s exact clock speed is 1.48 GHz, and it achieved a score of 3,261 on the <a href="https://www.eembc.org/coremark/" rel="noopener noreferrer" target="_blank">CoreMark</a> processor core benchmark. </p><p>Verkor says this puts VerCore’s performance in line with the CPU core performance of <a href="https://www.notebookcheck.net/Intel-Celeron-Dual-Core-SU2300-Notebook-Processor.33847.0.html" rel="noopener noreferrer" target="_blank">Intel’s Celeron SU2300</a>. Whether that sounds impressive depends on your perspective. 
The Celeron SU2300, which arrived in 2011, uses Intel’s <a href="https://www.intel.com/content/dam/doc/white-paper/45nm-next-generation-core-microarchitecture-white-paper.pdf" rel="noopener noreferrer" target="_blank">Penryn CPU architecture</a>, which debuted in November 2007.<br/><br/>In other words, VerCore is no threat to leading-edge CPUs, but it’s notable for two reasons.<br/><br/>VerCore is the first RISC-V CPU core designed by an AI agent. Previous AI chip design efforts produced portions of a design, but not a complete core. Ravi Krishna says the company wanted to target a design that an AI agent hadn’t previously accomplished. “From the perspective of trying to push the limits of what AI models can do, that was interesting to us,” he says.</p><p>And while VerCore’s theoretical performance has limits, it’s enough to suggest the design could be useful. Indeed, RISC-V is popular because its ISA is free to use. RISC-V chips generally aren’t as quick as their <em>x</em>86 and Arm peers, but they’re less expensive. </p><p>There’s one final caveat worth mentioning: the chip has not been physically produced. VerCore was verified in simulation with <a href="https://github.com/riscv-software-src/riscv-isa-sim" rel="noopener noreferrer" target="_blank">Spike</a>, the reference RISC-V ISA simulator, and laid out using the open-source <a href="https://github.com/The-OpenROAD-Project/asap7" rel="noopener noreferrer" target="_blank">ASAP7 PDK</a>, an academic design kit that simulates a 7-nanometer production node. Both tools are commonly used for RISC-V design. Verkor says the CPU can run a variant of <a href="https://en.wikipedia.org/wiki/%CE%9CClinux" rel="noopener noreferrer" target="_blank">uClinux</a> in simulation. </p><p>Skeptics will have a chance to judge for themselves. Verkor.io plans to release design files at the end of April. 
This will include the VerCore CPU and several other designs recently completed by the AI agent system. Verkor also plans to show an FPGA implementation of VerCore at <a href="https://dac.com/2026" rel="noopener noreferrer" target="_blank">DAC</a>, the leading electronic design automation conference.</p><h2>Should Chip Designers Worry About AI Agents Taking Their Jobs?</h2><p>An AI chip designer that can bang out a CPU in 12 hours might seem like troubling news for flesh-and-blood engineers, but Design Conductor has its limitations. The team at Verkor.io says that despite improvements, LLMs still lack the intuition a human can bring.</p><p>Design Conductor can fall down rabbit holes that a human engineer would avoid. In one instance, the agent made a timing mistake, meaning that data did not move across the CPU in step with the clock cycle. The model didn’t recognize the cause and made broad changes while hunting for the fix. It did eventually find a fix, but only after reaching many dead ends. “Basically, we are trading off experience for compute,” says <a href="https://www.linkedin.com/in/david-chin-a5092a/" rel="noopener noreferrer" target="_blank">David Chin</a>, vice president of engineering at the startup.<br/><br/>Suresh Krishna concurs and adds that Design Conductor’s brute-force approach is likely to become less efficient as agentic systems tackle more complex designs. “It’s a nonlinear design space, so the compute grows very quickly,” he says. “As a practical matter, expert guidance and common sense helps a lot.”</p><p>Despite such issues, agentic systems like Design Conductor might speed up chip design by accelerating iteration. They may also make design accessible to small teams that otherwise lack the resources or head count to pull off a project.</p><p>“It’s not at the point where you can have one person. I would say you still need five to ten, all experts in different areas,” says Ravi Krishna. 
“That team could get you to [a production-ready chip design] at this point.”</p>]]></description><pubDate>Wed, 22 Apr 2026 11:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-chip-design</guid><category>Eda</category><category>Chip-design</category><category>Agentic-ai</category><category>Risc-v</category><category>Cpu</category><dc:creator>Matthew S. Smith</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-graphic-design-system-plot-of-a-risc-v-cpu-core-it-resembles-a-square-grid-covered-in-colorful-vertical-and-horizontal-scratc.jpg?id=65519361&amp;width=980"></media:content></item><item><title>Designing Broadband LPDA-Fed Reflector Antennas With Full-Wave EM Simulation</title><link>https://content.knowledgehub.wiley.com/efficient-design-and-simulation-of-lpda-fed-parabolic-reflector-antennas/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/wipl-d-logo.png?id=26851496&width=980"/><br/><br/><p>A practical guide to designing log-periodic dipole array fed parabolic reflector antennas using advanced 3D MoM simulation — from parametric modeling to electrically large structures.</p><p><strong>What Attendees Will Learn</strong></p><ol><li>How to set design requirements for LPDA-fed reflector antennas — Understand the key specifications including bandwidth ratio, gain targets, and VSWR matching constraints across the full operating range from 100 MHz to 1 GHz.</li><li>Why advanced 3D EM solvers enable simulation of electrically large multiscale structures — Learn how higher order basis functions, quadrilateral meshing, geometrical symmetry, and CPU/GPU parallelization extend MoM simulation capability by an order of magnitude.</li><li>How to apply a systematic three-step design strategy — Follow a proven workflow: first optimize the stand-alone LPDA for VSWR and gain, then integrate the reflector, and finally tune parameters to satisfy all performance requirements, including gain and impedance matching.</li><li>How parametric CAD modeling accelerates LPDA design — Discover how self-scaling geometry, automated wire-to-solid conversion, and multiple-copy-with-scaling features enable fully parametrized antenna models that streamline optimization across dozens of design variants.</li></ol><div><span><a href="https://content.knowledgehub.wiley.com/efficient-design-and-simulation-of-lpda-fed-parabolic-reflector-antennas/" target="_blank">Download this free whitepaper now!</a></span></div>]]></description><pubDate>Fri, 17 Apr 2026 14:00:50 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/efficient-design-and-simulation-of-lpda-fed-parabolic-reflector-antennas/</guid><category>Type-whitepaper</category><category>Broadband</category><category>Antennas</category><category>Simulation</category><dc:creator>WIPL-D</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/26851496/origin.png"></media:content></item><item><title>Stealth Signals Are Bypassing Iran’s Internet Blackout</title><link>https://spectrum.ieee.org/iran-internet-blackout-satellite-tv</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.png?id=65716479&width=1245&height=700&coordinates=0%2C700%2C0%2C701"/><br/><br/><p><strong>On 8 January 2026, </strong>the Iranian government imposed a near-total communications shutdown. It was the country’s first full information blackout: For weeks, the internet was off across all provinces while services including the government-run intranet, VPNs, text messaging, mobile calls, and even landlines were severely throttled. The lockdown left more than <a href="https://www.chathamhouse.org/2026/01/irans-internet-shutdown-signals-new-stage-digital-isolation" rel="noopener noreferrer" target="_blank">90 million people</a> cut off not only from the world, but from one another.</p><p>Since then, connectivity has never fully returned. Following <a href="https://en.wikipedia.org/wiki/2026_Iran_war" rel="noopener noreferrer" target="_blank">U.S. and Israeli airstrikes</a> in late February, Iran again imposed near-total restrictions, and people inside the country again saw global information flows dry up.</p><p>The original January shutdown came amid nationwide protests over the deepening economic crisis and political repression, with millions of people chanting antigovernment slogans in the streets. While Iranian protests have become frequent in recent years, this was one of the most significant uprisings since the Islamic Revolution in 1979. The government responded quickly and brutally. One report counted <a href="https://www.en-hrana.org/the-crimson-winter-a-50-day-record-of-irans-2025-2026-nationwide-protests/" rel="noopener noreferrer" target="_blank">more than 7,000 confirmed deaths</a> and more than 11,000 cases under investigation. 
Many sources believe the death toll could exceed 30,000.</p><p>Thirteen days into the January shutdown, we at <a href="https://www.netfreedompioneers.org/" rel="noopener noreferrer" target="_blank">NetFreedom Pioneers</a> (NFP) turned to a system we had built for exactly this kind of moment—one that sends files over ordinary satellite TV signals. During the national information vacuum, our technology, called <a href="https://www.netfreedompioneers.org/toosheh-datacasting-technology/" rel="noopener noreferrer" target="_blank">Toosheh</a>, delivered real-time updates into Iran, offering a lifeline to millions starved of trusted information.</p><h2>How Iran Censors the Internet<br/></h2><p>I joined NetFreedom Pioneers, a nonprofit focused on anticensorship technology, in 2014. Censorship in <a href="https://spectrum.ieee.org/tag/iran" target="_blank">Iran</a> was a defining feature of my youth in the 1990s. After the Islamic Revolution, most Iranians began to lead double lives—one at home, where they could drink, dance, and choose their clothing, and another in public, where everyone had to comply with stifling government laws.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Photo of a helmeted soldier with a machine gun standing in front of an Iranian flag and cell tower." class="rm-shortcode" data-rm-shortcode-id="ef533f84cc5eb097a4cfe78e30b2984b" data-rm-shortcode-name="rebelmouse-image" id="7a368" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-helmeted-soldier-with-a-machine-gun-standing-in-front-of-an-iranian-flag-and-cell-tower.jpg?id=65520617&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Iran’s internet infrastructure is more centralized than in other parts of the world, making it easier for the government to restrict the flow of information. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Morteza Nikoubazl/NurPhoto/Getty Images</small></p><p>My first experience with secret communications was when I was five and living in the small city of Fasa in southern Iran. My uncle brought home a satellite dish—dangerously illegal at the time—that allowed us to tune into 12 satellite channels. My favorite was Cartoon Network. Then, during my teenage years, this same uncle introduced me to the internet through dial-up modems. I remember using Yahoo Mail with its 4 megabytes of storage, reading news from around the world, and learning about the Chandra X-ray telescope from NASA’s website.</p><p><span>That openness didn’t last. As internet use spread in the early 2000s, the Iranian government began reshaping the network itself. Unlike the highly distributed networks in the United States or Europe, where thousands of providers exchange traffic across many independent routes, Iran’s connection to the global internet is relatively centralized. Most international traffic passes through a small number of gateways controlled by state-linked telecom operators. That architecture gives authorities unusual leverage: By restricting or withdrawing those connections, they can sharply reduce the country’s access to the outside world.</span></p><p>Over the past decade, Iran has expanded this control through what it calls the <a href="https://en.wikipedia.org/wiki/National_Information_Network" target="_blank">National Information Network</a>, a domestically routed system designed to keep data inside the country whenever possible. Many government services, banking systems, and local platforms are hosted on this internal network. 
During periods of unrest, access to the global internet can be throttled or cut off while portions of this domestic network continue to function.</p><p>The government began its censorship campaign by redirecting or blocking websites. As internet use grew, it adopted more sophisticated approaches. For example, the <a href="https://en.wikipedia.org/wiki/Telecommunication_Company_of_Iran" target="_blank">Telecommunication Company of Iran</a> uses a technique called <a href="https://www.fortinet.com/resources/cyberglossary/dpi-deep-packet-inspection" target="_blank">deep packet inspection</a> to analyze the content of data packets in real time. This method enables it to identify and block specific types of traffic, such as VPN connections, messaging apps, social media platforms, and banned websites.</p><h2>The Stealth of Satellite Transmissions<br/></h2><p>Toosheh’s communication workaround builds on a history of satellite TV adoption in Middle Eastern and North African countries. By the early 2000s, satellite dishes were common in Iran; today the majority of households have access to satellite TV despite its official prohibition.</p><p>Unlike subscription services such as DirecTV and Dish Network, “free-to-air” satellite TV broadcasts are unencrypted and can be received by anyone with a dish and receiver—no subscription required. Because the signals are open, users can also capture and store the data they carry, rather than simply watching it live. Tech-savvy people learned that they could use a digital video broadcasting (DVB) card—a piece of hardware that connects to a computer and tunes into satellite frequencies—to transform a personal computer into a satellite receiver. 
This way, they could watch and store media locally as well as download data from dedicated channels.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Photo of satellite dishes adorning the side of an apartment building." class="rm-shortcode" data-rm-shortcode-id="a558326e8ca2bd5c645e392fb0166b58" data-rm-shortcode-name="rebelmouse-image" id="577d2" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-satellite-dishes-adorning-the-side-of-an-apartment-building.jpg?id=65520620&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Many Iranian citizens have free-to-air satellite dishes, like the ones on this apartment building in Tehran, and can thus download Toosheh transmissions, giving them a lifeline during internet blackouts.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Morteza Nikoubazl/NurPhoto/Getty Images</small></p><p>Toosheh, a Persian word that translates to “knapsack,” is the brainchild of <a href="https://x.com/mehdiy_fa" target="_blank">Mehdi Yahyanejad</a>, an Iranian-American technologist and entrepreneur. Yahyanejad cofounded NetFreedom Pioneers in 2012. He proposed that the satellite-computer connections enabled by a DVB card could be re-created in software, eliminating the need for specialized hardware. He added a simple digital interface to the software to make it easy for anyone to use. The next breakthrough came when the NFP team developed a new transfer protocol that tricks ordinary satellite receivers into downloading data alongside audio and video content. Thus, Toosheh was born.</p><p>Satellite TV uses a container format called an <a href="https://en.wikipedia.org/wiki/MPEG_transport_stream" target="_blank">MPEG transport stream</a> that allows multiple audio, video, or data layers to be packaged into a single stream file. 
When you tune in to a satellite channel and select an audio option or closed captions, you’re accessing data stored in different parts of this stream. The NFP team’s insight was that, by piggybacking on one of these layers, Toosheh could send an MPEG stream that included documents, videos, and more.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="An illustration of an 8 step process for sending digital files via satellite TV signals." class="rm-shortcode" data-rm-shortcode-id="500fc02c0c38f890606e42dec590ae8f" data-rm-shortcode-name="rebelmouse-image" id="371ea" loading="lazy" src="https://spectrum.ieee.org/media-library/an-illustration-of-an-8-step-process-for-sending-digital-files-via-satellite-tv-signals.png?id=65521138&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">HOW TOOSHEH WORKS: At NetFreedom Pioneers, content curators pull together files—news articles, videos, audio, and software [1]. Toosheh’s encoder software [2] compresses the files into a bundle, in .ts format, creating an MPEG transport stream [3]. From there, it’s uploaded to a server for transmission [4] via a free-to-air TV channel on a Yahsat satellite that’s positioned over the Middle East to provide regional coverage [5]. Satellite receivers [6] directly capture the data streams, which are downloaded to computers, smartphones, and other devices, and decoded by Toosheh software [8].</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Chris Philpot</small></p><p>A satellite receiver can’t tell the difference between our data and normal satellite audio and video data since it only “sees” the MPEG streams, not what’s encoded on them. This means the data can be downloaded and read, watched, and saved on local devices such as computers, smartphones, or storage devices. 
What’s more, the system is entirely private: No one can detect whether someone has received data through Toosheh; there are no traceable logs of user activity.</p><p>Toosheh doesn’t provide internet access, but rather delivers curated data through satellite technology. The fundamental distinction lies in the way users interact with the system. Unlike traditional internet services, where you type a request into your browser and receive data in response, Toosheh operates more like a combination of radio and television, presenting information in a magazine-like format. Users don’t make requests; instead, they receive 1 to 5 gigabytes of prepackaged, carefully selected data.</p><p class="pull-quote"><span>Access to information is not only about news or politics, but about exposure to possibilities.  </span></p><p>During this year’s internet blackout, we distributed official statements from Iranian opposition leader Crown Prince Reza Pahlavi and the U.S. government. We provided first-aid tutorials for medics and injured protesters. We sent uncensored news reports from BBC Persian, Iran International, IranWire, VOA Farsi, and others. We also shared critical software packages including anticensorship and antisurveillance tools, along with how-to guides to help people securely connect to Starlink satellite terminals, allowing them to stay protected and anonymous as they sent their own communications.</p><h2>How to Combat Signal Interference<br/></h2><p>Because Toosheh relies on one-way satellite broadcasts, it evades the usual tactics governments use to block internet access. However, it remains vulnerable to <a href="https://spectrum.ieee.org/satellite-jamming" target="_blank">satellite signal jamming</a>.</p><p>The Iranian government is notorious for deploying signal jamming, especially in larger cities. 
In 2009, the government <a href="https://www.dw.com/fa-ir/%D9%86%D8%A7%D8%AA%D9%88%D8%A7%D9%86%DB%8C-%D8%AF%D8%B1-%D9%85%D9%82%D8%A7%D8%A8%D9%84-%D8%A7%D9%85%D9%88%D8%A7%D8%AC-%D9%BE%D8%A7%D8%B1%D8%A7%D8%B2%DB%8C%D8%AA-%D8%A7%D8%B2-%D8%AA%D9%87%D8%B1%D8%A7%D9%86/a-5417209" target="_blank">used uplink interference</a>, which attacks the satellite in orbit by beaming strong noise in the frequency of the satellite’s receiver. This makes it impossible for the satellite to distinguish the information it’s supposed to receive. However, because this type of attack temporarily disables the entire satellite, Iran was threatened with international <a href="https://www.dw.com/fa-ir/%D8%AA%D8%B4%D8%AF%DB%8C%D8%AF-%D8%A7%D9%86%D8%AA%D9%82%D8%A7%D8%AF%D9%87%D8%A7-%D8%A8%D9%87-%D8%A7%D8%B1%D8%B3%D8%A7%D9%84-%D9%BE%D8%A7%D8%B1%D8%A7%D8%B2%DB%8C%D8%AA-%D8%A7%D8%B2-%D8%B3%D9%88%DB%8C-%D8%A7%DB%8C%D8%B1%D8%A7%D9%86/a-5382663" target="_blank">sanctions</a> and in 2012 stopped using the method.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A chart displayed on a cellphone shows internet connectivity in Iran dropped from almost 100% to 0% on 9 January 2026." class="rm-shortcode" data-rm-shortcode-id="c5f3ef2e60cfa653b7c461cda6d68e0f" data-rm-shortcode-name="rebelmouse-image" id="c778a" loading="lazy" src="https://spectrum.ieee.org/media-library/a-chart-displayed-on-a-cellphone-shows-internet-connectivity-in-iran-dropped-from-almost-100-to-0-on-9-january-2026.jpg?id=65520652&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A graph of network connectivity in Iran shows that on 9 January 2026, internet access dropped from nearly 100 percent to 0. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Samuel Boivin/NurPhoto/Getty Images</small></p><p>The current method, called terrestrial jamming, uses antennas installed at higher elevations than the surrounding buildings to beam strong noise over a specific area in the frequency range of household receivers. This attack keeps some of the packets from arriving and corrupts others, effectively jamming the transmission. But it’s short-range and requires significant power, so it’s impossible to implement nationwide. There are always people somewhere who can still watch TV, download from Toosheh, or tune into a satellite radio despite the jamming. Even so, we wanted a workaround that would keep our transmissions broadly accessible.</p><p>NFP’s solution was to add redundancy, similar in principle to a data-storage technique called RAID (redundant array of independent disks). Instead of sending each piece of data once, we send extra information that allows missing or corrupted packets to be reconstructed. Under normal circumstances, we often use 5 percent of our bandwidth for this redundancy. During periods of active jamming, we increase that to as much as 25 to 30 percent, improving the chances that users can recover complete files despite interference.</p><h2>From Crisis Response to Public Access<br/></h2><p>Toosheh initially came online in 2015 in Iran and Afghanistan. Its full potential, however, was first realized during the 2019 protests in Iran, which saw the most widespread internet shutdown prior to the blackout this year. <a href="https://www.wired.com/story/iran-news-internet-shutdown/" target="_blank"><em>Wired</em></a> called the 2019 shutdown “the most severe disconnection” tracked by <a href="https://netblocks.org/" target="_blank">NetBlocks</a> in any country in terms of its “technical complexity and breadth.” Our technology helped thousands of people stay informed. 
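The parity redundancy NFP describes can be sketched with simple XOR parity, the same principle as RAID: one extra packet per group lets the receiver rebuild any single packet the jammer destroyed. This is a toy model; the article doesn't specify Toosheh's actual scheme, and production systems typically use stronger erasure codes such as Reed-Solomon.

```python
# Toy XOR-parity redundancy: for every group of k data packets,
# broadcast one extra parity packet (the XOR of the group) so that
# any single lost packet can be reconstructed by the receiver.
# Illustration only, not Toosheh's actual erasure code.

from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(group: list[bytes]) -> list[bytes]:
    """Append one parity packet = XOR of all data packets in the group."""
    return group + [reduce(xor, group)]

def recover(received):
    """Rebuild a group in which at most one packet was lost (None)."""
    missing = [i for i, p in enumerate(received) if p is None]
    if not missing:
        return received[:-1]               # nothing lost; drop parity
    present = [p for p in received if p is not None]
    received[missing[0]] = reduce(xor, present)
    return received[:-1]                   # drop the parity packet

group = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]   # k = 4 data packets
sent = add_parity(group)                       # 5 packets on air
damaged = sent.copy()
damaged[2] = None                              # jamming eats one packet
assert recover(damaged) == group
```

One parity packet per group of k data packets costs 1/(k+1) of the bandwidth, so groups of about 19 packets correspond to the 5 percent overhead quoted above; shrinking the groups buys the extra protection used during active jamming.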
We sent crucial local updates, legal-aid guides, digital security tools, and independent news to satellite receivers all over the country, seeing a sixfold increase in our user base.</p><p>When that wave of protests subsided, the government allowed some communication services to return. People were again able to access the free internet using VPNs and other antifilter software that allowed them to bypass restrictions. Toosheh then became a public access point for news, educational material, and entertainment beyond government filtering.</p><p>Toosheh’s impact is often personal. A traveling teacher in western Iran told NFP that he regularly distributed Toosheh files to students in remote villages. One package included footage of female athletes competing in the Olympic Games, something never broadcast in Iran. For one young girl, it was the first time she realized women could compete professionally in sports. That moment underscores a broader truth: Access to information is not only about news or politics, but about exposure to possibilities.</p><h2>The Cost of Toosheh<br/></h2><p>Unlike internet-based systems, Toosheh’s operational cost remains constant regardless of the number of users. A single TV satellite in geostationary earth orbit, deployed and maintained by an international company such as Eutelsat, can broadcast to an entire continent with no increase in cost to audiences. What’s more, the startup cost for users isn’t high: A satellite dish and receiver in Iran costs less than US $50, which is affordable to many. And it costs nothing for people to use Toosheh’s service and receive its files.</p><p class="pull-quote"><span>We aim not just to build a tool for censorship circumvention, but to redefine access itself. </span></p><p>However, operating the service is costly: NetFreedom Pioneers pays tens of thousands of dollars a month for satellite bandwidth. We had received funding from the U.S. 
State Department, but in August of 2025, that funding ended, forcing us to suspend services in Iran.</p><p>Then the December protests happened, and broadcasting to Iran became an urgent priority. To turn Toosheh back on, we needed roughly $50,000 a month. With the support of a handful of private donors, we were able to meet these costs and sustain operations in Iran for a few months, though our future there and elsewhere is uncertain.</p><h2>Satellites Against Censorship<br/></h2><p>Toosheh’s revival in Iran came alongside NFP’s ongoing support for deployments of Starlink, a satellite internet service that allows users to connect directly to satellites rather than relying on domestic networks, which the government can shut down. Unlike Toosheh’s one-way broadcasts, <a href="https://spectrum.ieee.org/tag/starlink" target="_blank">Starlink</a> provides full two-way internet access, enabling users to send messages, upload videos, and communicate with the outside world.</p><p>In 2022, we started gathering <a href="https://www.gofundme.com/f/urgent-help-deliver-starlink-and-vpn-access-for-freedom" target="_blank">donations</a> to buy Starlink terminals for Iran. We have delivered more than 300 of the <a href="https://www.theguardian.com/world/2026/jan/13/ecosystem-smuggled-tech-iran-last-link-outside-world-internet" target="_blank">roughly 50,000</a> there, enabling citizens to send encrypted updates and videos to us from inside the country. Because the technology is banned by the government, access remains limited and carries risk; Iranian authorities have recently arrested Starlink users and sellers. And unlike Toosheh’s receive-only broadcasts, Starlink terminals transmit signals back to orbit, creating a radio footprint that can potentially be detected.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A photo of a laptop screen says the user is offline." 
class="rm-shortcode" data-rm-shortcode-id="2c0caa05d5589d7d25beeb8342db442e" data-rm-shortcode-name="rebelmouse-image" id="103c7" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photo-of-a-laptop-screen-says-the-user-is-offline.png?id=65521782&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The internet shutdown in Iran continued after the attacks by Israel and the United States began in late February, preventing Iranians from communicating with the outside world and with one another.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Fatemeh Bahrami/Anadolu/Getty Images</small></p><p>Looking ahead, we envision Toosheh becoming a foundational part of global digital resilience. It is uncensored, untraceable, and resistant to government shutdowns. Because Toosheh is downlink only, it can sometimes feel hard to explain the value of this technology to those living in the free world, those accustomed to open internet access. Yet, people living under censorship have few other choices when there’s a digital blackout.</p><p>Currently, NFP is developing new features like intelligent content curation and automatically prioritizing data packages based on geographic or situational needs. And we’re experimenting with local sharing tools that allow users who receive Toosheh broadcasts to redistribute those files via Wi-Fi hotspots or other offline networks, which could extend the system’s reach to disaster zones, conflict areas, and climate-impacted regions where infrastructure may be destroyed.</p><p>We’re also looking at other use cases. Following the Taliban’s return to power in Afghanistan, NetFreedom Pioneers designed a satellite-based system to deliver educational materials. Our goal is to enable private, large-scale distribution of coursework to anyone—including the girls who are banned from Afghanistan’s schools. 
The system is technically ready but has yet to secure funding for deployment.</p><p>We aim not just to build a tool for censorship circumvention, but to redefine access itself. Whether in an Iranian city under surveillance, a Guatemalan village without internet, or a refugee camp in East Africa, Toosheh offers a powerful and practical model for delivering vital information without relying on vulnerable or expensive networks.</p><p>Toosheh is a reminder that innovation doesn’t have to mean complexity. Sometimes, the most transformative ideas are the simplest, like delivering data through the sky, quietly and affordably, into the hands of those who need it most.<span class="ieee-end-mark"></span></p><p><em>This article appears in the May 2026 print issue as “The Stealth Signals Bypassing Iran’s Internet Blackout.”</em></p>]]></description><pubDate>Wed, 15 Apr 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/iran-internet-blackout-satellite-tv</guid><category>Satellite-communications</category><category>Censorship</category><category>Iran</category><category>Protests</category><category>Democracy</category><category>Internet-shutdowns</category><dc:creator>Evan Alireza Firoozi</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/image.png?id=65716479&amp;width=980"></media:content></item><item><title>Crypto Faces Increased Threat From Quantum Attacks</title><link>https://spectrum.ieee.org/quantum-safe-crypto</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/abstract-pixel-art-resembling-a-padlock-and-token.jpg?id=65520763&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p>The <a href="https://spectrum.ieee.org/post-quantum-cryptography-standards-nist" target="_self">race</a> to transition online security protocols to ones that can’t be cracked by a quantum computer is already on. The algorithms that are commonly used today to protect data online—<a href="https://en.wikipedia.org/wiki/RSA_cryptosystem" rel="noopener noreferrer" target="_blank">RSA</a> and <a href="https://en.wikipedia.org/wiki/Elliptic-curve_cryptography" rel="noopener noreferrer" target="_blank">elliptic curve cryptography</a>—are uncrackable by supercomputers, but a large enough quantum computer would make quick work of them. There are <a href="https://spectrum.ieee.org/post-quantum-cryptography-2668949802" target="_self">algorithms</a> secure enough to be out of reach for both classical and future quantum machines, called post-quantum cryptography, but transitioning to these is a <a href="https://spectrum.ieee.org/post-quantum-cryptography-2667758178" target="_self">work in progress</a>. </p><p>Late last month, the team at <a href="https://quantumai.google/" rel="noopener noreferrer" target="_blank">Google Quantum AI</a> published a <a href="https://arxiv.org/abs/2603.28846" rel="noopener noreferrer" target="_blank">whitepaper</a> that added significant urgency to this race. In it, the team showed that the size of a quantum computer that would pose a cryptographic threat is approximately 20 times <a href="https://research.google/blog/safeguarding-cryptocurrency-by-disclosing-quantum-vulnerabilities-responsibly/" rel="noopener noreferrer" target="_blank">smaller</a> than previously thought. 
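The reason these algorithms fall to a quantum machine is Shor's algorithm, which reduces factoring an RSA modulus N (and, via the analogous discrete-logarithm problem, breaking elliptic curves) to finding the multiplicative order of a random base modulo N. Only the order-finding step needs a quantum computer. Here is a toy classical sketch of the reduction; the brute-force loop is exponential in the key size, which is exactly why 2,048-bit moduli resist classical attack:

```python
# Toy illustration of Shor's reduction of factoring to order-finding.
# A quantum computer finds the order r in polynomial time; the classical
# brute-force loop below is exponential, so it only works for tiny N.

from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod n), by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int) -> tuple[int, int]:
    """Recover a factor pair of n from the order of a (when it works)."""
    assert gcd(a, n) == 1
    r = order(a, n)
    assert r % 2 == 0                  # the reduction needs an even order
    half = pow(a, r // 2, n)
    assert half != n - 1               # and a**(r/2) != -1 (mod n)
    p = gcd(half - 1, n)
    return p, n // p

assert shor_classical(15, 7) == (3, 5)
```

What qubit estimates like the whitepaper's quantify is the cost of running that order-finding step quantumly at real key sizes.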
This is still far from accessible to the quantum computers that exist today: The largest machines currently consist of approximately 1,000 quantum bits, or qubits, and the whitepaper estimated that about 500 times as many are needed. Nonetheless, this shortens the timeline to switch over to post-quantum algorithms. </p><p>The news had a surprising beneficiary: Obscure cryptocurrency <a href="https://algorand.co/" rel="noopener noreferrer" target="_blank">Algorand</a> <a href="https://www.indexbox.io/blog/algorand-price-surges-44-after-google-research-paper-citation/" rel="noopener noreferrer" target="_blank">jumped</a> 44 percent in price in response. The whitepaper called out Algorand specifically for implementing post-quantum cryptography on its blockchain. We caught up with Algorand’s chief scientific officer and professor of computer science and engineering at the University of Michigan, <a href="https://web.eecs.umich.edu/~cpeikert/" rel="noopener noreferrer" target="_blank">Chris Peikert</a>, to understand how this announcement is impacting cryptography, why cryptocurrencies are feeling the effects, and what the future might hold. Peikert’s early work on a particular type of algorithm known as <a href="https://en.wikipedia.org/wiki/Lattice-based_cryptography" rel="noopener noreferrer" target="_blank">lattice cryptography</a> underlies most post-quantum security today.</p><p><strong>IEEE Spectrum:</strong><span> What is the significance of this Google Quantum AI whitepaper?</span></p><p><strong>Peikert:</strong> The upshot of this paper is that it shows that a quantum computer would be able to break some of the cryptography that is most widely used, especially in blockchains and cryptocurrencies, with much, much fewer resources than had previously been established. 
Those resources include the time that it would take to do so and the number of qubits (or quantum bits) that it would have to use.</p><p>This cryptography is very central to not just cryptocurrencies, but more broadly to cryptography on the internet. It is also used for secure web connections between web browsers and web servers. Versions of elliptic curve cryptography are used in national security systems and military encryption. It’s very prevalent and pervasive in all modern networks and protocols.</p><p>And not only was this paper improving the algorithms, but there was also a concurrent paper showing that the hardware itself was substantially improved. The claim here was that the number of physical qubits needed to achieve a certain kind of logical qubit was also greatly reduced. These two kinds of improvements are compounding upon each other. It’s a kind of a win-win situation from the quantum computing perspective, but a lose-lose situation for cryptography.</p><p><strong>IEEE Spectrum: </strong>What do Google AI’s findings mean for cryptocurrencies and the broader cybersecurity ecosystem?</p><p><strong>Peikert:</strong> There’s always been this looming threat in the distance of quantum computers breaking a large fraction of the cryptography that’s used throughout the cryptocurrency ecosystem. And I think what this paper did was really the loudest alarm yet that these kinds of quantum attacks might not be as far off as some have suspected, or hoped, in recent years. It’s caused a reevaluation across the industry, and a moving up of the timeline for when quantum computers might be capable of breaking this cryptography.</p><p>When we think about the timelines and when it’s important to have completed these transitions [to post-quantum cryptography], we also need to factor in the unknown improvements that we should expect to see in the coming years. The science of quantum computing will not stay static, and there will be these further breakthroughs. 
We can’t say exactly what they will be or when they will come, but you can bet that they will be coming.</p><p><strong>IEEE Spectrum:</strong> What is your guess on if or when quantum computers will be able to break cryptography in the real world?</p><p><strong>Peikert:</strong> Instead of thinking about a specific date when we expect them to come, we have to think about the probabilities and the risks as time goes on. There have been huge breakthrough developments, including not only this paper, but also <a href="https://research.google/blog/making-quantum-error-correction-work/" target="_blank">some</a> last year. But even with these, I think that the chance of a cryptographic attack by quantum computers being successful in the next three years is extremely low, maybe less than a percent. But then, as you get out to several years, like five, six, or 10 years, one has to seriously consider a probability, maybe 5 percent or 10 percent or more. So it’s still rather small, but significant enough that we have to worry about the risk, because the value that is protected by this kind of cryptography is really enormous. </p><p>The U.S. government has put 2035 as its target for migrating all of the national security systems to post-quantum cryptography. That seems like a prudent date, given the timelines that it takes to upgrade cryptography. It’s a slow process. It has to be done very deliberately and carefully to make sure that you’re not introducing new vulnerabilities, that you’re not making mistakes, that everything still works properly. So, you know, given the outlook for quantum computers on the horizon, it’s really important that we prepare now, or ideally, yesterday, or a few years ago, for that kind of transition.</p><p><strong>IEEE Spectrum: </strong>Are there significant roadblocks you see to industrial adoption of post-quantum cryptography going forward?</p><p><strong>Peikert:</strong> Cryptography is very hard to change. 
We’ve only had one or maybe two major transitions in cryptography since the early 1980s or late 1970s, when the field first was invented. We don’t really have a systematic way of transitioning cryptography. </p><p>An additional challenge is that the performance trade-offs are very different in post-quantum cryptography than they are in the legacy systems. Keys and cipher texts and digital signatures are all significantly larger in post-quantum cryptography, but the computations are actually faster, typically. People have optimized cryptography for speed in the past, and we have very good fast speeds now for post-quantum cryptography, but the sizes of the keys are a challenge. </p><p>Especially in blockchain applications, like cryptocurrencies, space on the blockchain is at a premium. So it calls for a reevaluation in many applications of how we integrate the cryptography into the system, and that work is ongoing. And, the blockchain ecosystem uses a lot of advanced cryptography, exotic things like zero-knowledge proofs. In many cases, we have rudimentary constructions of these fancy cryptography tools from post-quantum-type mathematics, but they’re not nearly as mature and industry-ready as the legacy systems that have been deployed. It continues to be an important technical challenge to develop post-quantum versions of these very fancy cryptographic schemes that are used in cutting-edge applications.</p><p><strong>IEEE Spectrum: </strong>As an academic cryptography researcher, what attracted you to work with a cryptocurrency, and Algorand in particular?</p><p><strong>Peikert:</strong> My former Ph.D. advisor is <a href="https://en.wikipedia.org/wiki/Silvio_Micali" target="_blank">Silvio Micali</a>, the inventor of Algorand. The system is very elegant. It is a very high-performing blockchain system, and it uses very little energy, has fast transaction finalization, and a number of other great features. 
And Silvio appreciated that this quantum threat was real and was coming, and the team approached me about helping to improve the Algorand protocol at the basic levels to become more post-quantum secure in 2021. That was a very exciting opportunity, because it was a difficult engineering and scientific challenge to integrate post-quantum cryptography into all the different technical and cryptographic mechanisms that were underlying the protocol.</p><p><strong>IEEE Spectrum: </strong>What is the current status of post-quantum cryptography in Algorand, and blockchains in general? </p><p><strong>Peikert:</strong> We’ve identified some of the most pressing issues and worked our way through some of them, but it’s a many-faceted problem overall. We started with the integrity of the chain itself, which is the transaction history that everybody has to agree upon. </p><p>Our first major project was developing a system that would add post-quantum security to the history of the chain. We developed a system called <a href="https://dev.algorand.co/concepts/protocol/state-proofs/" rel="noopener noreferrer" target="_blank">state proofs</a> for that, which is a mixture of ordinary post-quantum cryptography and also some more fancy cryptography: It’s a way of taking a large number of signatures and digesting them down into a much smaller number of signatures, while still being confident that these large number of signatures actually exist and are properly formed. We also followed it with other papers and projects that are about adding post-quantum cryptography and security to other aspects of the blockchain in the Algorand ecosystem. </p><p>It’s not a complete project yet. We don’t claim to be fully post-quantum secure. That’s a very challenging target to hit, and there are aspects that we will continue to work on into the near future.</p><p><strong>IEEE Spectrum: </strong>In your view, will we adopt post-quantum cryptography before the risks actually catch up with us? 
</p><p><strong>Peikert:</strong> I tend to be an optimist about these things. I think that it’s a very good thing that more people in decision-making roles are recognizing that this is an important topic, and that these kinds of migrations have to be done. I think that we can’t be complacent about it, and we can’t kick the can down the road much longer. But I do see that the focus is being put on this important problem, so I’m optimistic that most important systems will eventually have either good mitigations or full migrations in place. </p><p>But it’s also a point on the horizon that we don’t know exactly when it will come. So, there is the possibility that there is a huge breakthrough, and we have many fewer years than we might have hoped for, and that we don’t get all the systems upgraded that we would like to have fixed by the time quantum computers arrive.</p>]]></description><pubDate>Wed, 15 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/quantum-safe-crypto</guid><category>Quantum-computing</category><category>Post-quantum-cryptography</category><category>Cryptocurrency</category><category>Lattice-cryptography</category><category>Security-protocols</category><category>Blockchain</category><category>Cryptography</category><dc:creator>Dina Genkina</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/abstract-pixel-art-resembling-a-padlock-and-token.jpg?id=65520763&amp;width=980"></media:content></item><item><title>Squishy Photonic Switches Promise Fast Low-Power Logic</title><link>https://spectrum.ieee.org/soft-photonics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-a-micropipette-piercing-through-a-hemisphere-shaped-membrane-to-inject-a-droplet-at-its-core.jpg?id=65506297&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p><span>Photonic devices, which rely on light instead of electricity, have the potential to be faster and more energy efficient than today’s electronics. They also present a unique opportunity to develop devices using <a href="https://spectrum.ieee.org/soft-robot-actuators-bugs" target="_self">soft materials</a>, such as polymers and gels, which are poor conductors of electricity but are easier to manufacture and more environmentally friendly. The development of these potentially squishy, <a href="https://spectrum.ieee.org/wearable-sensors" target="_self">flexible photonics</a>, however, requires the ability to manipulate light using only light, not electricity.</span></p><p>In soft matter, that’s been done primarily by changing the physical properties of optical materials or by using intense light pulses to change the direction of light. Now, an international team of scientists has developed a new way of controlling light with light using very low light intensities and without changing any of the physical properties of materials. </p><p><a href="https://musevic.fmf.uni-lj.si/" target="_blank"><span>Igor Muševič</span></a>, a professor of physics at the University of Ljubljana who led the project, says that he first got the idea for the device while at a conference in San Francisco, listening to a talk by <a href="https://www.nobelprize.org/prizes/chemistry/2014/hell/facts/" target="_blank">Stefan W. Hell </a>about stimulated emission depletion (STED) microscopy. The imaging technique, for which Hell won a <a href="https://www.nobelprize.org/prizes/chemistry/2014/summary/" target="_blank">Nobel Prize in Chemistry in 2014</a>, uses two lasers to produce an extremely small light beam to scan objects. 
“When I saw this, I said, This is manipulation of light by light, right?” Muševič recalls.</p><p><span>His realization inspired a device into which a laser pulse is fired. Whether this beam makes it out of the device depends on whether a second pulse is fired less than a nanosecond afterwards.</span></p><h2>A liquid crystal photonic switch</h2><p><span>The device consists of a spherically shaped bead of liquid crystal, held in shape by its elastic material properties and the forces between its molecules, infused with a fluorescent dye and trapped between four upright cone-shaped polymer structures that guide light in and out of the device. When a laser pulse is sent through one of the four polymer waveguides, the light is quickly transferred into the liquid crystal, exciting the fluorescent dye. In a process known as whispering gallery mode resonance, the photons inside the liquid crystal are reflected back inside each time they hit the liquid’s spherical surface. The result is that light circulates inside the cavity until it is eventually reflected into one of the waveguides, which then emits the photons out in a laser beam. </span></p><p>The team realized that sending a second laser pulse of a different color into the waveguides before the liquid crystal started emitting light from the first laser pulse resulted in stimulated emission of the excited dye molecules. The photons from the second laser pulse, which had to be fired into the waveguides after the first laser pulse, interact with the already-excited dye molecules. The interaction causes the dye to emit photons identical to those in the second pulse while depleting the energy from the first pulse. The second laser beam, called the STED beam, is amplified by the process, while the light from the first pulse is so diminished that it isn’t emitted at all. 
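Viewed as a logic element, the switch's behavior can be summarized in a small truth-table model. This is a conceptual reading of the setup described above, not the authors' formalism:

```python
# Conceptual truth-table model of the photonic switch: the first (pump)
# pulse only exits the cavity if no STED pulse follows within the
# sub-nanosecond window, and the STED pulse is only amplified if the
# dye was first excited by a pump pulse.

def switch(pump: bool, sted: bool) -> dict[str, bool]:
    return {
        "pump_emitted": pump and not sted,   # inhibited by the STED pulse
        "sted_amplified": pump and sted,     # amplification needs excited dye
    }

# The pump output alone realizes A AND NOT B, an "inhibit" gate, one of
# the primitives from which photonic logic circuits can be composed.
assert switch(True, False) == {"pump_emitted": True, "sted_amplified": False}
assert switch(True, True) == {"pump_emitted": False, "sted_amplified": True}
assert switch(False, True) == {"pump_emitted": False, "sted_amplified": False}
```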
Because the outcome of the first laser pulse could be controlled using the second laser pulse, the team had successfully demonstrated the control of light by light.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="0cb7a5df3d8c2896d2f429edfd746f29" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/mImgOT2zJ0I?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Vandna Sharma, Jaka Zaplotnik, et al.</small> </p><p><span>According to the Ljubljana team, the energy efficiency of the liquid crystal approach is much better than previous soft-matter techniques, which had typically involved using intense light fields to change material properties of the soft matter, such as the index of refraction. The new method reduces the energy needed by more than a factor of a hundred. Because the STED laser pulse circulates repeatedly in the crystal, a single photon can deplete many dye molecules of the energy from the first laser pulse.</span> </p><p><a href="https://ravnik.fmf.uni-lj.si/" target="_blank">Miha Ravnik</a>, a theoretical physicist also at the University of Ljubljana who worked on the project, explains that control of light by light is essential in soft-matter photonic logic gates. “You can very much control when [light] is generated and in which direction,” Ravnik says of the light shined into the polymer waveguides. “And this gives you, then, this capability that you create logical operations with light.”</p><p>Aside from its potential in photonic logical circuits, the team’s approach presents several technical advantages over photonics made from silicon or other hard materials, Muševič says. For example, using soft matter greatly simplifies the manufacturing process. 
The liquid crystal in the team’s device can be inserted in less than a second, but manufacturing a similar structure with hard materials is difficult. Additionally, soft-matter devices can be manufactured at much lower temperatures than silicon and other hard materials. Muševič also points out that soft matter presents an opportunity to experiment with the geometry of the device. With liquid crystals “you can make many different kinds of cavities,” says Muševič. “You have, I would say, a lot of engineering space.”</p><p>Ravnik is excited for the potential of the team’s breakthrough, particularly as a step toward <a href="https://spectrum.ieee.org/generative-optical-ai-nature-ucla" target="_self">photonic computing</a> and even photonic neural networks. But, he recognizes that these developments are far down the line. “There’s no way this technology can compete with current neural network implementation at all,” he admits. Still, the possibilities are tantalizing. “The energy losses are predicted to be extremely low, the speeds for calculation extremely high.”</p>]]></description><pubDate>Mon, 13 Apr 2026 12:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/soft-photonics</guid><category>Flexible-circuits</category><category>Photonics</category><category>Optical-switch</category><dc:creator>Velvet Wu</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-of-a-micropipette-piercing-through-a-hemisphere-shaped-membrane-to-inject-a-droplet-at-its-core.jpg?id=65506297&amp;width=980"></media:content></item><item><title>HIPPO Turns One Master Password Into Many Without Storing Any</title><link>https://spectrum.ieee.org/storeless-password-manager</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/personified-dots-sneaking-out-of-a-hidden-password-field-on-a-login-page.jpg?id=65493656&width=1245&height=700&coordinates=0%2C62%2C0%2C63"/><br/><br/><p>
<em>This article is part of our exclusive <a href="https://spectrum.ieee.org/collections/journal-watch/" target="_blank">IEEE Journal Watch series</a> in partnership with IEEE Xplore.</em>
</p><p>Most people are all too familiar with attempting to type out a password multiple times—only to get locked out of their accounts, triggering a vicious cycle of new passwords that are quickly forgotten. <a data-linked-post="2650276006" href="https://spectrum.ieee.org/qa-paul-grassi-of-nist-on-what-makes-a-strong-password" target="_blank">Password managers</a> can be a helpful solution to sidestep this issue but also come with some risk if the saved passwords become compromised.</p><p>However, there is a different solution that does not involve saving passwords on a server. Instead, it requires only a single master password that is easily remembered or written down on paper.</p><p>In a recent study, researchers found that people are willing to complete one extra step to access their accounts using the approach, which they reported finding more secure and easier to use than traditional manual password entry. The <a href="https://ieeexplore.ieee.org/document/11415666" rel="noopener noreferrer" target="_blank">results</a> were published 27 February in <a href="https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4236" rel="noopener noreferrer" target="_blank"><em>IEEE Internet Computing</em></a>.</p><h2>HIPPO Password Manager Security</h2><p>Remembering multiple passwords across various accounts is a challenge for many people. Password managers avoid the need to memorize every single password by storing encrypted passwords in a secure online “vault.” Still, these vaults can be hacked. Malicious attackers can break into the vaults’ servers or steal the passwords by hacking the user’s own computer.</p><p>To overcome these challenges, a team of researchers created HIPPO (Hidden-Password Online Password), a password manager that doesn’t store any passwords and works as a browser extension.</p><p>The user needs to remember or write down only one master password. 
As they visit each site that requires them to log in, they enter their master password, and then HIPPO generates a site-specific password on the spot.</p><p>To do so, HIPPO’s browser extension first applies a <a data-linked-post="2650267393" href="https://spectrum.ieee.org/new-king-of-security-algorithms-crowned" target="_blank">cryptographic function</a> called an <a href="https://en.wikipedia.org/wiki/Oblivious_pseudorandom_function" rel="noopener noreferrer" target="_blank">oblivious pseudorandom function</a> to the master password. The result is a website-specific “masked” password that is sent to the HIPPO server. That server applies its own secret cryptographic key and sends the result back. The browser then removes its temporary mask and uses the result to generate the site-specific password on the spot. </p><p>“You can think of it as a calculator computing the exact same complex password on the spot every time you visit the site, eliminating the need to save it anywhere,” says <a href="https://sa.linkedin.com/in/mohammed-jubur-302495105" rel="noopener noreferrer" target="_blank">Mohammed Jubur</a>, an assistant professor of computer science at Jazan University, in Saudi Arabia, and co-creator of HIPPO. “Neither the master secret nor the derived site password is ever stored locally or remotely—the fresh-derived [password] is simply autofilled into the target site’s login field.”</p><h2>User Perception and Trust for Password Managers</h2><p>In their study, Jubur and colleagues had 25 volunteers provide feedback on HIPPO, comparing it to traditional manual password entry. Participants were tasked with setting up their accounts with a password given to them on a piece of paper. They were then asked to log in to a site 10 times using traditional manual password entry, and another 10 times using HIPPO. 
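In outline, the masked exchange described above can be sketched in a few lines of Python. This is a toy illustration of a generic Diffie-Hellman-style oblivious pseudorandom function, not HIPPO’s published code: the modular group, key sizes, and function names here are illustrative assumptions, and real deployments use vetted elliptic-curve OPRFs.

```python
import hashlib
import secrets

# Toy parameters for illustration only -- NOT cryptographically secure.
P = 2**127 - 1  # a Mersenne prime; we work in the multiplicative group mod P

def hash_to_group(data: bytes) -> int:
    """Map arbitrary bytes to a nontrivial group element mod P."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % P
    return h if h > 1 else 2  # avoid the degenerate elements 0 and 1

def blind(master_password: str, site: str):
    """Browser side: mask the hash of (master password, site) with random r."""
    x = hash_to_group(f"{master_password}|{site}".encode())
    while True:
        r = secrets.randbelow(P - 1)
        try:
            r_inv = pow(r, -1, P - 1)  # needs gcd(r, P-1) == 1
            return pow(x, r, P), r_inv
        except ValueError:
            continue  # r not invertible mod P-1; pick another

def server_evaluate(masked: int, server_key: int) -> int:
    """Server side: apply its secret key without ever seeing the password."""
    return pow(masked, server_key, P)

def unblind_and_derive(evaluated: int, r_inv: int, site: str) -> str:
    """Browser side: strip the mask, then derive the site-specific password."""
    y = pow(evaluated, r_inv, P)  # equals H(master|site)^key, mask removed
    return hashlib.sha256(f"{y}|{site}".encode()).hexdigest()[:16]
```

Because the random blinding factor cancels out, the browser recomputes the identical site password on every visit, the server never learns the master password, and neither side has to store the result.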
Participants rated their experience with each approach, evaluating different factors such as perceived security and ease of use.</p><p>The results show that users rated both approaches with a “good” usability score. However, Jubur notes, “participants perceived HIPPO to be significantly more secure and trustworthy compared to traditional password-only authentication.”</p><p>Whereas HIPPO received an average score of 4.04 out of 5 for perceived security, traditional manual password entry received a score of 3.09. Users also reported higher trust scores for HIPPO, at 4.00, compared to 3.30 for traditional password entry.</p><p>The researchers were surprised to learn that users also reported HIPPO as being easier to use—even though it requires an extra activation step, such as pressing F2 or entering a prefix like “@@” to activate the password-generation mode.</p><p>“We initially expected HIPPO’s usability to be merely comparable [to traditional password entry],” explains <a href="https://engineering.tamu.edu/cse/profiles/Saxena-Nitesh.html" rel="noopener noreferrer" target="_blank">Nitesh Saxena</a>, a professor in the Department of Computer Science and Engineering and associate director of the Global Cyber Research Institute (GCRI) at Texas A&M University, who co-created HIPPO with Jubur. 
“However, participants found the cognitive burden of repeatedly typing a complex random password to be so substantial that even a tool with an extra step improved their experience.”</p><p>The researchers note that this was a small-scale, single-session study, so a follow-up study over a longer period is needed to explore HIPPO’s performance.</p><p>Jubur adds that, in future work, the team plans to evaluate longer-term life-cycle events, such as measuring the completion time, error rates, and lockout risk associated with HIPPO.</p><p>For example, he says, “we also plan to evaluate the user experience and the risk of account lockouts when a user needs to change their master password, which forces them to update their credentials across all their connected websites.”</p>]]></description><pubDate>Sat, 11 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/storeless-password-manager</guid><category>Cryptography</category><category>Passwords</category><category>Journal-watch</category><dc:creator>Michelle Hampson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/personified-dots-sneaking-out-of-a-hidden-password-field-on-a-login-page.jpg?id=65493656&amp;width=980"></media:content></item><item><title>Chip Can Project Video the Size of a Grain of Sand</title><link>https://spectrum.ieee.org/mems-photonics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-array-of-tiny-metallic-cantilevers-curving-away-from-the-surface-of-a-photonic-chip.jpg?id=65493217&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p><span>By many estimates, quantum computers will need <a href="https://spectrum.ieee.org/neutral-atom-quantum-computing" target="_blank">millions of qubits </a>to realize their potential applications in cybersecurity, drug development, and other industries. The problem is, anyone who has wanted to simultaneously control millions of a certain kind of qubit has faced the challenge of trying to control millions of laser beams. </span> </p><p><span>That’s exactly the challenge faced by scientists working on the <a href="https://www.mitre.org/resources/quantum-moonshot" target="_blank">MITRE Quantum Moonshot project</a>, which brought together scientists from MITRE, MIT, the University of Colorado at Boulder, and Sandia National Laboratories. The solution they developed came in the form of an image projection technology that they realized could also be the fix for a host of other challenges in augmented reality, biomedical imaging, and elsewhere. The device is a 1-square-millimeter photonic chip capable of projecting the Mona Lisa onto an area smaller than the size of two human egg <a href="https://spectrum.ieee.org/embryo-electrode-array" target="_blank">cells</a>. </span> </p><p><span>“When we started, we certainly never would have anticipated that we would be making a technology that might revolutionize imaging,” says Matt Eichenfield, one of the leaders of the Quantum Moonshot project, a collaborative research effort focused on developing a scalable, diamond-based quantum computer, and a professor of quantum engineering at the University of Colorado at Boulder. Each second, their chip is capable of projecting 68.6 million individual spots of light, called scannable pixels to differentiate them from physical pixels. 
That’s more than 50 times the capability of previous technology, such as <a href="https://spectrum.ieee.org/mems-lidar" target="_blank">micro-electromechanical systems (MEMS) micromirror arrays</a>.</span></p><p> <span>“We have now made a scannable pixel that is at the absolute limit of what diffraction allows,” says <a href="https://www.linkedin.com/in/y-henry-wen-2b41979/" target="_blank">Henry Wen</a>, a visiting researcher at MIT and a photonics engineer at <a href="https://www.quera.com/" target="_blank">QuEra Computing</a>.</span></p><p>The chip’s distinguishing feature is an array of tiny microscale cantilevers, which curve away from the plane of the chip in response to voltage and act as miniature “ski jumps” for light. Light is channeled along the length of each cantilever via a waveguide and exits at its tip. The cantilevers contain a thin layer of aluminum nitride, a piezoelectric that expands or contracts under voltage, thus moving the micromachine up and down and enabling the array to scan beams of light over a two-dimensional area.</p><p>Despite the magnitude of the team’s achievement, Eichenfield says that the process of engineering the cantilevers was “pretty smooth.” Each cantilever is composed of a stack of several submicrometer layers of material and curls approximately 90 degrees out of the plane at rest. To achieve such a high curvature, the team took advantage of differences in the contraction and expansion of individual layers caused by physical stresses in the material resulting from the fabrication process. The materials are first deposited flat onto the chip. Then, a layer in the chip below the cantilever is removed, allowing the material stresses to take effect, releasing the cantilever from the chip and allowing it to curl out. 
The top layer of each cantilever also features a series of silicon dioxide bars running perpendicular to the waveguide, which keep the cantilever from curling along its width while also improving its lengthwise curvature.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="5525c992b93704c6dfdada2cd2c1d9c2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/A4-ZqQTZauw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">A micro-cantilever wiggles and waggles to project light in the right place.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Matt Saha, Y. Henry Wen, et al.</small></p><p>What was more of a challenge than engineering the chip itself was figuring out the details of actually making the chip project images and videos. Working out the process of synchronizing and timing the cantilevers’ motion and light beams to generate the right colors at the right time was a substantial effort, according to <a href="https://www.linkedin.com/in/agreenspon/" target="_blank">Andy Greenspon</a>, a researcher at MITRE who also worked on the project. Now, the team has successfully projected a variety of videos from a single cantilever, including clips from the movie <em><em><a href="https://www.youtube.com/watch?v=GPG3zSgm_Qo&list=PLnvfBuirq7alZgA0yGBnNObE5CeJTpUW4" target="_blank">A Charlie Brown Christmas</a></em></em>. </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A warped projection of the Mona Lisa." 
class="rm-shortcode" data-rm-shortcode-id="a4e5294e1a010872e545dbc18fb0e208" data-rm-shortcode-name="rebelmouse-image" id="a1039" loading="lazy" src="https://spectrum.ieee.org/media-library/a-warped-projection-of-the-mona-lisa.jpg?id=65493253&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The chip projected a roughly 125-micrometer image of the Mona Lisa.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://www.nature.com/articles/s41586-025-10038-6" target="_blank">Matt Saha, Y. Henry Wen, et al.</a></small></p><p>Because the chip can project so many more spots in any given time interval than any previous beam scanners, it could also be used to control many more qubits in quantum computers. The Quantum Moonshot program’s mission is to build a quantum computer that can be scaled to millions of qubits. So clearly, it needs a scalable way of controlling each one, explains Wen. Instead of using one laser per qubit, the team realized that not every qubit needed to be controlled at every given moment. The chip’s ability to move light beams over a two-dimensional area would allow them to control all of the qubits with many fewer lasers. </p><p>Another process that Wen thinks the chip could improve is scanning objects for <a href="https://spectrum.ieee.org/3d-printed-linear-motor" target="_blank">3D printing</a>. Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen. </p><p>Wen is also excited to explore the potential of different cantilever shapes. By changing the orientations of the bars perpendicular to the waveguide, the team has been able to make the cantilevers curl into helixes. 
Wen says that such unusual shapes could be useful in making a <a href="https://spectrum.ieee.org/neurobot-living-robot-nervous-system" target="_blank">lab-on-a-chip for cell biology</a> or <a href="https://spectrum.ieee.org/lab-on-a-chip-grippers" target="_blank">drug development</a>. “A lot of this stuff is imaging, scanning a laser across something, either to image it or to stimulate some response. And so we could have one of these ski jumps curl not just up, but actually curl back around, and then move around and scan over a sample,” Wen explains. “If you can imagine a structure that will be useful for you, we should try it.”</p>]]></description><pubDate>Thu, 09 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/mems-photonics</guid><category>Microarray</category><category>Digital-micromirror-device</category><category>Mems</category><category>Quantum-computers</category><category>Nitrogen-vacancy-defects-diamond</category><dc:creator>Velvet Wu</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-array-of-tiny-metallic-cantilevers-curving-away-from-the-surface-of-a-photonic-chip.jpg?id=65493217&amp;width=980"></media:content></item><item><title>AI Models Trained on Physics Are Changing Engineering</title><link>https://spectrum.ieee.org/large-physics-models-design-engineering</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/diagram-of-airflow-over-a-moving-sedan.jpg?id=65494121&width=1245&height=700&coordinates=0%2C62%2C0%2C63"/><br/><br/><p>Large language models have already <a href="https://spectrum.ieee.org/best-ai-coding-tools" target="_self">transformed</a> software engineering, for better or worse. Now, so-called large physics models are also starting to transform design engineering. These tools are beginning to replace—or at least augment—the role of full-fledged physics simulation in the automotive and aerospace industries, semiconductor engineering, and more.</p><p>Before the advent of computer simulation, a car manufacturer, for example, would create prototypes to test its designs, says <a href="https://www.linkedin.com/in/thomas-von-tschammer/" rel="noopener noreferrer" target="_blank">Thomas von Tschammer</a>, managing director at physics-based AI company <a href="https://www.neuralconcept.com/" rel="noopener noreferrer" target="_blank">Neural Concept</a>. “For the past 40 years, we reduced a lot of the need for prototypes by using numerical simulations for aerodynamics, for crash testing, and so on.” Now, von Tschammer explains, AI is drastically reducing the need for simulation, the same way simulation reduced the need for physical prototypes.</p><p>Growing adoption of this type of AI was a topic of interest at <a href="https://www.nvidia.com/gtc/" rel="noopener noreferrer" target="_blank">Nvidia GTC</a> in March. <a href="https://www.linkedin.com/in/chris-johnston-/" rel="noopener noreferrer" target="_blank">Chris Johnston</a>, senior technical specialist at Jaguar Land Rover, <a href="https://www.nvidia.com/en-us/on-demand/session/gtc26-s81736/?playlistId=gtc26-industrial-engineering" rel="noopener noreferrer" target="_blank">presented</a> how his company is using Neural Concept’s technology. 
<a href="https://www.physicsx.ai/" rel="noopener noreferrer" target="_blank">PhysicsX</a>, another physics-based AI company, <a href="https://www.physicsx.ai/newsroom/physicsx-announces-advancement-to-open-standards-for-physics-ai-powered-by-nvidia" rel="noopener noreferrer" target="_blank">announced</a> a collaboration with Nvidia to advance open standards for such models, also at GTC.</p><h2>The AI design engineering workflow</h2><p>Over the past six months, <a href="https://www.gm.com/" rel="noopener noreferrer" target="_blank">General Motors</a> (GM) has introduced large physics models into their car design process to speed up the workflow. </p><p>Previously, a creative design engineer would develop a 3D model of a new car concept. This model would be sent to aerodynamics specialists, who would run physics simulations to determine the coefficient of drag of the proposed car—an important metric for energy efficiency of the vehicle. This simulation phase would take about two weeks, and the aerodynamics engineer would then report the drag coefficient back to the creative designer, possibly with suggested modifications.</p><p>Now, GM has trained an in-house large physics model on those simulation results. The AI takes in a 3D car model and outputs a coefficient of drag in a matter of minutes. “We have experts in the aerodynamics and the creative studio now who can sit together and iterate instantly to make decisions [about] our future products,” says <a href="https://www.linkedin.com/in/rdstrauss/" rel="noopener noreferrer" target="_blank">Rene Strauss</a>, director of virtual integration engineering at GM. </p><p>For GM and other companies, running inference on an AI model trained on physics simulations, instead of running the simulation itself, can bring immense time savings. 
“Depending on the kinds of physics [being simulated], or the resolution, it can be anywhere between 10,000 to close to a million times faster,” says <a href="https://www.linkedin.com/in/jacomo-corbo/" rel="noopener noreferrer" target="_blank">Jacomo Corbo</a>, CEO and co-founder of PhysicsX.</p><h2>How accurate are large physics models?</h2><p>But what about accuracy? For GM’s purposes, Strauss says accuracy is not a huge concern at the design stage because finer details are ironed out later in the process. “When it really starts to matter is when we’re getting close to launching a vehicle, and the coefficient of drag is going to be used for our energy calculation, which eventually goes to the certification of our miles per gallon on the sticker.” At that stage, Strauss says, a physical model of the car will be put into a wind tunnel for an exact number.</p><p>PhysicsX’s Corbo argues that, with the right data, the accuracy of an AI model can surpass that of the simulation it’s trained on. The trick is to incorporate experimental measurements to fine-tune the model. If a physics simulation doesn’t agree exactly with experimental data, it is often difficult to figure out why and tweak the model until they agree. With AI, incorporating a few experimental examples into the training process is a lot more straightforward, and it’s not necessary to understand where exactly the model went wrong.</p><p>All in all, by drastically bringing down the time it takes to model the physics, large physics models enable engineers to explore a much greater range of possibilities before a final design is reached. </p><h2>Training large physics models</h2><p>There is no one-size-fits-all approach to training large physics models. 
Depending on the types of data available, and the physics in question, the models may use the <a href="https://spectrum.ieee.org/what-is-generative-ai" target="_self">transformer</a> architecture that underlies LLMs, a generalized version of convolutional neural networks known as <a href="https://dataroots.io/blog/a-gentle-introduction-to-geometric" rel="noopener noreferrer" target="_blank">geometric deep learning</a>, or an architecture that can solve partial differential equations called <a href="https://zongyi-li.github.io/neural-operator/" rel="noopener noreferrer" target="_blank">neural operators</a>.</p><p>Currently, most companies are training their own models on their simulation data, catering to specific use cases. In GM’s aerodynamics implementation, there are different AI models for different types of cars: think SUVs versus sedans. But PhysicsX’s Corbo says his team is working on building more “foundational” physics models that can be applied across different scenarios.</p><p>Both <a href="https://arxiv.org/pdf/2001.08361" rel="noopener noreferrer" target="_blank">LLMs</a> and <a href="https://spectrum.ieee.org/solve-robotics" target="_self">robotics</a> have benefited from scaling laws, which describe how a system improves as the models increase in size or get trained on more data. In AI, models tend to improve quickly, in a nonlinear way. Along the way, the models also become more generalizable—extending them to new settings takes less and less fine-tuning to reach the same accuracy. Corbo says his team is now starting to see the same types of scaling laws for large physics models.</p><p>“What we’re seeing here is maybe a little bit unsurprising,” Corbo says, “but it’s also pretty incredible. 
And it’s given us the confidence to make these models bigger, because they perform a whole lot better, and they cover broader domains, and they have these really amazing emergent properties.”</p><p>Developing open standards for the data formats used in training, as well as the model architectures, should help develop these more powerful foundational models. That’s the goal of PhysicsX’s collaboration with Nvidia, and of Nvidia’s <a href="https://developer.nvidia.com/blog/physics-ml-platform-physicsnemo-is-now-open-source/" rel="noopener noreferrer" target="_blank">physicsNeMo</a> open source platform.</p><p>“The thing that we’re collaborating on is being able to compose architectures from building blocks,” Corbo says, making it easy for those in both academia and industry to reuse and build upon existing models.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A physics-based AI software being used to generate a 3D geometry of a data center server PCB ready to be run in computational fluid dynamics." class="rm-shortcode" data-rm-shortcode-id="617060afa65951af85acad5c9b9c8708" data-rm-shortcode-name="rebelmouse-image" id="70269" loading="lazy" src="https://spectrum.ieee.org/media-library/a-physics-based-ai-software-being-used-to-generate-a-3d-geometry-of-a-data-center-server-pcb-ready-to-be-run-in-computational-fl.jpg?id=65494125&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A type of AI called a large physics model is used by an engineer to quickly generate heat flow in a 3D data center server design. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Neural Concept</small></p><h2>The long-term role of simulations and engineers</h2><p>While some are working on developing more powerful models, others are pushing to implement what’s already available into existing workflows, which is no easy task. “With any innovation, it’s not a straight line. 
There’s some steps forward and then some steps back and improvements that we find along the way. But that’s part of the joy of the innovation process and using new tools like this,” GM’s Strauss says.</p><p>This technology is still in the early stages, and it’s unclear what the final role of AI tools will be in the engineering workflow. For one, opinions vary on whether AI will replace simulations completely, or just reduce their use.</p><p>“We will never fully replace simulations,” Neural Concept’s von Tschammer says. “But the idea is to make a much smarter usage of simulation at the most major phase of developments, and you use AI to speed up the early design stages, where you need to explore a very wide set of options.”</p><p>PhysicsX’s Corbo begs to differ. “The whole idea is to take numerical simulation … out of the workflow,” he says, “and to move that to inference.”</p><p>Whatever the role of simulation will be, everyone in the field is adamant that human design engineers will continue to be in the driver’s seat, enabled by these newfangled tools  to do their best work. (After all, when has AI ever threatened to replace human labor?)</p><p>“What we’re seeing is that actually, these tools are empowering the engineers to be much more efficient,” von Tschammer says. “Before, these engineers would spend a lot of time on low-added-value tasks, whereas now these manual tasks from the past can be automated using these AI models, and the engineers can focus on taking the design decisions at the end of the day. 
We still need engineers more than ever.”</p>]]></description><pubDate>Thu, 09 Apr 2026 11:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/large-physics-models-design-engineering</guid><category>Physics-simulations</category><category>General-motors</category><category>Nvidia-gtc</category><category>Engineering-design</category><dc:creator>Dina Genkina</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/diagram-of-airflow-over-a-moving-sedan.jpg?id=65494121&amp;width=980"></media:content></item><item><title>Decentralized Training Can Help Solve AI’s Energy Woes</title><link>https://spectrum.ieee.org/decentralized-ai-training-2676670858</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-several-data-servers-interconnected-across-long-distances.jpg?id=65477795&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p> <a href="https://spectrum.ieee.org/topic/artificial-intelligence/" target="_self">Artificial intelligence</a> harbors an enormous <a href="https://spectrum.ieee.org/topic/energy/" target="_self">energy</a> appetite. Such constant cravings are evident in the <a href="https://spectrum.ieee.org/ai-index-2025" target="_self">hefty carbon footprint</a> of the <a href="https://spectrum.ieee.org/tag/data-centers" target="_self">data centers</a> behind the AI boom and the steady increase over time of <a href="https://spectrum.ieee.org/tag/carbon-emissions" target="_self">carbon emissions</a> from training frontier <a href="https://spectrum.ieee.org/tag/ai-models" target="_self">AI models</a>.</p><p>No wonder big tech companies are warming up to <a href="https://spectrum.ieee.org/tag/nuclear-energy" target="_self">nuclear energy</a>, envisioning a future fueled by reliable, carbon-free sources. But while <a href="https://spectrum.ieee.org/nuclear-powered-data-center" target="_self">nuclear-powered data centers</a> might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization.</p><p>Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where the energy is—be it a dormant server sitting in a research lab or a computer in a <a href="https://spectrum.ieee.org/tag/solar-power" target="_self">solar-powered</a> home. 
Instead of constructing more data centers that require <a href="https://spectrum.ieee.org/tag/power-grid" target="_self">electric grids</a> to scale up their infrastructure and capacity, decentralization harnesses energy from existing sources, avoiding adding more power into the mix.</p><h2>Hardware in harmony</h2><p>Training AI models is a huge data center sport, synchronized across clusters of closely connected <a href="https://spectrum.ieee.org/tag/gpus" target="_self">GPUs</a>. But as <a href="https://spectrum.ieee.org/mlperf-trends" target="_self">hardware improvements struggle to keep up</a> with the swift rise in the size of <a href="https://spectrum.ieee.org/tag/large-language-models" target="_self">large language models</a>, even massive single data centers are no longer cutting it.</p><p>Tech firms are turning to the pooled power of multiple data centers—no matter their location. <a href="https://spectrum.ieee.org/tag/nvidia" target="_self">Nvidia</a>, for instance, launched the <a href="https://developer.nvidia.com/blog/how-to-connect-distributed-data-centers-into-large-ai-factories-with-scale-across-networking/" target="_blank">Spectrum-XGS Ethernet for scale-across networking</a>, which “can deliver the performance needed for large-scale single job AI training and inference across geographically separated data centers.” Similarly, <a href="https://spectrum.ieee.org/tag/cisco" target="_self">Cisco</a> introduced its <a href="https://blogs.cisco.com/sp/the-new-benchmark-for-distributed-ai-networking" target="_blank">8223 router</a> designed to “connect geographically dispersed AI clusters.”</p><p>Other companies are harvesting idle compute in <a href="https://spectrum.ieee.org/tag/servers" target="_self">servers</a>, sparking the emergence of a <a href="https://spectrum.ieee.org/gpu-as-a-service" target="_self">GPU-as-a-Service</a> business model. 
Take <a href="https://akash.network/" rel="noopener noreferrer" target="_blank">Akash Network</a>, a peer-to-peer <a href="https://spectrum.ieee.org/tag/cloud-computing" target="_self">cloud computing</a> marketplace that bills itself as the “Airbnb for data centers.” Those with unused or underused GPUs in offices and smaller data centers register as providers, while those in need of computing power are tenants who can choose among providers and rent their GPUs.</p><p>“If you look at [AI] training today, it’s very dependent on the latest and greatest GPUs,” says Akash cofounder and CEO <a href="https://www.linkedin.com/in/gosuri" rel="noopener noreferrer" target="_blank">Greg Osuri</a>. “The world is transitioning, fortunately, from only relying on large, high-density GPUs to now considering smaller GPUs.”</p><h2>Software in sync</h2><p>In addition to orchestrating the <a href="https://spectrum.ieee.org/tag/hardware" target="_self">hardware</a>, decentralized AI training also requires algorithmic changes on the <a href="https://spectrum.ieee.org/tag/software" target="_self">software</a> side. This is where <a href="https://cloud.google.com/discover/what-is-federated-learning" rel="noopener noreferrer" target="_blank">federated learning</a>, a form of distributed <a href="https://spectrum.ieee.org/tag/machine-learning" target="_self">machine learning</a>, comes in.</p><p>It starts with an initial version of a global AI model housed in a trusted entity such as a central server. 
The server distributes the model to participating organizations, which train it locally on their data and share only the model weights with the trusted entity, explains <a href="https://www.csail.mit.edu/person/lalana-kagal" rel="noopener noreferrer" target="_blank">Lalana Kagal</a>, a principal research scientist at <a href="https://www.csail.mit.edu/" rel="noopener noreferrer" target="_blank">MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL)</a> who leads the <a href="https://www.csail.mit.edu/research/decentralized-information-group-dig" rel="noopener noreferrer" target="_blank">Decentralized Information Group</a>. The trusted entity then aggregates the weights, often by averaging them, integrates them into the global model, and sends the updated model back to the participants. This collaborative training cycle repeats until the model is considered fully trained.</p><p>But there are drawbacks to distributing both data and computation. The constant back-and-forth exchanges of model weights, for instance, result in high communication costs. Fault tolerance is another issue.</p><p>“A big thing about AI is that every training step is not fault-tolerant,” Osuri says. “That means if one node goes down, you have to restore the whole batch again.”</p><p>To overcome these hurdles, researchers at <a href="https://deepmind.google/" rel="noopener noreferrer" target="_blank">Google DeepMind</a> developed <a href="https://arxiv.org/abs/2311.08105" rel="noopener noreferrer" target="_blank">DiLoCo</a>, a distributed low-communication optimization <a href="https://spectrum.ieee.org/tag/algorithms" target="_self">algorithm</a>. 
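</p><p>The federated training cycle described above (distribute the model, train locally, share only the weights, average) can be sketched in a few lines. This is a minimal illustration of federated averaging, not any particular framework; the names are hypothetical, and local “training” is simulated as a single step toward each participant’s data mean:</p>

```python
# Minimal sketch of federated averaging. Raw data never leaves a
# participant; only model weights travel to the trusted aggregator.
# (Illustrative only: real local training would run an optimizer
# over private data for many steps.)

def local_train(weights, local_data, lr=0.5):
    # Simulated local training: nudge every weight toward this
    # participant's local data mean.
    target = sum(local_data) / len(local_data)
    return [w - lr * (w - target) for w in weights]

def aggregate(weight_sets):
    # Server step: average the weights shared by all participants.
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

def federated_round(global_weights, local_datasets):
    # One cycle: distribute the model, train locally, aggregate.
    updates = [local_train(global_weights, d) for d in local_datasets]
    return aggregate(updates)

local_datasets = [[1.0, 2.0], [3.0, 5.0], [2.0, 2.0]]  # stays on-site
weights = [0.0, 0.0]
for _ in range(10):  # repeat until the model is considered trained
    weights = federated_round(weights, local_datasets)
```

<p>After a few rounds, the averaged weights converge toward a consensus over the three local datasets, even though no raw data was ever centralized.</p><p>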
DiLoCo forms what <a href="https://spectrum.ieee.org/tag/google-deepmind" target="_self">Google DeepMind</a> research scientist <a href="https://arthurdouillard.com/" rel="noopener noreferrer" target="_blank">Arthur Douillard</a> calls “islands of compute,” where each island consists of a group of <a href="https://spectrum.ieee.org/tag/chips" target="_self">chips</a>. Each island can run a different chip type from the others, but chips within an island must all be the same type. Islands are decoupled from each other, and synchronizing knowledge between them happens only occasionally. This decoupling means islands can perform training steps independently without communicating as often, and individual chips can fail without interrupting the remaining healthy ones. However, the team’s experiments found diminishing performance after eight islands.</p><p>An improved version, dubbed <a href="https://arxiv.org/abs/2501.18512" rel="noopener noreferrer" target="_blank">Streaming DiLoCo</a>, further reduces the bandwidth requirement by synchronizing knowledge “in a streaming fashion across several steps and without stopping for communicating,” says Douillard. The mechanism is akin to watching a video even if it hasn’t been fully downloaded yet. “In Streaming DiLoCo, as you do computational work, the knowledge is being synchronized gradually in the background,” he adds.</p><p>AI development platform <a href="https://www.primeintellect.ai/" rel="noopener noreferrer" target="_blank">Prime Intellect</a> implemented a variant of the DiLoCo algorithm as a vital component of its 10-billion-parameter <a href="https://www.primeintellect.ai/blog/intellect-1-release" rel="noopener noreferrer" target="_blank">INTELLECT-1</a> model trained across five countries spanning three continents. 
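</p><p>The communication savings are easy to see in a toy version of this idea. The sketch below is a loose simplification, not DeepMind’s actual algorithm (which applies an outer optimizer to the averaged updates); each “island” takes many local steps on its own data and synchronizes only occasionally:</p>

```python
# Toy DiLoCo-style loop: islands train independently for many inner
# steps and exchange weights only at rare synchronization points, so
# the bandwidth needed between islands is a small fraction of what
# lockstep training would require. Values and update rule are
# illustrative only.

INNER_STEPS = 100  # local steps between synchronizations

def island_train(w, island_target, lr=0.01):
    # An island's independent training: many steps toward its own optimum.
    for _ in range(INNER_STEPS):
        w = w - lr * (w - island_target)
    return w

def synchronize(island_weights):
    # The rare cross-island exchange: average everyone's weights.
    return sum(island_weights) / len(island_weights)

w = 0.0
island_targets = [1.0, 2.0, 3.0]  # each island's local data optimum
for _ in range(5):                # 5 syncs instead of 500 exchanges
    w = synchronize([island_train(w, t) for t in island_targets])
```

<p>Because islands touch the network only at those synchronization points, a failed island can simply be dropped from one average without stalling the others.</p><p>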
Upping the ante, <a href="https://0g.ai/" rel="noopener noreferrer" target="_blank">0G Labs</a>, makers of a decentralized AI <a href="https://spectrum.ieee.org/tag/operating-system" target="_self">operating system</a>, <a href="https://0g.ai/blog/worlds-first-distributed-100b-parameter-ai" rel="noopener noreferrer" target="_blank">adapted DiLoCo to train a 107-billion-parameter foundation model</a> across a network of segregated clusters with limited bandwidth. Meanwhile, popular <a href="https://spectrum.ieee.org/tag/open-source" target="_self">open-source</a> <a href="https://spectrum.ieee.org/tag/deep-learning" target="_self">deep learning</a> framework <a href="https://pytorch.org/projects/pytorch/" rel="noopener noreferrer" target="_blank">PyTorch</a> included DiLoCo in its <a href="https://meta-pytorch.org/torchft/" rel="noopener noreferrer" target="_blank">repository of fault-tolerance techniques</a>.</p><p>“A lot of engineering has been done by the community to take our DiLoCo paper and integrate it in a system learning over consumer-grade internet,” Douillard says. “I’m very excited to see my research being useful.”</p><h2>A more energy-efficient way to train AI</h2><p>With hardware and software enhancements in place, decentralized AI training is primed to help solve AI’s energy problem. This approach offers the option of training models “in a cheaper, more resource-efficient, more energy-efficient way,” says MIT CSAIL’s Kagal.</p><p>And while Douillard admits that training methods like DiLoCo “are arguably more complex,” he notes that “they provide an interesting trade-off of system efficiency.” For instance, you can now use data centers in far-apart locations without needing to build ultrafast connections between them. 
Douillard adds that fault tolerance is baked in because “the blast radius of a chip failing is limited to its island of compute.”</p><p>Even better, companies can take advantage of existing underutilized processing capacity rather than continuously building new energy-hungry data centers. Betting big on such an opportunity, Akash created its <a href="https://www.youtube.com/watch?v=zAj41xSNPeI" rel="noopener noreferrer" target="_blank">Starcluster program</a>. One of the program’s aims involves tapping into solar-powered homes and employing the desktops and laptops within them to train AI models. “We want to convert your home into a fully functional data center,” Osuri says.</p><p>Osuri acknowledges that participating in Starcluster will not be trivial. Beyond solar panels and devices equipped with consumer-grade GPUs, participants would also need to invest in <a href="https://spectrum.ieee.org/tag/batteries" target="_self">batteries</a> for backup power and redundant internet to prevent downtime. The Starcluster program is figuring out ways to package all these aspects together and make it easier for homeowners, including collaborating with industry partners to subsidize battery costs.</p><p>Back-end work is already underway to enable <a href="https://akash.network/roadmap/aep-60/" rel="noopener noreferrer" target="_blank">homes to participate as providers in the Akash Network</a>, and the team hopes to reach its target by 2027. The Starcluster program also envisions expanding into other solar-powered locations, such as schools and local community sites.</p><p>Decentralized AI training holds much promise to steer AI toward a more environmentally sustainable future. 
For Osuri, such potential lies in moving AI “to where the energy is instead of moving the energy to where AI is.”</p>]]></description><pubDate>Tue, 07 Apr 2026 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/decentralized-ai-training-2676670858</guid><category>Training</category><category>Ai-energy</category><category>Data-center</category><category>Large-language-models</category><dc:creator>Rina Diane Caballar</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-of-several-data-servers-interconnected-across-long-distances.jpg?id=65477795&amp;width=980"></media:content></item><item><title>ENIAC’s Architects Wove Stories Through Computing</title><link>https://spectrum.ieee.org/eniac-80th-anniversary-weaving</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/close-up-black-and-white-1940-s-image-of-a-woman-holding-a-metallic-brick-like-controller-with-large-knobs.jpg?id=65453792&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p><em><em>This year marks the </em></em><a href="https://spectrum.ieee.org/eniac-80-ieee-milestone" target="_self"><em><em>80th anniversary of ENIAC</em></em></a><em><em>, the first general-purpose digital computer. The computer was built during World War II to speed up ballistics calculations, but its contributions to computing extend well beyond military applications.</em></em></p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/eniac-80th-anniversary-weaving&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><p><em><em>Two of ENIAC’s key architects—John W. Mauchly, its co-inventor, and Kathleen “Kay” McNulty, one of the <a href="https://spectrum.ieee.org/eniac-woman-programmers" target="_blank">six original programmers</a>—married a few years after its completion and raised seven children together. Mauchly and McNulty’s grandchild Naomi Most </em></em><a href="https://youtu.be/XYEVmqGhVxo?si=fseDLKFz1W8meWR6&t=4515" rel="noopener noreferrer" target="_blank"><em><em>delivered a talk</em></em></a><em><em> as part of a celebration in honor of ENIAC’s anniversary on 15 February, which was held online and in-person at the American Helicopter Museum in West Chester, Pa. 
The following is adapted from that presentation.</em></em></p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/eniac-80-ieee-milestone" target="_blank">ENIAC, the First General-Purpose Digital Computer, Turns 80</a></p><p>There was a library at my grandparents’ farmhouse that felt like it went on forever. September light through the windows, beech leaves rustling outside on the stone porch, the sounds of cousins and aunts and uncles somewhere in the house. And in the corner of that library, an IBM personal computer.</p><p>When I spent summers there as a child, I didn’t yet know that the computer was closely tied to my family’s story.</p><p>My grandparents are known for their contributions to creating the Electronic Numerical Integrator and Computer, or ENIAC. But both were interested in more than just crunching numbers: My grandfather wanted to predict the weather. My grandmother wanted to be a good storyteller. </p><p>In Irish, the first language my grandmother Kathleen “Kay” McNulty ever spoke, a word existed to describe both of these impulses: <em><em>ríomh</em></em>.</p><p>I began to learn the Irish language myself five years ago, and I was struck by how certain words and phrases had multiple meanings. According to renowned Irish cultural historian Manchán Magan—from whom I took lessons—the word <em><em>ríomh</em></em> has at different times been used to mean to compute, but also <a href="https://www.making.ie/stories/irish-words-weaving" rel="noopener noreferrer" target="_blank">to weave, to narrate, or to compose a poem</a>. That one word can tell the story of ENIAC, a machine with wires woven like thread that was built to compute, make predictions, and search for a signal in the noise. 
</p><h2>John Mauchly’s Weather-Prediction Ambitions</h2><p>Before working on ENIAC, John Mauchly <a href="https://fi.edu/en/news/case-files-john-w-mauchly-and-j-presper-eckert" rel="noopener noreferrer" target="_blank">spent years collecting rainfall data</a> across the United States. His favorite pastime was meteorology, and he wanted to find patterns in storm systems to predict the weather.</p><p>The Army, however, funded ENIAC to make simpler predictions: calculating ballistic trajectory tables. Start there, co-inventors J. Presper Eckert and Mauchly realized, and perhaps the weather would soon be computable.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Black and white 1960s image of two white men in suits looking at a wall of computer controls." class="rm-shortcode" data-rm-shortcode-id="7872d50df109149c936e400909defc38" data-rm-shortcode-name="rebelmouse-image" id="75108" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-1960s-image-of-two-white-men-in-suits-looking-at-a-wall-of-computer-controls.jpg?id=65428294&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Co-inventors John Mauchly [left] and J. Presper Eckert look at a portion of ENIAC on 25 November 1966. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Hulton Archive/Getty Images</small></p><p>Weather is a system unfolding through time, and a model of a storm is a story about how that system might unfold. There’s an old Irish saying related to this idea: <a href="https://daltai.com/is-maith-an-scealai-an-aimsir/" target="_blank"><em><em>Is maith an scéalaí an aimsir</em></em></a><em><em>.</em></em> Literally, “weather is a good storyteller.” But <em><em>aimsir</em></em> also means time. 
So the usual translation of this phrase into English becomes “time will tell.”</p><p>Mauchly wanted to <em><em>ríomh an aimsire</em></em>—to weave the weather into pattern, to compute the storm, to narrate the chaos. He realized that complex systems don’t reveal their full purpose at conception. They reveal it through <em><em>aimsir</em></em>—through weather, through time, through use.</p><h2>ENIAC’s First Programmers Were Weavers</h2><p>Kathleen “Kay” McNulty was born on 12 February 1921, in Creeslough, Ireland, on the night <a href="https://en.wikipedia.org/wiki/James_McNulty_(Irish_activist)" target="_blank">her father</a>—an IRA training officer—was arrested and imprisoned in Derry Gaol.</p><p>Family oral history holds that her people were weavers. She spoke only Irish until her family reached Philadelphia when she was 4 years old, entering American school the following year knowing virtually no English. She graduated in 1942 from Chestnut Hill College with a mathematics degree, was recruited to compute artillery firing tables by hand for the U.S. Army, and was then selected—along with <a href="https://spectrum.ieee.org/the-women-behind-eniac" target="_blank">five other women</a>—to program ENIAC.</p><p>They had no manual. They had only blueprints.</p><p>McNulty and her colleagues learned ENIAC and its quirks the way you learn a loom: by touch, by memory, by routing threads of electricity into patterns. They developed embodied knowledge the designers could only approximate. They could narrow a malfunction to a specific failed vacuum tube before any technician could locate it.</p><p>McNulty and Mauchly are also credited with conceiving the subroutine, the sequence of instructions that can be repeatedly recalled to perform a task, now essential in any programming. The subroutine was not in ENIAC’s blueprints, nor in the funding proposal. 
The concept emerged as highly determined people extended their imagination into the machine’s affordances.</p><p>The engineers designed the loom. Weavers discovered its true capabilities.</p><p>In 1950, four years after ENIAC was switched on, Mauchly’s dream was realized as it was used in the <a href="https://www.guinnessworldrecords.com/world-records/775520-first-computer-assisted-weather-forecast" target="_blank">world’s first computer-assisted weather forecast</a>. That was made possible after Klara von Neumann and Nick Metropolis reassembled and upgraded the ENIAC with a small amount of digital program memory. The programmers who transformed the math into operational code for the ENIAC were Norma Gilbarg, Ellen-Kristine Eliassen, and Margaret Smagorinsky. Their names are not as well-known as they should be.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Black and white 1940s image of three women operating a differential analyser in a basement." class="rm-shortcode" data-rm-shortcode-id="298168a77d38fd343eeb7d4bbfc219a7" data-rm-shortcode-name="rebelmouse-image" id="aacec" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-1940s-image-of-three-women-operating-a-differential-analyser-in-a-basement.jpg?id=65453828&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Before programming ENIAC, Kay McNulty [left] was recruited by the U.S. Army to compute artillery firing tables. Here, she and two other women, Alyse Snyder [center] and Sis Stump, operate a mechanical analog computer designed to solve differential equations in the basement of the University of Pennsylvania’s Moore School of Electrical Engineering.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">University of Pennsylvania</small></p><h2>Kay McNulty, Family Storyteller</h2><p>Kay married John Mauchly in 1948, describing him as “the greatest delight of my life. 
He was so intelligent and had so many ideas.... He was not only lovable, he was loving.” She spent the rest of her life ensuring he, Eckert, and the ENIAC programmers would be recognized.</p><p>When she died in 2006, I came to her funeral in shock, not fully knowing what I’d lost. As she drifted away, it was said, she had been reciting her prayers in Irish. Word of this quickly made its way to Creeslough, in County Donegal, and awaited me when I visited to honor her memory with the <a href="https://www.youtube.com/watch?v=zbkk2RJMW9g" target="_blank">dedication of a plaque</a> right there in the center of town.</p><p>In <a href="https://mathshistory.st-andrews.ac.uk/Extras/Mauchly_Antonelli_story" target="_blank">her own memoir</a>, she wrote: “If I am remembered at all, I would like to be remembered as my family storyteller.”</p><p>In Irish, the word for computer is <em><em>ríomhaire</em></em>. One who ríomhs. One who weaves, computes, and tells. My grandfather wanted to tell the story of the weather through computing. My grandmother wanted to be remembered as a storyteller. The language of her childhood already had a word that contained both of those ambitions.</p><h2>Computers as Narrative Engines</h2><p>When it was built, ENIAC looked like the back room of a textile production house. Panels. Switchboards. A room full of wires. Thread.</p><p>Thread does not tell you what it will become. We tend to think of computing as calculation—discrete and deterministic. But a model is a structured story about how something behaves.</p><p>Weather models, ballistic tables, economic forecasts, neural networks: These are all narrative engines, systems that take raw inputs and produce accounts of how the world might unfold. In complex systems, when parts are woven together through use, new structures arise that no one specified in advance.</p><p>Like ENIAC, the machines we are building now—the large models, the autonomous systems—are not merely calculators. 
They are looms.</p><p>Their most important properties will not be specified in advance. They will emerge through use, through the people who learn how to weave with them.</p><p>Through imagination.</p><p>Through <em><em>aimsir</em></em>.</p>]]></description><pubDate>Fri, 03 Apr 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/eniac-80th-anniversary-weaving</guid><category>Eniac</category><category>Weather-prediction</category><category>Computer-history</category><category>Ireland</category><dc:creator>Naomi Most</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/close-up-black-and-white-1940-s-image-of-a-woman-holding-a-metallic-brick-like-controller-with-large-knobs.jpg?id=65453792&amp;width=980"></media:content></item><item><title>The AI Data Centers That Fit on a Truck</title><link>https://spectrum.ieee.org/modular-data-center</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/overhead-view-of-two-data-center-pods-each-measuring-55-feet-long-by-12-5-feet-wide.jpg?id=65417343&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p>A <a data-linked-post="2676577917" href="https://spectrum.ieee.org/5gw-data-center" target="_blank">traditional</a> data center protects the expensive hardware inside it with a “shell” constructed from steel and concrete. Constructing a data center’s shell is inexpensive compared to the cost of the hardware and infrastructure inside it, but it’s not trivial. It takes time for engineers to consider potential sites, apply for permits, and coordinate with construction contractors.</p><p>That’s a problem for those looking to quickly deploy AI hardware, which has led companies like <a href="https://duosedge.ai/home" target="_blank">Duos Edge AI</a> and <a href="https://www.lgcns.com/en" target="_blank">LG CNS</a> to respond with a more modular approach. They use pre-fabricated, self-contained boxes that can be deployed in months instead of years. The boxes can operate alone or in tandem with others, providing the option to add more if required.</p><p>“I just came back from Nvidia’s GTC, and a lot of [companies] are sitting on their deployment because their data centers aren’t ready, or they can’t find the space,” said <a href="https://www.linkedin.com/in/doug-recker/" rel="noopener noreferrer" target="_blank">Doug Recker</a>, CEO of Duos Edge AI. “We see the demand there, and we can deploy faster.” </p><h2>GPUs shipped straight to you</h2><p>Duos Edge AI’s modular compute pods are 55 feet long and 12.5 feet wide. Though they look similar to a shipping container, they’re actually a bit larger and designed primarily for transportation by truck. Each compute pod contains racks of GPUs much like those used in other data centers. 
Duos recently <a href="https://ir.duostechnologies.com/news-events/press-releases/detail/830/duos-technologies-group-executes-definitive-agreement-with" target="_blank">entered</a> a deal with AI infrastructure company Hydra Host to deploy four pods with 576 GPUs per pod. That’s a total of 2,304 GPUs, with the option to later double the deployment to 4,608 GPUs. </p><p>Modular data centers aren’t new for Duos; the company previously deployed edge data centers for rural customers, <a href="https://spectrum.ieee.org/rural-data-centers" target="_self">such as the Amarillo, Texas, school district</a>. However, the pods for the Hydra Host deployment will be upgraded to handle more intense AI workloads. They’ll contain more racks, draw more power, and use liquid cooling to keep the GPUs running efficiently. <br/><br/>Across the Pacific, Korean technology giant LG is taking a similar approach. The company’s CNS subsidiary, which provides IT infrastructure and services, <a href="https://www.koreatimes.co.kr/business/tech-science/20260305/lg-cns-unveils-container-based-ai-box-for-rapid-ai-data-center-expansion">has announced the AI Modular Data Center, which</a>, like the Duos unit, contains racks of GPUs and supporting hardware in a pre-fabricated enclosure.</p><p>Also like Duos’ deployment, LG’s AI Modular Data Center contains 576 Nvidia GPUs with the option to scale up in the future. “We are currently developing an expanded version that can support more than 4,600 GPUs within a single unit, with a service launch planned within this year,” said <a href="https://www.linkedin.com/in/heonhyeock-cho-29427b147/?originalSubdomain=kr" rel="noopener noreferrer" target="_blank">Heon Hyeock Cho</a>, vice president and head of the data center business unit at LG CNS. LG’s first Modular Data Center will roll out in the South Korean port city of Busan, where it could deploy up to 50 units.</p><p>LG and Duos are not alone. 
<a href="https://www.hpe.com/us/en/services/ai-mod-pod.html" rel="noopener noreferrer" target="_blank">Hewlett Packard Enterprise,</a> <a href="https://www.vertiv.com/en-emea/solutions/vertiv-modular-solutions/?utm_source=press-release&utm_medium=public-relations&utm_campaign=hpc-ai&utm_content=en-coolchip" rel="noopener noreferrer" target="_blank">Vertiv</a>, and <a href="https://www.se.com/ww/en/work/solutions/data-centers-and-networks/modular-data-center/" rel="noopener noreferrer" target="_blank">Schneider Electric</a> now have modular data centers available or in development. A <a href="https://www.grandviewresearch.com/industry-analysis/modular-data-center-market-report" target="_blank">report</a> from market research firm <a href="https://www.grandviewresearch.com/" target="_blank">Grand View Research</a> estimates that the market for modular data centers could more than double by 2030.</p><h2>On the grid, but under the radar</h2><p>A modular data center site is quite different from a traditional data center because there’s no need to construct a large steel-and-concrete shell. Instead, the site can be made ready by pouring a concrete pad. The pre-fabricated modules are delivered by truck, placed on the pad where desired, and then networked on-site.<br/><br/>Duos’ deployments, for instance, include power modules placed alongside the compute pods, and the pods are networked together with redundant fiber connections that allow the pods to operate in unison. Recker compared it to lining up school buses in a parking lot. “Everything is built off-site at a factory, and we can put it together like a jigsaw puzzle,” he said.</p><p>That simplicity is the point. Both Duos and LG CNS expect a modular data center can be deployed in about six months, compared to the roughly two or three years a conventional data center requires. Recker said that, for Duos, the turnaround is so quick that building the pre-fabricated unit isn’t always the constraint. 
While it’s possible to construct a pre-fabricated unit in 60 or 90 days, site preparation extends the timeline “because you can’t get the permits that fast.”</p><p>Modular data centers may also provide good value. Recker said a 5-megawatt modular deployment can be built for about $25 million, and that Duos’ cost per megawatt is roughly half what larger facilities charge. For Duos, savings are possible in part because its modular data centers can target smaller deployments where the permitting is less complex. Smaller, modular deployments also meet less resistance from local governments, which are increasingly skeptical about data center construction. </p><p>While Duos targets smaller deployments, LG hopes to go big. Its planned Busan campus of 50 AI Modular Data Centers suggests an ambition to achieve deployments that rival the capacity of conventional facilities. A site with 50 units would bring the total number of GPUs to over 28,000. Here, the benefits of a modular approach could stem mostly from scalability, as a modular data center could start small and grow as required.</p><p>“By adopting a modular approach, the AI Modular Data Center can be incrementally expanded through the combination of dozens of AI Boxes,” Cho said. “It’s enabling the construction of even hyperscale-level AI data centers.”</p>]]></description><pubDate>Mon, 30 Mar 2026 14:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/modular-data-center</guid><category>Data-center</category><category>Networking</category><category>Liquid-cooling</category><category>Ai</category><dc:creator>Matthew S. Smith</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/overhead-view-of-two-data-center-pods-each-measuring-55-feet-long-by-12-5-feet-wide.jpg?id=65417343&amp;width=980"></media:content></item><item><title>Facial Recognition Is Spreading Everywhere</title><link>https://spectrum.ieee.org/facial-recognition-gone-wrong</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-34-orange-women-icons-1-blue-man-icon-labels-for-skin-tone-and-gender-comparisons.jpg?id=65407585&width=1245&height=700&coordinates=0%2C116%2C0%2C117"/><br/><br/><p>Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—<a href="https://spectrum.ieee.org/china-facial-recognition" target="_blank">and menacing</a>—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.</p><p>Yet the story those photos can tell inevitably has errors. FRT makers, like those of any diagnostic technology, must balance two types of errors: false positives and false negatives. There are three possible outcomes.</p><div class="ieee-sidebar-medium"><h3>Three Possible Outcomes</h3><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="White figures and an orange hooded figure, focusing on the hooded figure in a split design." class="rm-shortcode" data-rm-shortcode-id="8a762ebf2761a791f12500ed10596cc3" data-rm-shortcode-name="rebelmouse-image" id="f4d64" loading="lazy" src="https://spectrum.ieee.org/media-library/white-figures-and-an-orange-hooded-figure-focusing-on-the-hooded-figure-in-a-split-design.png?id=65407894&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">a) identifies the suspect, since the two images are of the same person, according to the software. Success!</small></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Abstract figures: orange hoodie enlarged, white, yellow, and orange on left, black background." 
class="rm-shortcode" data-rm-shortcode-id="3d130b8e4c73ee49898645524cecd1f6" data-rm-shortcode-name="rebelmouse-image" id="30881" loading="lazy" src="https://spectrum.ieee.org/media-library/abstract-figures-orange-hoodie-enlarged-white-yellow-and-orange-on-left-black-background.png?id=65407867&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">b) matches another person in the footage with the suspect’s probe image. A false positive, coupled with sloppy verification, could put the wrong person behind bars and let the real criminal escape justice.</small></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Three white icons and one orange hoodie icon on left, large orange hoodie icon on right." class="rm-shortcode" data-rm-shortcode-id="4cdaa23680c5144a5c284fcd8cb6f3df" data-rm-shortcode-name="rebelmouse-image" id="fbc8f" loading="lazy" src="https://spectrum.ieee.org/media-library/three-white-icons-and-one-orange-hoodie-icon-on-left-large-orange-hoodie-icon-on-right.png?id=65407858&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">c) fails to find a match at all. The suspect may be evading cameras, but if the cameras have only low-light or bad-angle images, this is a false negative. This type of error might let a suspect off and raise the cost of the manhunt.</small></p></div><p>In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are <a href="https://face.nist.gov/frte/reportcards/11/clearviewai_003.html" target="_blank">around two in 1,000 and false positives are less than one in 1 million</a>.</p><p>In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. 
Let’s say that police are searching for a suspect, and they’re comparing an image taken with a security camera with a previous “mug shot” of the suspect.</p><p>Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The <a href="https://assets.publishing.service.gov.uk/media/693002a4cdec734f4dff4149/1a_Cognitec_NPL_Equitability_Report_October_25.pdf" target="_blank">United Kingdom estimated</a> that its FRT exposed some groups, such as women and darker-skinned people, to risks of misidentification as high as two orders of magnitude greater than it did to others.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Five faces arranged left to right, from easy to hard to recognize." class="rm-shortcode" data-rm-shortcode-id="ce19d3eb3745de15489274ebe5083f06" data-rm-shortcode-name="rebelmouse-image" id="3ab1e" loading="lazy" src="https://spectrum.ieee.org/media-library/five-faces-arranged-left-to-right-from-easy-to-hard-to-recognize.png?id=65407777&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Less clear photographs are harder for FRT to process.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p>What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.</p><div class="ieee-sidebar-medium"><h3>Facial Recognition Gone Wrong</h3><p><strong>THE NEGATIVES OF FALSE POSITIVES</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Detroit Police SUV with American flag decal on side under bright sunlight." 
class="rm-shortcode" data-rm-shortcode-id="1a424f342f44dff48e8b6b05c79f5032" data-rm-shortcode-name="rebelmouse-image" id="c102c" loading="lazy" src="https://spectrum.ieee.org/media-library/detroit-police-suv-with-american-flag-decal-on-side-under-bright-sunlight.png?id=65407650&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2020: <a href="https://quadrangle.michigan.law.umich.edu/issues/winter-2024-2025/flawed-facial-recognition-technology-leads-wrongful-arrest-and-historic" target="_blank">Robert Williams’s wrongful arrest</a> led to his detention. The ensuing settlement requires Detroit police to enact policies that recognize FRT’s limits. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p><strong>ALGORITHMIC BIAS</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt='Red sign reads "Security cameras in use" with camera graphic.' class="rm-shortcode" data-rm-shortcode-id="014ac05f2fe587ca01643c64c750e331" data-rm-shortcode-name="rebelmouse-image" id="f4f1f" loading="lazy" src="https://spectrum.ieee.org/media-library/red-sign-reads-security-cameras-in-use-with-camera-graphic.png?id=65407620&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2023: <a href="https://incidentdatabase.ai/cite/619/" target="_blank">Court bans Rite Aid from using facial recognition for five years</a> over its use of a racially biased algorithm. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p><strong>TOO FAST, TOO FURIOUS?</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Back of ICE officer in tactical gear facing a house." 
class="rm-shortcode" data-rm-shortcode-id="0004b023a075c21698cdf88cfd0b4106" data-rm-shortcode-name="rebelmouse-image" id="889f9" loading="lazy" src="https://spectrum.ieee.org/media-library/back-of-ice-officer-in-tactical-gear-facing-a-house.png?id=65407619&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2026: U.S. immigration agents <a href="https://www.404media.co/ices-facial-recognition-app-misidentified-a-woman-twice/" target="_blank">misidentify a woman they’d detained as two different women</a>. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">VICTOR J. BLUE/BLOOMBERG/GETTY IMAGES </small></p></div><p><span>Consider a busy trade fair using FRT to check attendees against a database, or gallery, of images of the 10,000 registrants. Even at 99.9 percent accuracy, you’ll get about a dozen false positives or negatives, which may be worth the trade-off to the fair organizers. But if police start using something like that across a city of 1 million people, the number of potential victims of mistaken identity rises, as do the stakes.</span></p><p><span>What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement <a href="https://illinoisattorneygeneral.gov/News-Room/Current-News/001%20-%20Complaint%201.12.26.pdf?language_id=1" target="_blank">agents have done since June 2025</a>, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months. 
The size of the potential gallery is at least <a href="https://sam.gov/opp/b016354c5bd045fa92e4886878747dc8/view" target="_blank">1.2 billion images</a>.</span></p><p><span>At that size, assuming even best-case images, the system is likely to return around 1 million false matches, but at a rate at least 10 times as high for darker-skinned people, depending on the subgroup.</span></p><p>Responsible use of this powerful technology would involve independent identity checks, multiple sources of data, and a clear understanding of the error thresholds, says computer scientist <a href="https://www.cics.umass.edu/about/directory/erik-learned-miller" target="_blank">Erik Learned-Miller</a> of the University of Massachusetts Amherst: “<a href="https://spectrum.ieee.org/joy-buolamwini" target="_blank">The care we take</a> in deploying such systems should be proportional to the stakes.”</p>]]></description><pubDate>Mon, 30 Mar 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/facial-recognition-gone-wrong</guid><category>Facial-recognition</category><category>Privacy</category><category>Surveillance</category><category>Machine-vision</category><category>Computer-vision</category><dc:creator>Lucas Laursen</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-34-orange-women-icons-1-blue-man-icon-labels-for-skin-tone-and-gender-comparisons.jpg?id=65407585&amp;width=980"></media:content></item><item><title>NYU’s Quantum Institute Bridges Science and Application</title><link>https://spectrum.ieee.org/nyu-quantum-institute</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/person-in-white-suit-working-with-semiconductor-equipment-in-a-lab.jpg?id=65322091&width=1245&height=700&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><em>This sponsored article is brought to you by <a href="https://engineering.nyu.edu/" rel="noopener noreferrer" target="_blank">NYU Tandon School of Engineering</a>.</em></p><p>Within a 6-mile radius of New York University’s (NYU) campus, there are more than 500 tech industry giants, banks, and hospitals. This isn’t just a fact about real estate; it’s the foundation for advancing quantum discovery and application.</p><p>While the world races to harness quantum technology, NYU is betting that the ultimate advantage lies not solely in a lab, but in the dense, demanding, and hyper-connected urban ecosystem that surrounds it. With the launch of its <a href="https://www.nyu.edu/about/news-publications/news/2025/october/nyu-launches-quantum-institute-.html" rel="noopener noreferrer" target="_blank"><span>NYU Quantum Institute</span></a> (NYUQI), NYU is positioning itself as <a href="https://www.nyu.edu/about/news-publications/news/2025/october/top-quantum-scientists-convene-at-nyu.html" target="_blank">the central node</a> in this network: a “full stack” powerhouse built on the conviction that it has found the right place, and the right time, to turn quantum science into tangible reality.</p><p>Proximity advantage is essential because quantum science demands it. Globally, the quest for practical quantum solutions — whether for computing, sensing, or secure communications — has been stalled, in part, by fragmentation. Physicists and chemical engineers invent new materials, computer scientists develop new algorithms, and electrical engineers build new devices, but all three often work in isolated academic silos.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Three men pose at the 4th Annual NYC Quantum Summit 2025; attendees converse in the background." 
class="rm-shortcode" data-rm-shortcode-id="1dd6dfe45b73630bb9040545fcdfae7d" data-rm-shortcode-name="rebelmouse-image" id="33e2d" loading="lazy" src="https://spectrum.ieee.org/media-library/three-men-pose-at-the-4th-annual-nyc-quantum-summit-2025-attendees-converse-in-the-background.jpg?id=65322345&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Gregory Gabadadze, NYU’s dean for science, NYU physicist and Quantum Institute Director Javad Shabani, and Juan de Pablo, Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and executive dean of the Tandon School of Engineering.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Veselin Cuparić/NYU</small></p><p><span>NYUQI’s premise is that breakthroughs happen “at the interfaces between different domains,” according to </span><a href="https://engineering.nyu.edu/faculty/juan-de-pablo" target="_blank"><span>Juan de Pablo</span></a><span>, Executive Vice President for Global Science and Technology at NYU and Executive Dean of the NYU Tandon School of Engineering. The Institute is built to actively force those necessary collisions — to integrate the physicists, engineers, materials scientists, computer scientists, biologists, and chemists vital to quantum research into one holistic operation. This institutional design ensures that the hardware built by one team can be immediately tested by software developed by another, accelerating progress in a way that isolated departments never could.</span></p><p class="pull-quote"><span>NYUQI’s premise is that breakthroughs happen at the interfaces between different domains. <strong>—Juan de Pablo, NYU Tandon School of Engineering</strong></span></p><p>NYUQI’s integrated vision is backed by a massive physical commitment to the city. 
The NYUQI is not just a theoretical concept; its collaborators will be housed in a renovated, <a href="https://www.nyu.edu/about/news-publications/news/2025/may/nyu-entering-long-term-lease-at-770-broadway.html" target="_blank"><span>million-square-foot facility</span></a> in the heart of Manhattan’s West Village, backed by a state-of-the-art <a href="https://engineering.nyu.edu/research/nanofab" target="_blank">Nanofabrication Cleanroom</a> in Brooklyn serving as a high-tech foundry. This is where theoretical designs become physical devices, allowing the Institute to test and refine the process from materials science to deployment.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt='NYU building exterior with "Science + Tech" signage, flags, and a passing yellow taxi.' class="rm-shortcode" data-rm-shortcode-id="605cc71d844927d3fb0a05fb086fedcf" data-rm-shortcode-name="rebelmouse-image" id="bceaa" loading="lazy" src="https://spectrum.ieee.org/media-library/nyu-building-exterior-with-science-tech-signage-flags-and-a-passing-yellow-taxi.jpg?id=65322352&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">NYUQI will be housed in a renovated, million-square-foot facility in the heart of Manhattan’s West Village.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Tracey Friedman/NYU</small></p><p><span>Leading this effort is NYUQI Director </span><a href="https://as.nyu.edu/faculty/javad-shabani.html" target="_blank"><span>Javad Shabani</span></a><span>, who, along with the other members, is turning the Institute into a hub for collaboration with private and public sector partners with quantum challenges that need solving. 
As de Pablo explains, “Anybody who wants to work on quantum with NYU, you come in through that door, and we’ll send you to the right place.” For New York’s vast ecosystem of tech giants and financial institutions, the NYUQI offers a resource they can’t build on their own: a cohesive team of experts in quantum phenomena, quantum information theory, communication, computing, materials, and optics, and a structured path to applying theoretical discoveries to advanced quantum technologies.</span></p><h2>Solving the Challenge of Quantum Research</h2><p><span>The NYUQI’s integrated structure is less about organizational management and more about scientific necessity. </span><span>The challenge of quantum is that the hardware, the software, and the programming are inherently interconnected — each must be designed to work with the others. To solve this, the Institute focuses on three applications of quantum science: Quantum Computing, Quantum Sensing, and Quantum Communications.</span></p><p>For Shabani, this means creating an integrated environment that bridges discovery with experimentation, extending from the physical components all the way to quantum algorithm centers. That will include a fabrication facility in the new building in Manhattan, as well as the <a href="https://engineering.nyu.edu/news/chips-and-science-act-spurs-nanofab-cleanroom-ribbon-cutting-nyu-tandon-school-engineering" target="_blank"><span>NYU Nanofab</span></a> in Brooklyn directed by Davood Shahjerdi. 
New York Senators Charles Schumer and Kirsten Gillibrand recently secured <a href="https://www.nyu.edu/about/news-publications/news/2026/february/nyu-receives--1-million-in-funding-from-senators-schumer-and-gil.html" target="_blank">$1 million in congressionally directed spending</a> to bring Thermal Laser Epitaxy (TLE) technology — which allows for atomic-level purity, minimal defects, and streamlined application of a diverse range of quantum materials — to NYU, marking the first time the equipment will be used in the U.S.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two people hold semiconductor wafers during a presentation with audience taking photos." class="rm-shortcode" data-rm-shortcode-id="1a0dbca6c6bb8fb7dbf4d399689b2922" data-rm-shortcode-name="rebelmouse-image" id="d434c" loading="lazy" src="https://spectrum.ieee.org/media-library/two-people-hold-semiconductor-wafers-during-a-presentation-with-audience-taking-photos.jpg?id=65322354&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">NYU Nanofab manager Smiti Bhattacharya and Nanofab Director Davood Shahjerdi at the nanofab ribbon-cutting in 2023. 
The nanofab is the first academic cleanroom in Brooklyn and serves as a prototyping facility for the NORDTECH Microelectronics Commons consortium.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">NYU WIRELESS</small></p><p>Tight control over fabrication allows researchers to pivot quickly when a breakthrough in one area — say, finding a cheaper, more reliable material like silicon carbide — can be explored for use across all three applications. It also gives academics and the private sector alike access to sophisticated pieces of specialty equipment whose maintenance costs and expertise make them all but impossible to operate without the right staffing and environment.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="3D model of a laboratory layout, highlighting the Yellow Room in bright yellow." class="rm-shortcode" data-rm-shortcode-id="e7c1128703d96de919ed2ce440a97416" data-rm-shortcode-name="rebelmouse-image" id="62d58" loading="lazy" src="https://spectrum.ieee.org/media-library/3d-model-of-a-laboratory-layout-highlighting-the-yellow-room-in-bright-yellow.png?id=65322596&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The NYU Nanofab is Brooklyn’s first academic cleanroom, with a strategic focus on superconducting quantum technologies, advanced semiconductor electronics, and devices built from quantum heterostructures and other next-generation materials.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">NYU Nanofab</small></p><p><span>That speed and adaptability is the NYUQI’s competitive edge. 
It turns fragmented challenges into holistic solutions, positioning the Institute to solve real-world problems for its New York neighbors—from highly secure data transmission to next-generation drug discovery.</span></p><h2>Testing Quantum Communication in NYC</h2><p>The integrated approach also makes the NYUQI a testbed for the most critical near-term applications. Take Quantum Communications, which is essential for creating an “unhackable” quantum internet. In an industry first, NYU worked with the quantum start-up Qunnect to <a href="https://www.nyu.edu/about/news-publications/news/2023/september/nyu-takes-quantum-step-in-establishing-cutting-edge-tech-hub-in-.html" target="_blank"><span>send quantum information through standard telecom fiber</span></a> in New York City between Manhattan and Brooklyn over a 10-mile quantum networking link. Instead of simulating communication challenges in a lab, the NYUQI team is already leveraging NYU’s city-wide campus by utilizing existing infrastructure to test secure quantum transmission between Manhattan and Brooklyn. </p><p class="pull-quote">The NYUQI team is already leveraging NYU’s city-wide campus by utilizing existing infrastructure to test secure quantum transmission between Manhattan and Brooklyn.</p><p>This isn’t just theory; it is building a functioning prototype in the most demanding, dense urban environment in the world. Real-time, real-world deployment is a critical component missing in other isolated institutions. When the NYUQI achieves results, the technology will be that much more readily available to the massive financial, tech, and communications organizations operating right outside its door.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Scientist in protective gear working in a laboratory with samples." 
class="rm-shortcode" data-rm-shortcode-id="d644b791788af64769a853d0516834e6" data-rm-shortcode-name="rebelmouse-image" id="dc2fb" loading="lazy" src="https://spectrum.ieee.org/media-library/scientist-in-protective-gear-working-in-a-laboratory-with-samples.jpg?id=65322378&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">NYUQI includes a state-of-the-art Nanofabrication Cleanroom in Brooklyn serving as a high-tech foundry.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">NYU Tandon</small></p><p><span>While the Institute has built the physical infrastructure and designed the necessary scientific architecture, its enduring contribution will be the specialized workforce it creates for the new quantum economy. This addresses the market’s greatest deficit: a lack of individuals trained not just in physics, but in the integrated, full-stack approach that quantum demands.</span></p><p>By creating a pipeline of 100 to 200 graduate and doctoral students who are encouraged to collaborate across Computing, Sensing, and Communications, the NYUQI is narrowing the skills gap. These will be future leaders who can speak the language of the physicist, the materials scientist, and the engineer simultaneously. This commitment to interdisciplinary talent is also fueled by the launch of the new Master of Science in Quantum Science & Technology program at NYU Tandon, positioning the university among a select group worldwide offering such a specialized degree.</p><p>Interdisciplinary education creates the shared language and understanding poised to make graduates coming from collaborations in the NYUQI extremely valuable in the current landscape. Quantum challenges are not just technical; they are managerial and philosophical as well. 
An engineer working with the NYUQI will understand the requirements of the nanofabrication cleanroom and the foundations of superconducting qubits for quantum computing, just as a physicist will understand the application needs of an industry partner like a large financial institution. In a field where the entire team must be able to communicate seamlessly, these are professionals truly equipped to rapidly translate discovery into deployable technology. Creating a talent pipeline at scale will provide a missing link that converts New York’s vast commercial energy into genuine quantum advantage.</p><h2>NYUQI: Building Talent, Technology, and Structure</h2><p><span>The vision for the NYUQI </span><span>is an act of strategic geography that plays directly into the sheer volume of opportunity and demand right outside its new facility. </span><span>By building the talent, the technology, and the structure necessary to capitalize on this dense environment, NYU is not just participating in the quantum race; it is actively steering it.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Conference room with attendees seated at round tables, facing a presenter on stage." class="rm-shortcode" data-rm-shortcode-id="f5e2ae16e0c5ebc4f0828d52ed639115" data-rm-shortcode-name="rebelmouse-image" id="02b7e" loading="lazy" src="https://spectrum.ieee.org/media-library/conference-room-with-attendees-seated-at-round-tables-facing-a-presenter-on-stage.jpg?id=65322370&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Attendees of NYU’s 2025 Quantum Summit.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Tracey Friedman/NYU</small></p><p>The initial hypothesis for the NYUQI was simple: the ultimate advantage lies in pursuing the science in the right place at the right time. 
Now, the institute will ensure that the next wave of scientific discovery, capable of solving previously intractable problems in finance, medicine, and security, will be conceived, built, and tested in the heart of New York City.</p>]]></description><pubDate>Fri, 27 Mar 2026 10:02:05 +0000</pubDate><guid>https://spectrum.ieee.org/nyu-quantum-institute</guid><category>Nyu-tandon</category><category>Quantum-computing</category><category>Quantum-internet</category><category>Semiconductors</category><category>Quantum-communications</category><dc:creator>Wiley</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/person-in-white-suit-working-with-semiconductor-equipment-in-a-lab.jpg?id=65322091&amp;width=980"></media:content></item><item><title>IEEE 802.11bn Delivers Ultra-High Reliability for Wi-Fi 8</title><link>https://content.knowledgehub.wiley.com/setting-new-performance-standards-with-ieee-802-11bn-an-in-depth-overview-of-wi-fi-8/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/logo-of-rohde-schwarz-with-slogan-make-ideas-real-and-stylized-rs-in-a-diamond-shape.png?id=65355284&width=980"/><br/><br/><p><span>A technical exploration of IEEE 802.11bn’s physical and MAC layer enhancements — including distributed resource units, enhanced long range, multi-AP coordination, and seamless roaming — that define Wi-Fi 8.</span></p><p><strong><span>What Attendees will Learn</span></strong></p><ol><li><span>Why Wi-Fi 8 prioritizes reliability over raw throughput — Understand how IEEE 802.11bn shifts the design philosophy from peak data-rate gains to ultra-high reliability.</span></li><li>How new physical layer features overcome uplink power limitations — Learn how distributed resource units spread tones across wider distribution bandwidths to boost per-tone transmit power, and how enhanced long range protocol data units use power-boosted preamble fields and frequency-domain duplication to extend uplink coverage.</li><li>How advanced MAC coordination reduces interference and latency — Examine multi-access point coordination schemes — coordinated beamforming, spatial reuse, time division multiple access, and restricted target wake time — alongside non-primary channel access and priority enhanced distributed channel access.</li><li>What seamless roaming and power management mean for next-generation deployments — Discover how seamless mobility domains eliminate reassociation delays during access point transitions, and how dynamic power save and multi-link power management let devices trade capability for battery life without sacrificing connectivity.</li></ol><p><a href="https://content.knowledgehub.wiley.com/setting-new-performance-standards-with-ieee-802-11bn-an-in-depth-overview-of-wi-fi-8/" target="_blank">Download this free whitepaper now!</a></p>]]></description><pubDate>Wed, 25 Mar 2026 14:22:07 
+0000</pubDate><guid>https://content.knowledgehub.wiley.com/setting-new-performance-standards-with-ieee-802-11bn-an-in-depth-overview-of-wi-fi-8/</guid><category>Wifi</category><category>Internet</category><category>Standards</category><category>Transmission</category><category>Type-whitepaper</category><dc:creator>Rohde &amp; Schwarz</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/65355284/origin.png"></media:content></item><item><title>Data Centers Are Transitioning From AC to DC</title><link>https://spectrum.ieee.org/data-center-dc</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/nvidia-s-high-compute-density-racks.jpg?id=65397940&width=1245&height=700&coordinates=0%2C469%2C0%2C469"/><br/><br/><p>Last week’s <a href="https://www.nvidia.com/gtc/" target="_blank">Nvidia GTC</a> conference highlighted new <a href="https://spectrum.ieee.org/nvidia-groq-3" target="_blank">chip</a> architectures to power AI. But as the chips become faster and more powerful, the remainder of data center <a data-linked-post="2674166715" href="https://spectrum.ieee.org/data-center-liquid-cooling" target="_blank">infrastructure</a> is playing catch-up. The power-delivery community is responding: Announcements from <a href="https://www.prnewswire.com/news-releases/delta-exhibits-energy-saving-solutions-for-800-vdc-in-next-gen-ai-factories-and-digital-twin-applications-built-on-omniverse-at-nvidia-gtc-2026-302715850.html" rel="noopener noreferrer" target="_blank">Delta</a>, <a href="https://www.eaton.com/us/en-us/company/news-insights/news-releases/2026/eaton-collaborates-with-nvidia-to-unveil-its-beam-rubin-dsx-platform.html" rel="noopener noreferrer" target="_blank">Eaton</a>, <a href="https://www.se.com/us/en/about-us/newsroom/news/press-releases/Schneider-Electric-teams-with-NVIDIA-to-develop-validated-blueprints-to-design-simulate-build-operate-and-maintain-gigawattscale-AI-Factories-69b82f61aa1027e04205d273/" target="_blank">Schneider Electric</a>, and <a href="https://www.vertiv.com/en-us/about/news-and-insights/corporate-news/2026/vertiv-brings-converged-physical-infrastructure-to-nvidia-vera-rubin-dsx-ai-factories/" rel="noopener noreferrer" target="_blank">Vertiv</a> showcased new designs for the AI era. 
Complex and inefficient AC-to-DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.</p><p>“While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,” says <a href="https://www.linkedin.com/in/solarchris/" target="_blank">Chris Thompson</a>, vice president of advanced technology and global microgrids at Vertiv.</p><h2>AC-to-DC Conversion Challenges</h2><p>Today, nearly all data centers are designed around AC utility power. The electrical path includes multiple conversions before power reaches the compute load. Power typically enters the data center as medium-voltage AC (1 to 35 kilovolts), is stepped down to low-voltage AC (480 or 415 volts) using a transformer, converted to DC inside an uninterruptible power supply (UPS) for battery storage, converted back to AC, and converted again to low-voltage DC (typically 54 V DC) at the server, supplying the DC power computing chips actually require.</p><p>“The double conversion process ensures the output AC is clean, stable, and suitable for data center servers,” says <a href="https://www.linkedin.com/in/luiz-fernando-huet-de-bacellar-b2112117/" target="_blank">Luiz Fernando Huet de Bacellar,</a> vice president of engineering and technology at Eaton.</p><p>That setup worked well enough for the amounts of power required by traditional data centers. Traditional data center computational racks draw on the order of 10 kW each. For AI, that is starting to approach 1 megawatt.  At that scale, the energy losses, current levels, and copper requirements of AC-to-DC conversions become increasingly difficult to justify. Every conversion incurs some power loss. 
On top of that, as the amount of power that needs to be delivered grows, the sheer size of the converters, as well as the conductor requirements of copper busbars, becomes untenable.<span> According to an Nvidia <a href="https://developer.nvidia.com/blog/nvidia-800-v-hvdc-architecture-will-power-the-next-generation-of-ai-factories/" target="_blank">blog</a>, a 1-MW rack</span><span> could require as much as 200 kilograms of copper busbar. For a 1-gigawatt data center, it could amount to 200,000 kg of copper. </span></p><h2>Benefits of High-Voltage DC Power</h2><p>Converting 13.8-kV AC grid power directly to 800 V DC at the data center perimeter eliminates most intermediate conversion steps. This reduces the number of fans and power-supply units, and leads to higher system reliability, lower heat dissipation, improved energy efficiency, and a smaller equipment footprint.</p><p>“Each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss,” says Bacellar.</p><p>Switching from 415-V AC to 800-V DC in electrical distribution enables 85 percent more power to be transmitted through the same conductor size. This happens because higher voltage reduces current demand, lowering resistive losses and making power transfer more efficient. Thinner conductors can handle the same load, cutting copper requirements by 45 percent; overall, the switch delivers a 5 percent improvement in efficiency and a 30 percent lower total cost of ownership for gigawatt-scale facilities.</p><p>“In a high-voltage DC architecture, power from the grid is converted from medium-voltage AC to roughly 800-V DC and then distributed throughout the facility on a DC bus,” said Vertiv’s Thompson. 
“At the rack, compact DC-to-DC converters step that voltage down for GPUs and CPUs.”</p><p>A <a href="https://www.datacenter-asia.com/wp-content/uploads/2025/08/Omdia-Analysts-Summit-Omdia%E5%88%86%E6%9E%90%E5%B8%88%E5%B3%B0%E4%BC%9A.pdf" target="_blank">report</a> from technology advisory group <a href="https://omdia.tech.informa.com/" target="_blank">Omdia</a> claims that higher-voltage DC data centers have already appeared in China. In the Americas, the <a href="https://www.linkedin.com/posts/sharada-yeluri_microsoft-meta-google-activity-7367974455052017666-nXV5/" target="_blank">Mt. Diablo Initiative</a> (a collaboration among <a href="https://www.meta.com/about/" target="_blank">Meta</a>, <a href="https://www.microsoft.com/en-us" target="_blank">Microsoft</a>, and the <a href="https://www.opencompute.org/" target="_blank">Open Compute Project</a>) is a 400-V DC rack power distribution experiment.</p><h2>Innovations in DC Power Systems</h2><p>A handful of vendors are trying to get ahead of the game. Vertiv’s 800-V DC ecosystem, which integrates with <a href="https://www.vertiv.com/en-us/about/news-and-insights/corporate-news/vertiv-develops-energy-efficient-cooling-and-power-reference-architecture-for-the-nvidia-gb300-nvl72/" target="_blank">Nvidia Vera Rubin Ultra Kyber platforms</a>, will be commercially available in the second half of 2026. Eaton, too, is well advanced in its 800-V DC systems innovation courtesy of a medium-voltage solid-state transformer (SST) that will sit at the heart of its DC power distribution system. Meanwhile, Delta has released 800-V DC in-row 660-kW power racks with a total of 480 kW of embedded battery backup units. And <a href="https://www.solaredge.com/us/" target="_blank">SolarEdge</a> is hard at work on a 99 percent efficient SST that will be paired with a native DC UPS and a DC power distribution layer.</p><p>But much of the industry is far behind. 
<a href="https://www.linkedin.com/in/pehughes/" target="_blank">Patrick Hughes</a>, senior vice president of strategy, technical, and industry affairs for the <a href="https://www.makeitelectric.org/" target="_blank">National Electrical Manufacturers Association</a>, says most innovation is happening at the 400-V DC level, though some are preparing 800-V DC. He believes the industry needs a complete, coordinated ecosystem, including power electronics, protection, connectors, sensing, and service‑safe components that scale together rather than in isolation. That, in turn, requires retooling manufacturing capacity for DC‑specific equipment, expanding semiconductor and materials supply, and clear, long‑term demand commitments that justify major capital investment across the value chain.</p><p>“Many are taking a cautious approach, offering limited or adapted solutions while waiting for clearer standards, safety frameworks, and customer commitments,” said Hughes. “Building the supply chain will hinge on stabilizing standards and safety frameworks so suppliers can design, certify, manufacture, and install equipment with confidence.”</p>]]></description><pubDate>Tue, 24 Mar 2026 16:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/data-center-dc</guid><category>Data-centers</category><category>Power-electronics</category><category>Ai</category><dc:creator>Drew Robb</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/nvidia-s-high-compute-density-racks.jpg?id=65397940&amp;width=980"></media:content></item><item><title>What Will It Take to Build the World’s Largest Data Center?</title><link>https://spectrum.ieee.org/5gw-data-center</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/construction-symbols-on-yellow-background.png?id=65356154&width=1245&height=700&coordinates=0%2C973%2C0%2C974"/><br/><br/><p><strong>The undying thirst for </strong>smarter (historically, that means larger) AI models and greater adoption of the ones we already have has led to an explosion in <a href="https://epoch.ai/data/data-centers#data-insights" rel="noopener noreferrer" target="_blank">data-center construction projects</a>, unparalleled both in number and scale. Chief among them is Meta’s planned 5-gigawatt data center in Louisiana, called Hyperion, announced in June of 2025. Meta CEO Mark Zuckerberg said Hyperion will “cover a significant part of the footprint of Manhattan,” and the first phase—a 2-GW version—will be completed by 2030.</p><p>Though the project’s stated 5-GW scale is the largest among its peers, it’s just one of several dozen similar projects now underway. According to Michael Guckes, chief economist at construction-software company <a href="https://www.constructconnect.com/preconstruction-software?campaign=21011210878&group=161161401080&target=kwd-337013613104&matchtype=e&creative=760058507701&device=c&se_kw=constructconnect&utm_medium=ppc&utm_campaign=CC+Brand+2&utm_term=constructconnect&utm_source=adwords&hsa_ad=760058507701&hsa_kw=constructconnect&hsa_net=adwords&hsa_tgt=kwd-337013613104&hsa_grp=161161401080&hsa_src=g&hsa_ver=3&hsa_cam=21011210878&hsa_mt=e&hsa_acc=3324869874&gad_source=1&gad_campaignid=21011210878&gbraid=0AAAAADccs_biRlt8tR8-qu3h7Kja1Tzte&gclid=CjwKCAiA3-3KBhBiEiwA2x7FdCQc4sQOa0YZVFnCW9RF1tGkH2hDiowNrjM587XsXAv6Fb7Sdr1hgBoCNjEQAvD_BwE" rel="noopener noreferrer" target="_blank">ConstructConnect</a>, spending on data centers topped US $27 billion by July of 2025 and, once the full-year figures are tallied, will easily exceed $60 billion. 
Hyperion alone accounts for about a quarter of that.</p><p>For the engineers assigned to bring these projects to life, the mix of challenges involved represents a unique moment. The world’s largest tech companies are opening their wallets to pay for new innovations in compute, cooling, and <a data-linked-post="2674861846" href="https://spectrum.ieee.org/nvidia-rubin-networking" target="_blank">network</a> technology designed to operate at a scale that would’ve seemed absurd five years ago.</p><p>At the same time, the breakneck pace of building comes paired with serious problems. Modern data-center construction frequently requires an influx of temporary workers and sharply increases noise, traffic, pollution, and often local electricity prices. And the environmental toll remains a concern long after facilities are built due to the unprecedented 24/7 energy demands of AI data centers, which, according to one recent study, <a href="https://www.nature.com/articles/s41893-025-01681-y" rel="noopener noreferrer" target="_blank">could emit the equivalent of tens of millions of tonnes of CO<span><sub>2</sub></span> annually</a> in the United States alone.</p><p>Despite these issues, large AI companies, and the engineers they hire, are going full steam ahead on giant data-center construction. So, what does it really take to build an unprecedentedly large data center?</p><h2>AI Rewrites Building Design</h2><p>The stereotypical data-center building rests on a reinforced concrete slab foundation. That’s paired with a steel skeleton and poured concrete wall panels. The finished building is called a “shell,” a term that implies the structure itself is a secondary concern. Meta has <a href="https://www.datacenterdynamics.com/en/news/meta-brings-data-centers-in-tents-to-gallatin-tennessee/" target="_blank">even used gigantic tents</a> to throw up temporary data centers.</p><p>Still, the scale of the largest AI data centers brings unique challenges. 
“The biggest challenge is often what’s under the surface. Unstable, corrosive, or expansive soils can lead to delays and require serious intervention,” says <a href="https://www.jacobs.com/our-people/meet-bob-haley" target="_blank">Robert Haley</a>, vice president at construction consulting firm <a href="https://www.jacobs.com/" target="_blank">Jacobs</a>.<a href="https://www.stantec.com/en/people/c/carter-amanda" target="_blank"> Amanda Carter</a>, a senior technical lead at <a href="https://www.stantec.com/en" target="_blank">Stantec</a>, says a soil’s thermal conductivity is also important, as most electrical infrastructure is placed underground. “If the soil has high thermal resistivity, it’s going to be difficult to dissipate [heat].” Engineers may take hundreds or thousands of soil samples before construction can begin.</p><h3>GPUs</h3><br/><img alt="Yellow microchip icon on a black background." class="rm-shortcode" data-rm-shortcode-id="9612db5baec52cce6fe11d703e52c7bc" data-rm-shortcode-name="rebelmouse-image" id="af54d" loading="lazy" src="https://spectrum.ieee.org/media-library/yellow-microchip-icon-on-a-black-background.png?id=65347639&width=980"/><p>Modern AI data centers often use <em>rack-scale</em> systems, such as the Nvidia GB200 NVL72, which occupy a single data-center rack. Each rack contains 72 GPUs, 36 CPUs, and up to 13.4 terabytes of GPU memory. The racks measure over 2.2 meters tall and weigh over one and a half tonnes, forcing AI data centers to use thicker concrete with more reinforcement to bear the load.</p><p>A single GB200 rack can use up to 120 kilowatts. If Hyperion meets its 5-gigawatt goals, the data-center campus could include over 41,000 rack-scale systems, for a total of more than 3 million GPUs. 
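The back-of-the-envelope arithmetic behind those figures is straightforward, and it marks an upper bound, since it assumes every watt of campus power reaches the racks:

```python
# Upper-bound rack and GPU count for a 5-GW campus, assuming every watt
# feeds 120-kW GB200 NVL72 racks. In practice, cooling and power
# conversion claim a sizable share, so the real count would be lower.

CAMPUS_POWER_W = 5e9     # Hyperion's stated 5-GW target
RACK_POWER_W = 120e3     # up to 120 kW per rack
GPUS_PER_RACK = 72

racks = CAMPUS_POWER_W / RACK_POWER_W
gpus = racks * GPUS_PER_RACK
print(f"{racks:,.0f} racks, {gpus / 1e6:.1f} million GPUs")  # 41,667 racks, 3.0 million GPUs
```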
The final number of GPUs used by Hyperion is likely to be less than that, though only because future GPUs will be larger, more capable, and use more power.</p><h3>Money</h3><br/><img alt="Black hand and dollar symbol combined on an orange background." class="rm-shortcode" data-rm-shortcode-id="2ef34f3679a3b3135244243e46ae5630" data-rm-shortcode-name="rebelmouse-image" id="248eb" loading="lazy" src="https://spectrum.ieee.org/media-library/black-hand-and-dollar-symbol-combined-on-an-orange-background.png?id=65347751&width=980"/><p>According to ConstructConnect, spending on data centers neared US $27 billion through July of 2025 and, according to the latest data, will tally close to $60 billion through the end of the year. Meta’s Hyperion project is a big slice of the pie, at $10 billion.</p><p>Data-center spending has become an important prop for the construction industry, which is seeing reduced demand in other areas, such as residential construction and public infrastructure. ConstructConnect’s third quarter 2025 financial report stated that the quarter’s decline “would have been far more severe without an $11 billion surge in data center starts.”</p><h3></h3><br/><p>There’s apparently no shortage of eligible sites, however, as both the number of data centers under construction, and the money spent on them, has skyrocketed. The spending has allowed companies building data centers to throw out the rule book. Prior to the AI boom, most data centers relied on tried-and-true designs that prioritized inexpensive and efficient construction. Big tech’s willingness to spend has shifted the focus to speed and scale.</p><p>The loose purse strings open the door to larger and more robust prefabricated concrete wall and floor panels. 
<a href="https://www.linkedin.com/in/dougbevier/" target="_blank">Doug Bevier</a>, director of development at <a href="https://www.clarkpacific.com/" rel="noopener noreferrer" target="_blank">Clark Pacific</a>, says some concrete floor panels may now span up to 23 meters and need to handle floor loads up to 3,000 kilograms per square meter, <a href="https://codes.iccsafe.org/s/IBC2018/chapter-16-structural-design/IBC2018-Ch16-Sec1607.1" rel="noopener noreferrer" target="_blank">which is more than twice the load international building codes normally define for manufacturing and industry</a>. In some cases, the concrete panels must be custom-made for a project, an expensive step that the economics of pre-AI data centers rarely justified.</p><p>Simultaneously, the time scale for projects is also compressed: <a href="https://www.linkedin.com/in/jamiemcgrath365/" rel="noopener noreferrer" target="_blank">Jamie McGrath</a>, senior vice president of data-center operations at<a href="https://www.crusoe.ai/" rel="noopener noreferrer" target="_blank"> Crusoe</a>, says the company is delivering projects in “about 12 months,” compared to 30 to 36 months before. Not all projects are proceeding at that pace, but speed is universally a priority.</p><p>That makes it difficult to coordinate the labor and materials required. Meta’s Hyperion site, located in rural Richland Parish, Louisiana, is emblematic of this challenge. <a href="https://www.nola.com/news/business/meta-louisiana-ai-data-center/article_77f553ff-c272-4e6c-a775-60bbbee0b065.html" rel="noopener noreferrer" target="_blank">As reported by NOLA.com</a>, at least 5,000 temporary workers have flocked to the area, which has only about 20,000 permanent residents. 
These <a href="https://www.wsj.com/business/data-centers-are-a-gold-rush-for-construction-workers-6e3c5ce0?st=jr1y94" rel="noopener noreferrer" target="_blank">workers earn above-average wages</a> and bring a short-term boost for some local businesses, such as restaurants and convenience stores. However, they have also spurred complaints from residents about traffic and construction noise and pollution.</p><p>This friction with residents includes not only these obvious impacts, but <a href="https://youtu.be/DGjj7wDYaiI?si=aZocXHJe0IYUkJcl&t=175" rel="noopener noreferrer" target="_blank">also things you might not immediately suspect</a>, such as light pollution caused by around-the-clock schedules. Also significant are changes to local water tables and runoff, which can reduce water quality for neighbors who rely on well water. These issues have motivated a few U.S. cities <a href="https://www.atlantanewsfirst.com/2025/06/04/atlanta-tightens-restrictions-data-centers-bans-them-some-neighborhoods/" rel="noopener noreferrer" target="_blank">to enact data-center bans</a>.</p><h2>Data Centers Often Go BYOP (bring your own power)</h2><p>Meta’s Richland Parish site also highlights a problem that’s priority No. 1 for both AI data centers and their critics: power.</p><p>Data centers have always drawn large amounts of power, which nudged data-center construction to cluster in hubs where local utilities were responsive to their demands. Virginia’s electric utility, Dominion Energy, met demand with agreements to build new infrastructure, <a href="https://rmi.org/amazon-dominion-virginia-power-reach-breakthrough-renewable-energy-agreement/" rel="noopener noreferrer" target="_blank">often with a focus on renewable energy</a>.</p><p>The power demands of the largest AI data centers, though, have caught even the most responsive utilities off guard. A report from the Lawrence Berkeley National Laboratory, in California, estimated the entire U.S. 
data-center industry <a href="https://eta-publications.lbl.gov/sites/default/files/lbnl-1005775_v2.pdf" rel="noopener noreferrer" target="_blank">consumed an average load of roughly 8 GW in 2014</a>. Today, the largest AI data-center campuses are built to handle up to a gigawatt each, and Meta’s Hyperion is projected to require 5 GW.</p><p>“Data centers are exacerbating issues for a lot of utilities,” says <a href="https://www.cleanegroup.org/staff/abbe-ramanan/" rel="noopener noreferrer" target="_blank">Abbe Ramanan</a>, project director at the Clean Energy Group, a Vermont-based nonprofit.</p><p>Ramanan explains that utilities often use “peaker plants” to cope with extra demand. They’re usually older, less efficient fossil-fuel plants that, because of their high operating costs and carbon output, were due for retirement. But Ramanan says increased electricity demand <a href="https://www.eia.gov/todayinenergy/detail.php?id=61425" rel="noopener noreferrer" target="_blank">has kept them in service</a>.</p><p>Meta secured power for Hyperion by negotiating with Entergy, Louisiana’s electric utility, for construction of three new gas-turbine power plants. Two will be located near the Richland Parish site, while a third will be located in southeast Louisiana.</p><p>Entergy frames the new plants as a win for the state. “A core pillar of Entergy and Meta’s agreement is that Meta pays for the full cost of the utility infrastructure,” says <a href="https://www.linkedin.com/in/daniel-kline-068356ba/" rel="noopener noreferrer" target="_blank">Daniel Kline</a>, director of power-delivery planning and policy at Entergy. 
The utility expects that “customer bills will be lower than they otherwise would have been.” That would prove an exception, as <a href="https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/?embedded-checkout=true" rel="noopener noreferrer" target="_blank">a recent report from Bloomberg found</a> electricity rates in regions with data centers are more likely to increase than in regions without.</p><h3>CO<sub>2</sub></h3><br/><img alt="Diagram of CO2 molecule with black carbon and red oxygen atoms connected by lines." class="rm-shortcode" data-rm-shortcode-id="c9cf38ac7004d413b7fe5b8b577a3d3d" data-rm-shortcode-name="rebelmouse-image" id="3b1b0" loading="lazy" src="https://spectrum.ieee.org/media-library/diagram-of-co2-molecule-with-black-carbon-and-red-oxygen-atoms-connected-by-lines.png?id=65348689&width=980"/><p>Research <a href="https://www.nature.com/articles/s41893-025-01681-y" target="_blank">published in Nature</a> in 2025 projects that data-center emissions will range from 24 million to 44 million CO<sub>2</sub>-equivalent metric tonnes annually through 2030 in the United States alone. While some materials used in data centers, such as concrete, lead to significant emissions, the majority of these emissions will result from the high energy demands of AI servers.</p><p>Estimating the carbon emissions of Hyperion is difficult, as the project won’t be completed until 2030. If the three new natural gas plants planned as part of the project produce emissions typical for their type, however, they could lead to full life-cycle emissions of between 4 million and 10 million metric tonnes of CO<sub>2</sub> annually—roughly equivalent to the annual emissions of a country like <a href="https://www.worldometers.info/co2-emissions/co2-emissions-by-country/" target="_blank">Latvia</a>.</p><h3>Concrete</h3><br/><img alt="Silhouette of a cement truck on an orange background." 
class="rm-shortcode" data-rm-shortcode-id="060b1cd238b9de45274d6766069f3a14" data-rm-shortcode-name="rebelmouse-image" id="e6d68" loading="lazy" src="https://spectrum.ieee.org/media-library/silhouette-of-a-cement-truck-on-an-orange-background.png?id=65348696&width=980"/><p>Data centers are typically built from concrete, with steel used as a skeleton to reinforce and shape the concrete shell. While the foundation is often poured concrete, the walls and floors are most often built from prefabricated concrete panels that can span up to 23 meters. Floors use a reinforced T-shape, similar to a steel girder, measuring up to 1.2 meters across at its thickest point. The largest data centers include hundreds of these concrete panels.</p><p>The American Cement Association projects that the current surge in building<a href="https://mi.cement.org/PDF/Data_Center_Cement_Consumption.pdf" rel="noopener noreferrer" target="_blank"> will require 1 million tonnes of cement over the next three years</a>, though that’s still a tiny fraction of the overall cement industry,<a href="https://d9-wret.s3.us-west-2.amazonaws.com/assets/palladium/production/s3fs-public/media/files/mis-202507-cemen.pdf" rel="noopener noreferrer" target="_blank"> which weighed in at roughly 103 million tonnes in 2024</a>.</p><h3></h3><br/><p>The plants, which will generate a combined 2.26 GW, will use combined-cycle gas turbines that recapture waste heat from exhaust.<a href="https://www.ge.com/news/press-releases/ha-technology-now-available-industry-first-64-percent-efficiency" target="_blank"> This boosts thermal efficiency to 60 percent and beyond,</a> meaning more fuel is converted to useful energy. 
Simple-cycle turbines, by contrast, vent the exhaust, which lowers efficiency to around 40 percent.</p><p>Even so, total life-cycle emissions for the Hyperion plants could range from 4 million to over 10 million tonnes of CO2 each year, depending on how frequently the plants are put in use and the final efficiency benchmarks once built. On the high end, that’s as much CO2 as produced by over 2 million passenger cars. Fortunately, not all of Meta’s data centers take the same approach to power. The company has announced a plan to power Prometheus, a large data-center project in Ohio scheduled to come online before the end of 2026, <a href="https://about.fb.com/news/2026/01/meta-nuclear-energy-projects-power-american-ai-leadership/" target="_blank">with nuclear energy</a>.</p><p>But other big tech companies, spurred by the need to build data centers quickly, are taking a less efficient approach.</p><p>xAI’s Colossus 2, located in Memphis, is the most extreme example. <a href="https://www.climateandcapitalmedia.com/35-gas-turbines-no-permits-elon-musks-dirty-xai-secret/" rel="noopener noreferrer" target="_blank">The company trucked dozens of temporary gas-turbine generators to power the site</a> located in a suburban neighborhood. OpenAI, meanwhile, has gas turbines capable of generating up to 300 megawatts <a href="https://www.timesrecordnews.com/story/news/2025/10/14/water-electricity-concerns-addressed-by-stargate-data-center-leaders-in-abilene-texas/86585222007/" rel="noopener noreferrer" target="_blank">at its new Stargate data center in Abilene, Texas</a>, slated to open later in 2026. 
Both use simple-cycle turbines with a much lower efficiency rating than the combined-cycle plants Entergy will build to power Hyperion.</p><p>Demand for gas turbines is so intense, in fact, that <a href="https://www.spglobal.com/commodity-insights/en/news-research/latest-news/electric-power/052025-us-gas-fired-turbine-wait-times-as-much-as-seven-years-costs-up-sharply" rel="noopener noreferrer" target="_blank">wait times for new turbines are up to seven years</a>. Some data centers <a href="https://spectrum.ieee.org/ai-data-centers" target="_self">are turning toward refurbished jet engines</a> to obtain the turbines they need.</p><h2>AI Racks Tip the Scales</h2><p>The demand for new, reliable power is driven by the power-hungry GPUs inside modern AI data centers.</p><p>In January of 2025, Mark Zuckerberg announced in a post on Facebook that Meta planned to end 2025 <a href="https://techcrunch.com/2025/01/24/mark-zuckerberg-says-meta-will-have-1-3m-gpus-for-ai-by-year-end/" rel="noopener noreferrer" target="_blank">with at least 1.3 million GPUs in service</a>. OpenAI’s Stargate data center <a href="https://www.datacenterdynamics.com/en/news/openai-and-oracle-to-deploy-450000-gb200-gpus-at-stargate-abilene-data-center/" rel="noopener noreferrer" target="_blank">plans to use over 450,000 Nvidia GB200 GPUs</a>, and xAI’s Colossus 2, an expansion of Colossus, <a href="https://www.nextbigfuture.com/2025/09/xai-colossus-2-first-gigawatt-ai-training-data-center.html" rel="noopener noreferrer" target="_blank">is built to accommodate over 550,000 GPUs</a>.</p><p>GPUs, which remain by far the most popular for AI workloads, are bundled into human-scale monoliths of steel and silicon which, much like the data centers built to house them, are rapidly growing in weight, complexity, and power consumption.</p><h3>Memory</h3><br/><img alt="Outlined head with a microchip brain on blue background, symbolizing AI and technology." 
class="rm-shortcode" data-rm-shortcode-id="7cd8d3faff2d24fa591295b9efd9b1ba" data-rm-shortcode-name="rebelmouse-image" id="70372" loading="lazy" src="https://spectrum.ieee.org/media-library/outlined-head-with-a-microchip-brain-on-blue-background-symbolizing-ai-and-technology.png?id=65350865&width=980"/><p>In addition to raw compute performance, Nvidia GB200 NVL72 racks also require huge amounts of memory. An Nvidia GB200 NVL72 rack may include up to 13.4 terabytes of high-bandwidth memory, which implies a data-center campus at Hyperion’s scale will require at least several dozen petabytes.</p><p>The immense demand has sent memory prices soaring:<a href="https://wccftech.com/dram-prices-have-risen-by-a-whopping-172-this-year-alone/" rel="noopener noreferrer" target="_blank"> The price of DRAM, specifically DDR5, has increased 172 percent in 2025</a>.</p><h3>Power</h3><br/><img alt="" class="rm-shortcode" data-rm-shortcode-id="eaf0380400ba03875bf2ee910f35ab5d" data-rm-shortcode-name="rebelmouse-image" id="5bd7d" loading="lazy" src="https://spectrum.ieee.org/media-library/image.png?id=65350873&width=980"/><p>Hyperion is expected to use 5 gigawatts of power across 11 buildings, which works out to just under 500 megawatts per building, assuming each will be similar to its siblings. That’s enough to power roughly 4.2 million U.S. homes.</p><p>Just one Hyperion data center built at the Richland Parish site will consume twice as much power as xAI’s Colossus which, at the time of its completion in the summer of 2024, was among the largest data centers yet built.</p><h3></h3><br/><p>Nvidia’s <a href="https://www.nvidia.com/en-us/data-center/gb200-nvl72/" target="_blank">GB200 NVL72</a>—a rack-scale system—is currently a leading choice for AI data centers. A single GB200 rack contains 72 GPUs, 36 CPUs, and up to 17 terabytes of memory. 
It measures 2.2 meters tall, <a href="https://aivres.com/wp-content/uploads/KRS8000v3.1.pdf" target="_blank">tips the scales at up to </a>1,553 kilograms, and consumes about 120 kilowatts—as much as around 100 U.S. homes. And this, according to Nvidia, is just the beginning. The company anticipates future racks could <a href="https://www.tomshardware.com/tech-industry/nvidia-to-boost-ai-server-racks-to-megawatt-scale-increasing-power-delivery-by-five-times-or-more" target="_blank">consume up to a megawatt each</a>.</p><p><a href="https://www.linkedin.com/in/viktorpetik/?originalSubdomain=hr" target="_blank">Viktor Petik</a>, senior vice president of infrastructure solutions at<a href="https://www.vertiv.com/en-us/" rel="noopener noreferrer" target="_blank"> Vertiv</a>, says the rapid change in rack-scale AI systems has forced data centers to adapt. “AI racks consume far more power and weigh more than their predecessors,” says Petik. He adds that data centers must supply racks with multiple power feeds, without taking up extra space.</p><p>The new power demands from rack-scale systems have consequences that are reflected in the design of the data center—even its footprint.</p><p>In 2022 Meta broke ground on a new data center at a campus in Temple, Texas. According to <a href="https://semianalysis.com/" rel="noopener noreferrer" target="_blank">SemiAnalysis</a>, which studies AI data centers, construction began with the intent <a href="https://newsletter.semianalysis.com/p/datacenter-anatomy-part-1-electrical" rel="noopener noreferrer" target="_blank">to build the data center in an H-shaped configuration common to other Meta data centers</a>.</p><h3>LAND</h3><br/><img alt="Black location pin icon on orange background." 
class="rm-shortcode" data-rm-shortcode-id="a2b2e04f07bd0ed3f60e1f86029497af" data-rm-shortcode-name="rebelmouse-image" id="248cd" loading="lazy" src="https://spectrum.ieee.org/media-library/black-location-pin-icon-on-orange-background.png?id=65351137&width=980"/><h3></h3><br/><p>Meta CEO Mark Zuckerberg kicked off the buzz around Hyperion by saying it would cover a large chunk of Manhattan. Many took that to mean Hyperion would be a single building of that size, which isn’t correct. Hyperion will actually be a cluster of data centers—11 are currently planned—with over 370,000 square meters of floor space. That’s a lot smaller even than New York City’s Central Park, which covers 6 percent of Manhattan.</p><p>Meta has room to grow, however. The Richland Parish site spans 14.7 million square meters in total, which is about a quarter the area of Manhattan. And the 370,000 square meters of floor space Hyperion is expected to provide doesn’t include external infrastructure, such as the three new combined-cycle gas power plants Louisiana utility Entergy is building to power the project.</p><h3></h3><br/><img alt="Map with site layout and regional location in Louisiana, showing roads and distances." class="rm-shortcode" data-rm-shortcode-id="b0cc9253de57aefb96d39a9892c95fe5" data-rm-shortcode-name="rebelmouse-image" id="a41a4" loading="lazy" src="https://spectrum.ieee.org/media-library/map-with-site-layout-and-regional-location-in-louisiana-showing-roads-and-distances.png?id=65352088&width=980"/><h3></h3><br/><p><span>Construction was paused midway in December of 2022, however, </span><a href="https://www.datacenterdynamics.com/en/news/exclusive-after-meta-cancels-odense-data-center-expansion-other-projects-are-being-rescoped/" target="_blank">as part of a company-wide review of its data-center infrastructure</a><span>. Meta decided to knock down the structure it had built and start from scratch. 
The reasons for this decision were never made public, but analysts believe it was due to the old design’s inability to deliver sufficient electricity to new, power-hungry AI racks. Construction resumed in 2023.</span></p><p>Meta’s replacement ditches the H-shaped building for simple, long, rectangular structures, each flanked by rows of gas-turbine generators. While Meta’s plans are subject to change, Hyperion is currently expected to comprise 11 rectangular data centers, each packed with hundreds of thousands of GPUs, spread across the 13.6-square-kilometer Richland Parish campus.</p><h2>Cooling, and Connecting, at Scale</h2><p>Nvidia’s ultradense AI GPU racks are changing data centers not only with their weight and power draw but also with their intense cooling and bandwidth requirements.</p><p>Data centers traditionally use air cooling, but that approach has reached its limits. “Air as a cooling medium is inherently inferior,” says<a href="https://cde.nus.edu.sg/me/staff/lee-poh-seng/" target="_blank"> Poh Seng Lee</a>, head of <a href="https://blog.nus.edu.sg/coolestlab/" rel="noopener noreferrer" target="_blank">CoolestLAB</a>, a cooling research group at the National University of Singapore.</p><p>Instead, going forward, GPUs will rely on liquid cooling. However, that adds a new layer of complexity. “It’s all the way to the facilities level,” says Lee. “You need pumps, which we call a coolant distribution unit. The CDU will be connected to racks using an elaborate piping network. And it needs to be designed for redundancy.” On the rack, pipes connect to cold plates mounted atop every GPU; outside the data-center shell, pipes route through evaporative cooling units. Lee says retrofitting an air-cooled data center is possible but expensive.</p><p>The networking used by AI data centers is also changing to cope with new requirements. Traditional data centers were positioned near network hubs for easy access to the global internet. 
AI data centers, though, are more concerned with networks of GPUs.</p><p>These connections must sustain high bandwidth with impeccable reliability. Mark Bieberich, a vice president at network infrastructure company Ciena, says the company’s latest fiber-optic transceiver technology,<a href="https://www.ciena.com/products/wavelogic/wavelogic-6" rel="noopener noreferrer" target="_blank"> WaveLogic 6</a>, can provide up to 1.6 terabits per second of bandwidth per wavelength. A single fiber can support 48 wavelengths in total, and Ciena’s largest customers have hundreds of fiber pairs, placing total bandwidth in the thousands of terabits per second.</p><h3></h3><br/><img alt="a piece of land with a big platform in the middle." class="rm-shortcode" data-rm-shortcode-id="fb6adbcb1ff833934363d6f6ce9cf993" data-rm-shortcode-name="rebelmouse-image" id="63272" loading="lazy" src="https://spectrum.ieee.org/media-library/a-piece-of-land-with-a-big-platform-in-the-middle.jpg?id=65343457&width=980"/><p><span>This is a point where the scale of Meta’s Hyperion, and other large AI data centers, can be deceptive. It seems to imply the physical size of a single data center is what matters. But rather than being a single building,</span><a href="https://datacenters.atmeta.com/richland-parish-data-center/" target="_blank"> Hyperion is actually a set of buildings</a><span> connected by high-speed fiber-optics.</span></p><p>“Interconnecting data centers is absolutely essential,” says Bieberich. “You could think about it as one logical AI training facility, but with geographically distributed facilities.” Nvidia has taken to calling this “scale across,” to contrast it with the idea that data centers must “scale up” to larger singular buildings.</p><h2>The Big but Hazy Future</h2><p>The full scale of the challenges that face Hyperion, and other future AI data centers of similar scale, remains hazy. Nvidia has yet to introduce the rack-scale AI GPU systems it will host. How much power will it demand? 
What type of cooling will it require? How much bandwidth must be provided? These can only be estimated.</p><p>In the absence of details, the gravity of AI data-center design is pulled toward one certainty: It must be big. New data-center designers are rewriting their rule book to handle power, cooling, and network infrastructure at a scale that would’ve seemed ridiculous five years ago.</p><p>This innovation is fueled by big tech’s fat wallet, which shelled out tens of billions of dollars in 2025 alone, leading to<a href="https://hbr.org/2025/10/is-ai-a-boom-or-a-bubble" target="_blank"> questions about whether the spending is sustainable</a>. For the engineers in the trenches of data-center design, though, it’s viewed as an opportunity to make the impossible possible.</p><p> “I tell my engineers, this is peak. We’re being engineers. We’re being asked complicated questions,” says Stantec’s Carter. “We haven’t got to do that in a long time.” <span class="ieee-end-mark"></span></p><p><em>This article appears in the April 2026 print issue.</em></p>]]></description><pubDate>Tue, 24 Mar 2026 15:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/5gw-data-center</guid><category>Ai</category><category>Power</category><category>Construction</category><category>Data-centers</category><category>Type-cover</category><dc:creator>Matthew S. Smith</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/construction-symbols-on-yellow-background.png?id=65356154&amp;width=980"></media:content></item><item><title>Transforming Data Science With NVIDIA RTX PRO 6000 Blackwell Workstation Edition</title><link>https://spectrum.ieee.org/nvidia-rtx-pro-6000-pny</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/computer-setup-with-a-monitor-displaying-forest-graphics-keyboard-mouse-and-a-sleek-cpu-design.png?id=65315285&width=1245&height=700&coordinates=0%2C91%2C0%2C92"/><br/><br/><p><em>This is a sponsored article brought to you by <a href="https://www.pny.com/" target="_blank">PNY Technologies</a>.</em></p><p>In today’s data-driven world, data scientists face mounting challenges in preparing, scaling, and processing massive datasets. Traditional CPU-based systems are no longer sufficient to meet the demands of modern AI and analytics workflows. <a href="https://www.pny.com/nvidia-rtx-pro-6000-blackwell-ws?iscommercial=true&utm_source=IEEE+Spectrum+Blog&utm_medium=RTX+PRO+6000+body&utm_campaign=Blackwell+Workstation&utm_id=RTX+PRO+6000" rel="noopener noreferrer" target="_blank">NVIDIA RTX PRO<sup>TM</sup> 6000 Blackwell Workstation Edition</a> offers a transformative solution, delivering accelerated computing performance and seamless integration into enterprise environments.</p><h2>Key Challenges for Data Science</h2><ul><li><strong>Data Preparation: </strong>Data preparation is a complex, time-consuming process that takes up most of a data scientist’s time.</li><li><strong>Scaling: </strong>The volume of data is growing at a rapid pace. Data scientists may resort to downsampling to make large datasets more manageable, leading to suboptimal results.</li><li><strong>Hardware: </strong>Demand for accelerated AI hardware for data centers and cloud service providers (CSPs) is exceeding supply. Current desktop computing resources may not be suitable for data science workflows.</li></ul><h2>Benefits of RTX PRO-Powered AI Workstations</h2><p>NVIDIA RTX PRO 6000 Blackwell Workstation Edition delivers ultimate acceleration for data science and AI workflows. These powerful and robust workstations enable real-time rendering, rapid prototyping, and seamless collaboration. 
With support for up to four <a href="https://www.pny.com/nvidia-rtx-pro-6000-blackwell-max-q?iscommercial=true&utm_source=IEEE+Spectrum+Blog&utm_medium=RTX+PRO+6000+Blackwell+Max-Q+body&utm_campaign=Blackwell+Workstation&utm_id=RTX+PRO+6000" rel="noopener noreferrer" target="_blank">NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition</a> GPUs, users can achieve data-center-level performance right at their desk, making even the most demanding tasks manageable.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="61bf7564ac8304e10487689487367c94" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/jwxxgHsU1jA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">PNY is redefining professional computing with the @NVIDIA RTX PRO 6000 Blackwell Workstation Edition, the most powerful desktop GPU ever built. Engineered for unmatched compute power, massive memory capacity, and breakthrough performance, this cutting-edge solution delivers a quantum leap forward in workflow efficiency, enabling professionals to tackle the most demanding applications with ease.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">PNY</small></p><p>NVIDIA RTX PRO 6000 Blackwell Workstation Edition empowers data scientists to handle massive datasets, perform advanced visualizations, and support multi-user environments without compromise. It’s ideal for organizations scaling up their analytics or running complex models. NVIDIA RTX PRO 6000 Blackwell Workstation Edition is optimized for AI workflows, leveraging the NVIDIA AI software stack, including CUDA-X, and NVIDIA Enterprise software. 
These platforms enable zero-code-change acceleration for Python-based workflows and support over 100 AI-powered applications, streamlining everything from data preparation to model deployment.</p><p>Finally, NVIDIA RTX PRO 6000 Blackwell Workstation Edition offers significant advantages in security and cost control. By offloading compute from the data center and reducing reliance on cloud resources, organizations can lower expenses and keep sensitive data on-premises for enhanced protection.</p><h2>Accelerate Every Step of Your Workflow</h2><p>NVIDIA RTX PRO 6000 Blackwell Workstation Edition is designed to transform the entire data science pipeline, delivering end-to-end acceleration from data preparation to model deployment. With the NVIDIA CUDA-X open-source cuDF data science library and other GPU-accelerated libraries, data scientists can process massive datasets at lightning speed, often achieving up to 50X faster performance compared to traditional CPU-based tools. This means tasks like cleaning data, managing missing values, and engineering features can be completed in seconds, not hours, allowing teams to focus on extracting insights and building better models.</p><p class="pull-quote">NVIDIA RTX PRO 6000 Blackwell Workstation Edition is designed to transform the entire data science pipeline, delivering end-to-end acceleration from data preparation to model deployment</p><p>Exploratory data analysis is elevated with advanced analytics and interactive visualizations, powered by NVIDIA CUDA-X and PyData libraries. These tools enable users to create expansive, responsive visualizations that enhance understanding and support critical decision-making. When it comes to model training, GPU-accelerated XGBoost slashes training times from weeks to minutes, enabling rapid iteration and faster time to market for AI solutions.</p><p>NVIDIA RTX PRO 6000 Blackwell Workstation Edition streamlines collaboration and scalability. 
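</p><p>To see what zero-code-change acceleration means in practice, consider the ordinary pandas snippet below. The sample data is illustrative, not from NVIDIA’s benchmarks. With RAPIDS installed, running the same script via <code>python -m cudf.pandas script.py</code> (or <code>%load_ext cudf.pandas</code> in Jupyter) executes these calls on the GPU without changing a line:</p>

```python
# Plain pandas code; no GPU-specific changes are needed.
# With RAPIDS installed, `python -m cudf.pandas this_script.py` runs the
# same operations on the GPU through cuDF's pandas accelerator mode.
import pandas as pd

# Illustrative sample data (not from NVIDIA's benchmarks).
orders = pd.DataFrame({
    "customer_id": [1, 2, 1, 3],
    "amount": [10.0, 25.0, 5.0, 40.0],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["east", "west", "east"],
})

# Join and group-by: the operation classes that benefit most from the GPU.
joined = orders.merge(customers, on="customer_id")
by_region = joined.groupby("region")["amount"].sum()
print(by_region.to_dict())  # {'east': 55.0, 'west': 25.0}
```

<p>At real-world scale (tens of gigabytes rather than four rows), it is exactly these merge and group-by calls that see the largest speedups.</p><p>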
With NVIDIA AI Workbench, teams can set up projects, develop, and collaborate seamlessly across desktops, cloud platforms, and data centers. The unified software stack ensures compatibility and robustness, while enterprise-grade hardware maximizes uptime and reliability for demanding workflows.</p><p>By integrating these advanced capabilities, NVIDIA RTX PRO 6000 Blackwell Workstation Edition empowers data scientists to overcome bottlenecks, boost productivity, and drive innovation, making it an essential foundation for modern, enterprise-ready AI development.</p><h2>Performance Benchmarks</h2><p>NVIDIA’s cuDF library offers zero-code-change acceleration for pandas, delivering up to 50X performance gains. For example, a join operation that takes nearly 5 minutes on CPU completes in just 14 seconds on GPU. Advanced group-by operations drop from almost 4 minutes to just 4 seconds.</p><h2>Enterprise-Ready Solutions from PNY</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" rel="float: left;" style="float: left;"> <img alt="Black PNY logo with stylized uppercase letters on a transparent background." class="rm-shortcode" data-rm-shortcode-id="247ffcd9e141f1fc61c5172c5440d97e" data-rm-shortcode-name="rebelmouse-image" id="170af" loading="lazy" src="https://spectrum.ieee.org/media-library/black-pny-logo-with-stylized-uppercase-letters-on-a-transparent-background.png?id=65315393&width=980"/></p><p>Available from leading OEM manufacturers, NVIDIA RTX PRO 6000 Blackwell Workstation Edition Series GPUs are specifically engineered to meet the rigorous demands of enterprise environments. 
These systems incorporate NVIDIA ConnectX networking (now available at PNY) and a comprehensive suite of deployment and support tools, ensuring seamless integration with existing IT infrastructure.</p><p>Designed for scalability, the latest generation of workstations can tackle complex AI development workflows at scale for training, development, or inferencing. Enterprise-grade hardware maximizes uptime and reliability.</p><p><strong>To learn more about NVIDIA RTX PRO™ Blackwell solutions, visit:</strong> <a href="https://www.pny.com/professional/software-solutions/blackwell-architecture?utm_source=IEEE+Spectrum+Blog&utm_medium=Blackwell+Desktop+GPUs+learn+more&utm_campaign=Blackwell+Workstation&utm_id=RTX+PRO+6000" target="_blank">NVIDIA RTX PRO Blackwell | PNY Pro | pny.com</a> or email <a href="mailto:gopny@pny.com" target="_blank">GOPNY@PNY.COM</a></p>]]></description><pubDate>Mon, 23 Mar 2026 13:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/nvidia-rtx-pro-6000-pny</guid><category>Artificial-intelligence</category><category>Computing</category><category>Data-science</category><category>Gpu-acceleration</category><category>Ai-workstations</category><category>Nvidia</category><dc:creator>PNY Technologies</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/computer-setup-with-a-monitor-displaying-forest-graphics-keyboard-mouse-and-a-sleek-cpu-design.png?id=65315285&amp;width=980"></media:content></item><item><title>Startups Bring Optical Metamaterials to AI Data Centers</title><link>https://spectrum.ieee.org/optical-metamaterials-ai-data-centers</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-hand-holding-a-microchip-between-thumb-and-forefinger.jpg?id=65322426&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p><span>Light-warping physics made “invisibility cloaks” a possibility. Now two startups hope to harness the science underlying this advance to boost the bandwidth of data centers and speed artificial intelligence.</span></p><p>Roughly 20 years ago, scientists developed the <a href="https://www.science.org/doi/10.1126/science.1125907" target="_blank">first</a> <a href="https://www.science.org/doi/10.1126/science.1133628" target="_blank"> structures</a> capable of curving light around objects to conceal them. These are composed of optical <a href="https://spectrum.ieee.org/two-photon-lithography-3d-printing" target="_self">metamaterials</a>—materials with structures smaller than the wavelengths they are designed to manipulate, letting them bend light in unexpected ways.</p><p>The problem with optical cloaks? “There’s no market for them,” says Patrick Bowen, cofounder and CEO of photonic computing startup <a href="https://www.neurophos.com/" target="_blank">Neurophos</a> in Austin, Texas. For instance, each optical cloak typically works only on a single color of light instead of on all visible colors as you might want for stealth applications.</p><p>Now companies are devising more practical uses for the science behind cloaks, such as improving the switches that connect computers in data centers for AI and other cloud services. 
Increasingly, <a href="https://newsletter.semianalysis.com/p/google-apollo-the-3-billion-game" target="_blank">data centers are looking to use optical circuit switches </a>to overcome the bandwidth limits and power consumption of conventional electronic switches and networks that require converting data between light and electrons multiple times.</p><p class="ieee-inbody-related">RELATED:  <a href="https://spectrum.ieee.org/optical-interconnects-imec-silicon-photonics" target="_blank">Semiconductor Industry Closes in on 400 Gb/s Photonics Milestone</a></p><p>However, today’s optical switching technologies have drawbacks of their own. For instance, ones that depend on silicon photonics face problems with energy efficiency, while those that rely on <a href="https://spectrum.ieee.org/self-assembly" target="_self">microelectromechanical systems (MEMS)</a> can prove unreliable, says Sam Heidari, CEO of optical metasurface startup <a href="https://lumotive.com/" rel="noopener noreferrer" target="_blank">Lumotive</a> in Redmond, Wash.</p><p>Instead, <a href="https://www.nature.com/articles/s44287-024-00136-4" rel="noopener noreferrer" target="_blank">Lumotive has developed metamaterials with adjustable properties</a>. Its new microchip, which debuted 19 March, is covered with copper structures built using standard chipmaking techniques. Between these copper features are <a href="https://spectrum.ieee.org/metasurface-displays" target="_self">liquid crystal</a> elements. The structure of these elements is electronically programmable, just like in liquid crystal displays (LCDs), to alter the optical properties of the metamaterial chip.</p><p>The microchip can precisely steer, lens, shape, and split beams of light reflected off its surface. It can perform all the same functions as multiple optical components, with no moving parts, in a programmable way in real time, according to Lumotive. 
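</p><p>The steering behavior admits a simple back-of-the-envelope model: programming a sawtooth phase ramp across a row of liquid crystal elements makes the surface behave like a blazed diffraction grating, which sends the reflected beam to the grating’s first-order angle. The sketch below uses assumed, illustrative numbers (905-nanometer light and a 0.5-micrometer element pitch), not Lumotive’s actual specifications:</p>

```python
import math

# Assumed, illustrative parameters; not Lumotive's actual design values.
WAVELENGTH = 905e-9    # laser wavelength, meters
PIXEL_PITCH = 0.5e-6   # spacing between programmable liquid crystal elements, meters

def steering_angle_deg(n_pixels: int) -> float:
    """First-order steering angle for a phase ramp repeating every n_pixels.

    The ramp acts as a grating of period n_pixels * PIXEL_PITCH, and the
    grating equation gives sin(theta) = wavelength / period.
    """
    period = n_pixels * PIXEL_PITCH
    return math.degrees(math.asin(WAVELENGTH / period))

for n in (4, 8, 16):
    print(f"{n:2d}-pixel ramp steers the beam to {steering_angle_deg(n):.1f} degrees")
```

<p>Shorter ramps (steeper phase gradients) steer to wider angles, which is why packing many tiny programmable elements onto one chip yields a wide, finely addressable steering range with no moving parts.</p><p>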
“Having no moving parts significantly improves reliability,” Heidari says.</p><p>“We had to go through a lot of R&D at the foundries to not only make our devices functional, but also commercially viable in terms of the right cost and right reliability,” Heidari says.</p><p>The company says its new chips can not only handle the industry-standard 256-by-256 port configuration but could also scale up to 10,000 by 10,000. “We think this is game-changing for data centers,” Heidari says. Lumotive plans to launch its first optical switches at the end of 2026.</p><h2>Optical Computing With Metamaterials</h2><p>Similarly, Neurophos hopes its technology may be transformative for artificial intelligence. Since AI is proving energy-hungry when run on conventional electronics, scientists are exploring <a href="https://spectrum.ieee.org/optical-neural-networks" target="_self">optical computing</a> as a low-power alternative by processing data with light instead of electrons.</p><p>However, optical processors in the works today are typically far too bulky to achieve a compute density competitive with the best modern electronic processors, Bowen says. Neurophos says it can use metamaterials to build optical modulators—the optical equivalent of a transistor—that are 1/10,000th the size of today’s designs, using standard chipmaking processes. “It’s entirely CMOS,” Bowen says. “There are no exotic materials in it.”</p><p>When a laser beam encoding data shines on a Neurophos chip, the way in which each metamaterial element is configured alters the reflected beam to encode results from complex AI tasks. “We basically fit a 1,000-by-1,000 array of optical modulators on a tiny 5-by-5-millimeter area on a chip,” Bowen says. 
“If you wanted to do that with off-the-shelf silicon photonics, your chip would be a square meter in size.”</p><p>All in all, Bowen claims the Neurophos microchip will offer 50 times greater compute density and 50 times greater energy efficiency than Nvidia’s Blackwell-generation GPU. The company says that hyperscalers—the world’s biggest cloud service providers—will evaluate two upcoming proof-of-concept chips this year. Neurophos is targeting its first systems for early 2028, with production ramping mid-2028.</p>]]></description><pubDate>Thu, 19 Mar 2026 19:19:43 +0000</pubDate><guid>https://spectrum.ieee.org/optical-metamaterials-ai-data-centers</guid><category>Artificial-intelligence</category><category>Data-center</category><category>Optical-switch</category><category>Optical-computing</category><category>Metamaterial</category><category>Metamaterials</category><dc:creator>Charles Q. Choi</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-hand-holding-a-microchip-between-thumb-and-forefinger.jpg?id=65322426&amp;width=980"></media:content></item><item><title>ENIAC, the First General-Purpose Digital Computer, Turns 80</title><link>https://spectrum.ieee.org/eniac-80-ieee-milestone</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/wide-view-of-men-and-women-working-on-the-eniac-in-the-1940s-all-four-walls-from-floor-to-ceiling-host-different-pieces-of-t.jpg?id=65315846&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p>Happy 80th anniversary, ENIAC! The <a href="https://penntoday.upenn.edu/news/penns-eniac-worlds-first-electronic-computer-turns-80" rel="noopener noreferrer" target="_blank">Electronic Numerical Integrator and Computer</a>, the first large-scale, general-purpose, programmable electronic digital computer, helped shape our world.</p><p>On 15 February 1946, ENIAC—developed in the <a href="https://facilities.upenn.edu/maps/locations/moore-school-building" rel="noopener noreferrer" target="_blank">Moore School of Electrical Engineering</a> at the <a href="https://www.upenn.edu/" rel="noopener noreferrer" target="_blank">University of Pennsylvania</a>, in Philadelphia—was publicly demonstrated for the first time. Although primitive by today’s standards, ENIAC’s purely electronic design and programmability were breakthroughs in computing at the time. ENIAC made high-speed, general-purpose computing practicable and laid the foundation for today’s machines.</p><p>On the eve of its unveiling, the <a href="https://www.war.gov/" rel="noopener noreferrer" target="_blank">U.S. Department of War</a> issued a <a href="https://americanhistory.si.edu/comphist/pr1.pdf" rel="noopener noreferrer" target="_blank">news release</a> hailing it as a new machine “expected to revolutionize the mathematics of engineering and change many of our industrial design methods.” Without a doubt, electronic computers have transformed engineering and mathematics, as well as practically every other domain, including politics and spirituality.</p><p>ENIAC’s success ushered in the modern computing industry and set the stage for today’s digital economy. 
During the past eight decades, computing has grown from a niche scientific endeavor into an engine of economic growth, the backbone of billion-dollar enterprises, and a catalyst for global innovation. Computing has led to a chain of innovations and developments such as stored programs, semiconductor electronics, integrated circuits, networking, software, the Internet, and distributed large-scale systems.</p><h2>Inside the ENIAC</h2><p>The motivation for developing ENIAC was the <a href="https://www.pbs.org/wgbh/aso/databank/entries/dt45en.html" rel="noopener noreferrer" target="_blank">need for faster computation</a> during World War II. The U.S. military wanted to produce extensive artillery firing tables for field gunners to quickly determine settings for a specific weapon, a target, and conditions. Calculating the tables by hand took “<a href="https://cacm.acm.org/blogcacm/computers-were-originally-humans/" rel="noopener noreferrer" target="_blank">human computers</a>” several days, and the available mechanical machines were far too slow to meet the demand.</p><h3>80 Years of Electronic Computer Milestones </h3><br/><h4>1946</h4><p><a href="https://www.britannica.com/technology/ENIAC" rel="noopener noreferrer" target="_blank"><strong>ENIAC operational</strong></a></p><p>Birth of electronic computing</p><h4>1951</h4><p><a href="https://www.britannica.com/technology/UNIVAC" target="_blank"><strong>UNIVAC I</strong></a></p><p><a href="https://www.britannica.com/technology/UNIVAC" target="_blank"></a>Start of commercial computing</p><h4>1958</h4><p><a href="https://www.synopsys.com/glossary/what-is-integrated-circuit.html" target="_blank"><strong>Integrated circuit</strong></a></p><p>Foundation for modern computer hardware</p><h4>1964</h4><p><a href="https://www.ibm.com/history/system-360" rel="noopener noreferrer" target="_blank"><strong>IBM System/360</strong></a></p><p>Popular mainframe computer</p><h4>1970</h4><p><a href="https://en.wikipedia.org/wiki/PDP-11" 
rel="noopener noreferrer" target="_blank"><strong>Programmed Data Processor (PDP-11)</strong></a></p><p>Popular 16-bit minicomputer</p><h4>1971</h4><p><a href="https://computer.howstuffworks.com/microprocessor.htm" rel="noopener noreferrer" target="_blank"><strong>Intel 4004</strong></a></p><p>Beginning of the microprocessor and microcomputer era</p><h4>1975</h4><p><a href="https://en.wikipedia.org/wiki/Cray-1" rel="noopener noreferrer" target="_blank"><strong>Cray-1</strong></a></p><p>First supercomputer</p><h4>1977</h4><p><a href="https://www.stromasys.com/resources/vax-computer-systems-an-in-depth-guide/" rel="noopener noreferrer" target="_blank"><strong>VAX</strong></a></p><p>Popular 32-bit minicomputer</p><h4>1981</h4><p><a href="https://en.wikipedia.org/wiki/IBM_Personal_Computer" rel="noopener noreferrer" target="_blank"><strong>IBM PC</strong></a></p><p>Personal and small-business computing</p><h4>1989</h4><p><a href="https://home.cern/science/computing/birth-web" rel="noopener noreferrer" target="_blank"><strong>World Wide Web</strong></a></p><p>Digital communication, interaction, and transaction (e-commerce)</p><h4>2002</h4><p><a href="https://en.wikipedia.org/wiki/Amazon_Web_Services" rel="noopener noreferrer" target="_blank"><strong>Amazon Web Services</strong></a></p><p>Beginning of the cloud computing revolution</p><h4>2010</h4><p><a href="https://en.wikipedia.org/wiki/IPad" rel="noopener noreferrer" target="_blank"><strong>Apple iPad</strong></a></p><p>Handheld computer/tablet</p><h4>2010</h4><p><a href="https://www.ibm.com/think/topics/industry-4-0" rel="noopener noreferrer" target="_blank"><strong>Industry 4.0</strong></a></p><p>Delivered real-time decision-making, smart manufacturing, and logistics</p><h4>2016</h4><p><a href="https://www.livescience.com/55642-reprogrammable-quantum-computer-created.html" rel="noopener noreferrer" target="_blank"><strong>First reprogrammable quantum computer demonstrated</strong></a></p><p>Ignited interest in 
quantum computing</p><h4>2023</h4><p><a href="https://en.wikipedia.org/wiki/Generative_artificial_intelligence" rel="noopener noreferrer" target="_blank"><strong>Generative AI boom</strong></a></p><p>Widespread use of GenAI by individuals, businesses, and academia</p><h4>2026</h4><p><a href="https://penntoday.upenn.edu/news/penns-eniac-worlds-first-electronic-computer-turns-80" rel="noopener noreferrer" target="_blank"><strong>ENIAC’s 80th anniversary</strong></a></p><p>80 years of computing evolution</p><h3></h3><br/><p>In 1942 <a href="https://www.britannica.com/biography/John-Mauchly" target="_blank">John Mauchly</a>, an associate professor of electrical engineering at Penn’s Moore School, suggested using vacuum tubes to speed up computer calculations. Following up on his theory, the U.S. Army <a href="https://en.wikipedia.org/wiki/Ballistic_Research_Laboratory" target="_blank">Ballistic Research Laboratory</a>, which was responsible for providing artillery settings to soldiers in the field, commissioned Mauchly and his colleagues <a href="https://ethw.org/J._Presper_Eckert" rel="noopener noreferrer" target="_blank">J. Presper Eckert</a> and <a href="https://ethw.org/Adele_Katz_Goldstine" target="_blank">Adele Katz Goldstine</a> to work on a new high-speed computer. Eckert was a lab instructor at Moore, and Goldstine became one of ENIAC’s programmers. It took them a year to design ENIAC and 18 months to build it.</p><p>ENIAC contained about 18,000 vacuum tubes, which were cooled by 80 air blowers. The machine stood 8 feet tall and 3 feet deep (2.44 meters by 0.91 meters) and stretched almost 100 feet (30.5 meters) in length. It filled a large room measuring 30 feet by 50 feet (9.14 by 15.24 meters) and weighed 30 tons (30,000 kilograms). 
It consumed as much electricity as a small town.</p><p>Programming the machine was <a href="https://www.pbs.org/wgbh/aso/databank/entries/dt45en.html" target="_blank">difficult</a>. ENIAC did not have stored programs, so to reprogram the machine, operators manually reconfigured cables, switches, and plugboards, a process that took several days.</p><p>By the 1950s, large universities had either acquired or built their own machines to rival ENIAC. The schools included <a href="https://www.cam.ac.uk/" rel="noopener noreferrer" target="_blank">Cambridge</a> (EDSAC), <a href="https://www.mit.edu/" rel="noopener noreferrer" target="_blank">MIT</a> (Whirlwind), and <a href="https://www.princeton.edu/" rel="noopener noreferrer" target="_blank">Princeton</a> (IAS). Researchers used the computers to model physical phenomena, solve mathematical problems, and perform simulations.</p><p>After almost nine years of operation, ENIAC was officially decommissioned on 2 October 1955.</p><p><a href="https://mitpress.mit.edu/9780262535175/eniac-in-action/" rel="noopener noreferrer" target="_blank"><em>ENIAC in Action: Making and Remaking the Modern Computer</em></a>, a book by <a href="https://uwm.edu/history/about/directory/haigh-thomas/" rel="noopener noreferrer" target="_blank">Thomas Haigh</a>, <a href="https://mitpress.mit.edu/author/mark-priestley-15374/" rel="noopener noreferrer" target="_blank">Mark Priestley</a>, and <a href="https://www.researchgate.net/scientific-contributions/Crispin-Rope-2045495041" rel="noopener noreferrer" target="_blank">Crispin Rope</a>, describes the design, construction, and testing processes and dives into the machine’s later use. The book also outlines the complex relationship between ENIAC and its designers, as well as its revolutionary approaches to computer architecture.</p><p>In the early 1970s, there was a controversy over who invented the electronic computer and who would be assigned the patent. 
In 1973 <a href="https://en.wikipedia.org/wiki/Earl_R._Larson" rel="noopener noreferrer" target="_blank">Judge Earl Richard Larson</a> of U.S. District Court in Minnesota ruled in the <a href="https://en.wikipedia.org/wiki/Honeywell,_Inc._v._Sperry_Rand_Corp." rel="noopener noreferrer" target="_blank">Honeywell <em>v.</em> Sperry Rand</a> case that Eckert and Mauchly did not invent the automatic electronic digital computer but instead had derived their subject matter from a <a href="https://jva.cs.iastate.edu/operation.php" rel="noopener noreferrer" target="_blank">computer</a> prototyped in 1939 by <a href="https://history-computer.com/people/john-vincent-atanasoff-complete-biography/" rel="noopener noreferrer" target="_blank">John Vincent Atanasoff</a> and Clifford Berry at Iowa State College (now <a href="https://www.iastate.edu/" rel="noopener noreferrer" target="_blank">Iowa State University</a>). The ruling granted Atanasoff legal recognition as the inventor of the first electronic digital computer.</p><h2>IEEE’s ENIAC Milestone</h2><p>In 1987 IEEE <a href="https://ethw.org/Milestones:Electronic_Numerical_Integrator_and_Computer,_1946" rel="noopener noreferrer" target="_blank">designated ENIAC</a> as an IEEE Milestone, citing it as “a major advance in the history of computing” and saying the machine “established the practicality of large-scale electronic digital computers and strongly influenced the development of the modern, stored-program, general-purpose computer.”</p><p>The commemorative Milestone plaque is displayed at the Moore School, by the entrance to the classroom where ENIAC was built.</p><h3></h3><br/><p>“The ENIAC legacy heralded the computer age, transforming not only science and industry but also education, research, and human communication and interaction.”</p><h3></h3><br/><p>A <a href="https://ieeexplore.ieee.org/document/476557" rel="noopener noreferrer" target="_blank">paper on the machine</a>, published in 1996 in <a 
href="https://ieeexplore.ieee.org/document/476557" rel="noopener noreferrer" target="_blank"><em>IEEE Annals of the History of Computing</em></a> and available in the <a href="https://ieeexplore.ieee.org/document/6461145" rel="noopener noreferrer" target="_blank">IEEE Xplore Digital Library</a>, is a valuable source of technical information.</p><p>“<a href="https://www.computer.org/csdl/magazine/an/2006/02/man2006020004/13rRUB6Sq2p" rel="noopener noreferrer" target="_blank">The Second Life of ENIAC</a>,” an article published in the <em>Annals</em> in 2006, covers a lesser-known chapter in the machine’s history, about how it evolved from a static system—configured and reconfigured through laborious cable plugging—into a precursor of today’s stored-program computers.</p><p>A classic <a href="https://www2.seas.gwu.edu/~mfeldman/csci1030/summer08/eniac2.pdf" rel="noopener noreferrer" target="_blank">history paper on ENIAC</a> was published in the December 1995 <a href="https://technologyandsociety.org/" rel="noopener noreferrer" target="_blank"><em>IEEE Technology and Society Magazine</em></a>.</p><p>The IEEE <a href="https://spectrum.ieee.org/ebooks/ieee-anniversary-book/" target="_self"><em>Inspiring Technology: 34 Breakthroughs</em></a> book, published in 2023, features an ENIAC chapter.</p><h2>The women behind ENIAC</h2><p>One of the most remarkable aspects of the ENIAC story is the pivotal role women played, according to the book <a href="https://www.amazon.com/Proving-Ground-Untold-Programmed-Computer/dp/1538718286" rel="noopener noreferrer" target="_blank"><em>Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer</em></a>, highlighted in an <a href="https://spectrum.ieee.org/the-women-behind-eniac" target="_self">article</a> in <a href="https://spectrum.ieee.org/the-institute/" target="_self"><em>The Institute</em></a>. There were no “programmers” at that time; only schematics existed for the computer. 
Six women, known as the ENIAC 6, became the machine’s first programmers.</p><p>The ENIAC 6 were <a href="https://en.wikipedia.org/wiki/Kathleen_Antonelli" rel="noopener noreferrer" target="_blank">Kathleen Antonelli</a>, <a href="https://en.wikipedia.org/wiki/Jean_Bartik" rel="noopener noreferrer" target="_blank">Jean Bartik</a>, <a href="https://ethw.org/Betty_Holberton" rel="noopener noreferrer" target="_blank">Betty Holberton</a>, <a href="https://ethw.org/Marlyn_Meltzer" rel="noopener noreferrer" target="_blank">Marlyn Meltzer</a>, <a href="https://ethw.org/Frances_Spence" rel="noopener noreferrer" target="_blank">Frances Spence</a>, and <a href="https://ethw.org/Ruth_Teitelbaum" rel="noopener noreferrer" target="_blank">Ruth Teitelbaum</a>.</p><p>“These six women found out what it took to run this computer, and they really did incredible things,” a Penn professor, <a href="https://www.cis.upenn.edu/~mitch/" rel="noopener noreferrer" target="_blank">Mitch Marcus</a>, said in a <a href="https://www.phillyvoice.com/70-years-ago-six-philly-women-eniac-digital-computer-programmers/" rel="noopener noreferrer" target="_blank">2016 PhillyVoice article</a>. Marcus teaches in Penn’s computer and information science department.</p><p>In 1997 all six female programmers were <a href="https://www.witi.com/halloffame/298369/ENIAC-Programmers-Kathleen---/" rel="noopener noreferrer" target="_blank">inducted</a> into the <a href="https://www.witi.com/halloffame/" rel="noopener noreferrer" target="_blank">Women in Technology International Hall of Fame</a>, in Los Angeles.</p><p>Two other women contributed to the programming. 
Goldstine wrote ENIAC’s five-volume manual, and <a href="https://en.wikipedia.org/wiki/Kl%C3%A1ra_D%C3%A1n_von_Neumann" rel="noopener noreferrer" target="_blank">Klára Dán von Neumann</a>, wife of <a href="https://ethw.org/John_von_Neumann" rel="noopener noreferrer" target="_blank">John von Neumann</a>, helped train the programmers and debug and verify their code.</p><p>To honor the <a href="https://www.computer.org/volunteering/awards/pioneer/about-women-of-eniac" rel="noopener noreferrer" target="_blank">women of ENIAC</a>, the <a href="https://www.computer.org/" rel="noopener noreferrer" target="_blank">IEEE Computer Society</a> established the annual <a href="https://www.computer.org/volunteering/awards/pioneer" rel="noopener noreferrer" target="_blank">Computer Pioneer Award</a> in 1981. Eckert and Mauchly were among the award’s first recipients. In 2008 Bartik was honored with the award. Nominations are open to all professionals, regardless of gender.</p><h2>An ENIAC replica</h2><p>Last year a group of 80 autistic students, ages 12 to 16, from <a href="https://www.psacademyarizona.com/" rel="noopener noreferrer" target="_blank">PS Academy Arizona</a>, in Gilbert, <a href="https://www.msn.com/en-us/news/technology/how-80-autistic-students-built-an-amazing-replica-of-the-ginormous-eniac-computer/ar-AA1UMKKE" rel="noopener noreferrer" target="_blank">recreated the ENIAC</a> using 22,000 custom parts. It took the students almost six months to assemble it.</p><p>A ceremony was held in January to display their creation. The full-scale <a href="https://www.theregister.com/2026/01/21/eniac_model_build/" rel="noopener noreferrer" target="_blank">replica features</a> actual-size panels made from layered cardboard and wood. All electronic components are simulated; none are electrically active. 
The machine, illuminated by hundreds of LEDs, is accompanied by a soundtrack that simulates the deep hum of ENIAC’s transformers and the rhythmic clicking of relays.</p><h3></h3><br/><img alt="A white woman using a computer-adding machine in the 1940s. The device resembles a bulky typewriter and prints large stacks of paper with tabulated answers." class="rm-shortcode" data-rm-shortcode-id="fea0fb9da93e75542fd5b85964251c33" data-rm-shortcode-name="rebelmouse-image" id="36a08" loading="lazy" src="https://spectrum.ieee.org/media-library/a-white-woman-using-a-computer-adding-machine-in-the-1940-u2019s-the-device-resembles-a-bulky-typewriter-and-prints-large-stack.jpg?id=65315890&width=980"/><h3></h3><br/><p>“Every major unit (accumulators, function tables, initiator, and master programmer) is present and placed exactly where it was on the original machine,” Tom Burick, the teacher who mentored the project, said at the ceremony.</p><p>The replica, still on display at the school, is expected to be moved to a more permanent spot in the near future.</p><h2>ENIAC’s legacy</h2><p>ENIAC’s significance is both technical and symbolic. Technically, it marks the beginning of the chain of innovations that created today’s computational infrastructure. Symbolically, it made governments, militaries, universities, and industry view computation as a tool for improvement and for innovative applications that had previously been impossible. 
It marked a tectonic shift in the way humans approach problem-solving, modeling, and scientific reasoning.</p><p>The ENIAC legacy heralded the computer age, transforming not only science and industry but also education, research, and human communication and interaction.</p><p>As Eckert is reported to have said, “There are two epochs in computer history: Before ENIAC and After ENIAC.”</p><h2>Coevolution of programming languages</h2><p>The remarkable evolution of computer hardware during the past 80 years has been sparked by advances in programming languages—the essential drivers of computing.</p><p>From the manual rewiring of ENIAC to the orchestration of intelligent, distributed systems, programming languages have steadily evolved to make computers more powerful, expressive, and accessible.</p><h3>Lessons From Computing’s Remarkable Journey</h3><br/><p>Computing history teaches us that flexibility, accessibility, collaboration, sound governance, and forward thinking are essential for sustained technological progress. In a <a href="https://cacm.acm.org/blogcacm/what-past-computing-breakthroughs-teach-us-about-ai/" target="_blank">recent <em>Communications of the ACM</em> article</a>, <a href="https://www.linkedin.com/in/richa28gupta/" target="_blank">Richa Gupta</a> identified four historic shifts that led to computing’s rapid, transformative progress:</p><ol><li>Programmable machines taught us that flexibility is key; technologies that adapt and are repurposed scale better.</li><li>The Internet showed that connection and standard protocols drive explosive growth but also bring new risks such as data security issues, invasion of privacy, and misuse.</li><li>Personal computers illustrated that accessibility and usability matter more than raw power. 
When nonexperts can use a tool easily, adoption rises.</li><li>The open-source movement revealed that collaborative innovation accelerates growth and helps spot problems early.</li></ol><h2>Predictions for computing in the decades ahead</h2><p>The evolution of computing will continue along multiple trajectories, with the emphasis moving from generalization to specialization (for AI, graphics, security, and networking), from monolithic system design to modular integration, and from performance-centric metrics alone to energy efficiency and sustainability as primary objectives.</p><p>Increasingly, security will be built into hardware by design. Computing paradigms will expand beyond traditional deterministic models to embrace probabilistic, approximate, and hybrid approaches for certain tasks.</p><p>Those developments will usher in a new era of computing and a new class of applications.</p><p><em>This article was updated on 22 April 2026.</em></p>]]></description><pubDate>Wed, 18 Mar 2026 18:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/eniac-80-ieee-milestone</guid><category>Ieee-history</category><category>Eniac</category><category>Computing</category><category>Computers</category><category>History-of-technology</category><category>Type-ti</category><dc:creator>San Murugesan</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/wide-view-of-men-and-women-working-on-the-eniac-in-the-1940s-all-four-walls-from-floor-to-ceiling-host-different-pieces-of-t.jpg?id=65315846&amp;width=980"></media:content></item><item><title>Wanted: Europe’s Missing Cloud Provider</title><link>https://spectrum.ieee.org/europe-cloud-sovereignty</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/abstract-pixelation-of-the-european-union-s-flag.jpg?id=65298877&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p>Looming over the <a href="https://spectrum.ieee.org/free-space-optical-link-taara" target="_self">internet lasers</a> and <a href="https://www.pcmag.com/news/hands-on-with-oukitel-wp63-mwc-2026" rel="noopener noreferrer" target="_blank">firestarting phones</a> that companies were touting at Mobile World Congress in Barcelona this month was a more nebulous but much larger announcement: a pan-European cloud called <a href="https://www.euronews.com/next/2026/03/03/europe-unites-to-build-sovereign-cloud-and-ai-infrastructure-to-stop-reliance-on-us" rel="noopener noreferrer" target="_blank">EURO-3C</a>.</p><p>EURO-3C’s backers—Spanish telecoms giant Telefónica, dozens of other European companies, and the European Commission (EC)—aim to fill a gap. U.S.-based cloud giants dominate in the EU, and European policymakers want their growing portfolio of digital government services on a “sovereign cloud” under full EU control.</p><p>But the EU lacks a real equivalent to the likes of AWS or Microsoft Azure. Indeed, any effort to build one will inevitably run up against the same U.S. cloud giants.</p><p>Just four U.S.-based hyperscalers—AWS, Microsoft Azure, Google Cloud, and IBM Cloud—together account for <a href="https://www.ceps.eu/disk-backup-to-the-cloud-is-a-gaping-vulnerability-in-the-eus-security/" rel="noopener noreferrer" target="_blank">some 70 percent of EU cloud services</a>. This is despite the fact that the 2018 U.S. <a href="https://en.wikipedia.org/wiki/CLOUD_Act" rel="noopener noreferrer" target="_blank">CLOUD Act</a> allows U.S. federal law enforcement—at least in theory—to compel U.S.-based firms to hand over data that’s stored abroad. 
</p><h2>Who Do You Trust?</h2><p>But those hypothetical risks to digital services have become more real as transatlantic relations have soured under the second Trump administration. The U.S. has <a href="https://www.cbc.ca/news/politics/greenland-us-trump-canada-governor-general-mary-simon-9.7119074" rel="noopener noreferrer" target="_blank">openly threatened</a> to invade an EU member state and <a href="https://euobserver.com/19745/eu-rejects-us-claims-of-censorship-over-tech-rules-after-visa-bans/" rel="noopener noreferrer" target="_blank">sanctioned</a> a European Commissioner for passing legislation the White House dislikes. </p><p>After the White House sanctioned the Netherlands-based International Criminal Court in February 2025, Court staffers <a href="https://apnews.com/article/icc-trump-sanctions-karim-khan-court-a4b4c02751ab84c09718b1b95cbd5db3" rel="noopener noreferrer" target="_blank">claimed</a> Microsoft locked the Court’s chief prosecutor out of his email (Microsoft<a href="https://www.politico.eu/article/microsoft-did-not-cut-services-international-criminal-court-president-american-sanctions-trump-tech-icc-amazon-google/" rel="noopener noreferrer" target="_blank"> has denied this</a>). Around the same time, the U.S. <a href="https://kyivindependent.com/us-threatens-to-shut-off-starlink-if-ukraine-wont-sign-minerals-deal-sources-tell-reuters/" rel="noopener noreferrer" target="_blank">reportedly threatened</a> to sever EU ally Ukraine’s access to crucial Starlink satellite internet as leverage during trade negotiations.</p><p>“The geopolitical risk isn’t just the most extreme form of a doomsday ‘kill switch’ where Washington turns off Europe’s internet,” says <a href="https://fermigier.com/" rel="noopener noreferrer" target="_blank">Stéfane Fermigier</a> of <a href="https://euro-stack.com/pages/about" rel="noopener noreferrer" target="_blank">EuroStack</a>, an industry group that supports European digital independence. 
“It is the selective degradation of services and a total lack of retaliatory leverage.”</p><p>What, then, is the EU to do? <a href="https://blog.datacenter-paris.com/2026/01/24/liste-des-datacenters-secnumcloud-en-france-hebergement-souverain-pour-donnees-sensibles/" rel="noopener noreferrer" target="_blank">France</a> offers an example. Even before 2025, France implemented <a href="https://www.spscommerce.com/eur/blog/what-is-secnumcloud-and-does-my-company-need-to-qualify/" rel="noopener noreferrer" target="_blank">harsh restrictions</a> on non-EU cloud providers in public services—providers must locate data in the EU, rely on EU-based staff, and may not have majority non-EU shareholders. Now, EU policymakers are following France’s lead.</p><p>In October 2025, the EC issued a two-part <a href="https://commission.europa.eu/document/09579818-64a6-4dd5-9577-446ab6219113_en" rel="noopener noreferrer" target="_blank">framework</a> for judging cloud providers bidding for public-sector contracts. In the first part, the framework lays out a sort of sovereignty ladder. The more that a provider is subject to EU law, the higher its sovereignty level on this ladder. Any prospective bidder must first meet a certain level, depending on the tender.</p><p>Qualifying bidders then move to the second part, where their “sovereignty” is scored in more detail. Using too much proprietary software; over-relying on supply chains from outside the EU; having non-EU support staff; liability to non-EU laws like the CLOUD Act: All hurt a bidder’s score. </p><p>The framework was created for <a href="https://commission.europa.eu/news-and-media/news/commission-moves-forward-cloud-sovereignty-eur-180-million-tender-2025-10-10_en" rel="noopener noreferrer" target="_blank">one tender</a>, but observers say it sets a major precedent. 
Cloud providers bidding for state contracts across Europe may need to follow it, and it may influence legislation on both national and EU-wide levels.</p><h2>A Question of Scale</h2><p>Who, then, will receive high marks? At the moment, the answer is not simple. The EU cloud scene is quite fragmented. Numerous modest EU providers offer “sovereign cloud” services—such as Deutsche Telekom’s T-Systems, OVHcloud, and Scaleway—but <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/poi3.358" rel="noopener noreferrer" target="_blank">none are on the scale</a> of AWS or Google Cloud.</p><p>Inertia is on the side of the U.S. cloud giants, which can invest in their infrastructure and services on a far grander scale than their European counterparts. Some U.S. providers <a href="https://aws.amazon.com/blogs/security/aws-european-sovereign-cloud-achieves-first-compliance-milestone-soc-2-and-c5-reports-plus-seven-iso-certifications/" rel="noopener noreferrer" target="_blank">now offer</a> cloud services they say comply with the Commission’s “cloud sovereignty” demands.</p><p>Some European observers, like EuroStack, <a href="https://euro-stack.com/blog/2025/10/cloud-sovereignty-framework-comparison" rel="noopener noreferrer" target="_blank">say</a> such promises are hollow so long as a provider’s parent company is subject to the likes of the CLOUD Act and loopholes in the Commission’s process remain open. An AWS spokesperson told <em>IEEE Spectrum</em> it had not disclosed any non-U.S. enterprise or government data to the U.S. government under the CLOUD Act; a Google spokesperson said that its most sensitive EU offerings “are subject to local laws, not U.S. law.”</p><p>Even if a project like EURO-3C can offer a large-scale alternative, the U.S. cloud giants have another sort of inertia. 
Many developers—and many public purchasers of their services—will need convincing to leave behind a familiar environment.</p><p>“If you look at AWS, you look at Google, they’ve created some super technology. It’s very convenient, it’s easy to use,” says <a href="https://nl.linkedin.com/in/arnoldjuffer" rel="noopener noreferrer" target="_blank">Arnold Juffer</a>, CEO of the Netherlands-based cloud provider <a href="https://nebul.com/" rel="noopener noreferrer" target="_blank">Nebul</a>. “Once you’re in that platform, in that ecosystem, it’s very hard to get out.”</p><p><a href="https://bisi.org.uk/martyna-chmura" rel="noopener noreferrer" target="_blank">Martyna Chmura</a>, an analyst at the Bloomsbury Intelligence and Security Institute, a London-based think tank, sees some EU developers taking a mixed approach. “Many organizations are already moving toward multicloud setups, using European or sovereign providers for sensitive workloads while still relying on hyperscalers for certain services,” she says.</p><p>In that case, the EU’s top-down demands may encourage developers to use EU providers for sensitive applications—like government services, transport, autonomous vehicles, and some industrial automation—even if it’s inconvenient in the short term, or if it causes even more fragmentation of the EU cloud scene. “Running systems across different platforms can increase integration costs and make security and data governance more complicated. 
In some cases, organisations could lose some of the efficiency and cost advantages that come from using large hyperscale platforms,” Chmura says.</p><p>“Overall, the EU appears willing to accept some of these trade-offs,” Chmura says.</p>]]></description><pubDate>Tue, 17 Mar 2026 11:00:06 +0000</pubDate><guid>https://spectrum.ieee.org/europe-cloud-sovereignty</guid><category>Cloud-computing</category><category>Data-security</category><category>Data-privacy</category><dc:creator>Rahul Rao</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/abstract-pixelation-of-the-european-union-s-flag.jpg?id=65298877&amp;width=980"></media:content></item><item><title>With Nvidia Groq 3, the Era of AI Inference Is (Probably) Here</title><link>https://spectrum.ieee.org/nvidia-groq-3</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-man-in-all-black-presents-in-front-of-a-large-screen-which-compares-a-large-rectangular-chip-labelled-rubin-gpu-with-a-square.jpg?id=65298681&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p>This week, over 30,000 people are descending upon San Jose, Calif., to attend <a href="https://www.nvidia.com/gtc/" rel="noopener noreferrer" target="_blank">Nvidia GTC</a>, the so-called Super Bowl of AI—a nickname that may or may not have been coined by Nvidia. At the main event, Nvidia CEO Jensen Huang took the stage to announce (among other things) a new line of <a href="https://spectrum.ieee.org/nvidia-rubin-networking" target="_self">next-generation Vera Rubin</a> chips that represent a first for the GPU giant: a chip designed specifically to handle AI inference. The Nvidia Groq 3 language processing unit (LPU) incorporates intellectual property Nvidia <a href="https://groq.com/newsroom/groq-and-nvidia-enter-non-exclusive-inference-technology-licensing-agreement-to-accelerate-ai-inference-at-global-scale" rel="noopener noreferrer" target="_blank">licensed</a> from the startup Groq last Christmas Eve for US $20 billion.</p><p>“Finally, AI is able to do productive work, and therefore the inflection point of inference has arrived,” Huang told the crowd. “AI now has to think. In order to think, it has to inference. AI now has to do; in order to do, it has to inference.”</p><p>Training and inference tasks have distinct computational requirements. While training can be done on huge amounts of data at the same time and can take weeks, inference must be run on a user’s query when it comes in. Unlike training, inference doesn’t require running costly <a href="https://spectrum.ieee.org/what-is-deep-learning/backpropagation" target="_self">backpropagation</a>. 
With inference, the most important thing is low latency—users expect the chatbot to answer quickly, and for thinking or reasoning models, inference runs many times before the user even sees an output.</p><p>Over the past few years, inference-specific chip startups have experienced a sort of Cambrian explosion, with different companies exploring distinct approaches to speed up the task. The startups include <a href="https://www.d-matrix.ai/" rel="noopener noreferrer" target="_blank">D-matrix</a>, with digital in-memory compute; <a href="https://www.etched.com/" rel="noopener noreferrer" target="_blank">Etched</a>, with an ASIC for transformer inference; <a href="https://rain.ai/" rel="noopener noreferrer" target="_blank">RainAI</a>, with neuromorphic chips; <a href="https://en100.enchargeai.com/" rel="noopener noreferrer" target="_blank">EnCharge</a>, with analog in-memory compute; <a href="https://www.tensordyne.ai/" rel="noopener noreferrer" target="_blank">Tensordyne</a>, with logarithmic math to make AI computations more efficient; <a href="https://furiosa.ai/" rel="noopener noreferrer" target="_blank">FuriosaAI</a>, with hardware optimized for tensor operations rather than vector-matrix multiplication; and others.</p><p>Late last year, it looked like Nvidia had picked one of the winners among the crop of inference chips when it announced its deal with Groq. The Nvidia Groq 3 LPU reveal came a mere two and a half months later, highlighting the urgency of the growing inference market.</p><h2>Memory bandwidth and data flow</h2><p>Groq’s approach to accelerating inference relies on interleaving processing units with memory units on the chip. Instead of relying on high-bandwidth memory (HBM) situated next to GPUs, it leans on SRAM memory integrated within the processor itself. 
This design greatly simplifies the flow of data through the chip, allowing it to proceed in a streamlined, linear fashion.</p><p>“The data actually flows directly through the SRAM,” <a href="https://www.linkedin.com/in/markheaps/" rel="noopener noreferrer" target="_blank">Mark Heaps</a> said at the Supercomputing conference in 2024. Heaps was a chief technology evangelist at Groq at the time and is now director of developer marketing at Nvidia. “When you look at a multicore GPU, a lot of the instruction commands need to be sent off the chip, to get into memory and then come back in. We don’t have that. It all passes through in a linear order.”</p><p>Using SRAM allows that linear data flow to happen exceptionally fast, leading to the low latency required for inference applications. “The LPU is optimized strictly for that extreme low latency token generation,” says <a href="https://www.linkedin.com/in/ian-buck-19201315/" rel="noopener noreferrer" target="_blank">Ian Buck</a>, VP and general manager of hyperscale and high-performance computing at Nvidia.</p><p>Comparing the Rubin GPU and Groq 3 LPU side by side highlights the difference. The Rubin GPU has access to a whopping 288 gigabytes of HBM and is capable of 50 quadrillion floating-point operations per second (petaFLOPS) of 4-bit computation. The Groq 3 LPU contains a mere 500 megabytes of SRAM memory and is capable of 1.2 petaFLOPS of 8-bit computation. On the other hand, while the Rubin GPU has a memory bandwidth of 22 terabytes per second, at 150 TB/s the Groq 3 LPU is nearly seven times as fast. The lean, speed-focused design is what allows the LPU to excel at inference.</p><p>The new inference chip underscores the ongoing trend of AI adoption, which shifts the computational load from just building ever bigger models to actually using those models at scale. 
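</p><p>Why does that bandwidth gap matter so much? For bandwidth-bound token generation, a simple roofline estimate caps throughput at memory bandwidth divided by the bytes read per token. Here is a sketch using the figures above plus a hypothetical 70-billion-parameter model at one byte per weight (and ignoring capacity limits: the LPU’s 500 MB of SRAM means real deployments shard a model across many chips):</p>

```python
# Roofline sketch: for memory-bandwidth-bound decode, tokens/second is
# capped at (memory bandwidth) / (bytes read per generated token).
# The 70B-parameter, 1-byte-per-weight model is an assumption for
# illustration; the bandwidth figures are those quoted in the article.

model_bytes = 70e9    # hypothetical 70B-parameter model, 8-bit weights
hbm_bw = 22e12        # Rubin GPU memory bandwidth, bytes/s
sram_bw = 150e12      # Groq 3 LPU memory bandwidth, bytes/s

print(hbm_bw / model_bytes)    # ~314 tokens/s ceiling
print(sram_bw / model_bytes)   # ~2143 tokens/s ceiling
```

The same arithmetic is why SRAM-heavy designs can shine at decode even with far less raw compute.
<p>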
“Nvidia’s announcement validates the importance of SRAM-based architectures for large-scale inference, and no one has pushed SRAM density further than d-Matrix,” says d-Matrix CEO Sid Sheth. He’s betting that data center customers will want a variety of processors for inference. “The winning systems will combine different types of silicon and fit easily into existing data centers alongside GPUs.”</p><p>Inference-only chips may not be the only solution. Late last week, <a href="https://press.aboutamazon.com/aws/2026/3/aws-and-cerebras-collaboration-aims-to-set-a-new-standard-for-ai-inference-speed-and-performance-in-the-cloud" rel="noopener noreferrer" target="_blank">Amazon Web Services</a> said that it will deploy a new kind of inferencing system in its data centers. The system is a combination of AWS’s Trainium <a href="https://spectrum.ieee.org/amazon-ai" target="_self">AI accelerator</a> and <a href="https://spectrum.ieee.org/cerebras-chip-cs3" target="_self">Cerebras Systems’ third-generation computer, the CS-3</a>, which is built around the <a href="https://spectrum.ieee.org/cerebrass-giant-chip-will-smash-deep-learnings-speed-barrier" target="_self">largest single chip</a> ever made. The two-part system is meant to take advantage of a technique called inference disaggregation. It separates inference into two parts—processing the prompt, called prefill, and generating the output, called decode. Prefill is inherently parallel, computationally intensive, and doesn’t need much memory bandwidth, while decode is a more serial process that needs a lot of memory bandwidth. Cerebras tackles the memory-bandwidth problem by building 44 GB of SRAM on its chip, connected by a 21 PB/s network. 
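</p><p>The prefill/decode split can be sketched in a few lines. This is a toy illustration of the scheduling idea only (the token math is made up, and neither AWS’s nor Nvidia’s software works this way): prefill touches the whole prompt in one parallel-friendly batch, while decode loops token by token, rereading its growing cache at every step.</p>

```python
# Toy sketch of inference disaggregation. Prefill is one batched pass
# over the prompt (compute-bound, parallel-friendly); decode generates
# tokens serially, rereading the whole cache each step (bandwidth-bound).

def prefill(prompt_tokens):
    # One pass over every prompt token at once; the result stands in
    # for the model's KV cache.
    return [sum(map(ord, tok)) % 997 for tok in prompt_tokens]

def decode(kv_cache, n_tokens):
    out = []
    for _ in range(n_tokens):
        nxt = sum(kv_cache) % 997   # touches every cached entry
        kv_cache.append(nxt)        # cache grows with each new token
        out.append(nxt)
    return out

cache = prefill(["the", "era", "of", "inference"])  # e.g. on the GPU rack
tokens = decode(cache, 5)                           # e.g. on the LPU rack
print(len(tokens))  # 5
```

In a disaggregated system, the two functions would run on different hardware, with the cache handed off between them.
<p>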
</p><p><span>Nvidia, too, intends to take advantage of inference disaggregation in its new compute rack, called the Nvidia <a href="https://developer.nvidia.com/blog/inside-nvidia-groq-3-lpx-the-low-latency-inference-accelerator-for-the-nvidia-vera-rubin-platform/" target="_blank">Groq 3 LPX</a>. Each tray within the rack will house 8 Groq 3 LPUs. The LPX will split the inference task with a <a href="https://www.nvidia.com/en-us/data-center/vera-rubin-nvl72/" target="_blank">Vera Rubin NVL72</a>, Nvidia’s existing GPU and CPU rack.</span> The prefill and the more computationally intensive parts of the decode are done on Vera Rubin, while the final part is done on the Groq 3 LPU, leveraging the strengths of each chip. “We’re in volume production now,” Huang said.</p><p><strong>Correction on 4/8/26: </strong>a previous version of this article incorrectly stated that the Nvidia Groq 3 LPX contains a Vera Rubin chip in each tray. In fact, each tray contains 8 Groq 3 LPUs and no Vera Rubins, but the whole rack is designed to work in concert with an NVL72 rack, which houses Vera Rubin chips. </p><p><em>This article appears in the May 2026 print issue as “<span>The Era of AI Inference Is Almost Here</span>.”</em></p>]]></description><pubDate>Mon, 16 Mar 2026 21:04:33 +0000</pubDate><guid>https://spectrum.ieee.org/nvidia-groq-3</guid><category>Inferencing</category><category>Nvidia</category><category>Gpus</category><category>Processors</category><category>Ai</category><dc:creator>Dina Genkina</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-man-in-all-black-presents-in-front-of-a-large-screen-which-compares-a-large-rectangular-chip-labelled-rubin-gpu-with-a-square.jpg?id=65298681&amp;width=980"></media:content></item><item><title>Intel Demos Chip to Compute With Encrypted Data</title><link>https://spectrum.ieee.org/fhe-intel</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/overhead-view-of-intel-s-computing-chip-called-heracles.jpg?id=65174073&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><div class="ieee-summary"><h2>Summary</h2><ul><li><a href="#fhe">Fully homomorphic encryption (FHE)</a> allows computing on encrypted data without decryption, but it’s currently slow on standard CPUs and GPUs.</li><li>Intel’s Heracles chip accelerates FHE tasks up to <a href="#faster">5,000 times as fast as</a> top Intel server CPUs.</li><li>Heracles uses a <a href="#heracles">3-nanometer FinFET technology and high-bandwidth memory</a>, enabling efficient encrypted computing at scale.</li><li>Startups and Intel are <a href="#commercial">racing to commercialize FHE accelerators</a>, with potential applications in AI and secure data processing.</li></ul></div><p><span>Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer?</span></p><p>There is a way to do computing on encrypted data without ever having it decrypted. It’s called <a href="https://spectrum.ieee.org/homomorphic-encryption" target="_blank">fully homomorphic encryption,</a> or FHE. But there’s a rather large catch. It can take thousands—even tens of thousands—of times as long to compute on today’s CPUs and GPUs as simply working with the decrypted data.</p><p>So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. 
Last month at the <a href="https://www.isscc.org/" target="_blank">IEEE International Solid-State Circuits Conference</a> (ISSCC) in San Francisco, <a href="https://www.intel.com/content/www/us/en/homepage.html" target="_blank">Intel</a> demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.</p><p>Startups are racing to beat Intel and each other to commercialization. But <a href="https://www.linkedin.com/in/sanu-mathew-4073742/" target="_blank">Sanu Mathew,</a> who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. “Heracles is the first hardware that works at scale,” he says.</p><p>The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel’s most advanced, 3-nanometer FinFET technology. And it’s flanked inside a liquid-cooled package by two 24-gigabyte <a href="https://spectrum.ieee.org/dram-shortage" target="_blank">high-bandwidth memory</a> chips—a configuration usually seen only in GPUs for training AI.</p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/how-to-compute-with-data-you-cant-see" target="_blank">How to Compute with Data You Can’t See</a></p><p>In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. 
There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side.</p><p>On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn’t something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.</p><p>Looking back on the five-year journey to bring the Heracles chip to life, <a href="https://www.linkedin.com/in/ro-cammarota-a226b817/" target="_blank">Ro Cammarota</a>, who led the project at Intel until last December and is now at the University of California, Irvine, says, “We have proven and delivered everything that we promised.”</p><h2>FHE Data Expansion</h2><p class="rm-anchors" id="fhe">FHE is fundamentally a mathematical transformation, sort of like the Fourier transform. It encrypts data using a quantum-computer-proof algorithm, but, crucially, uses corollaries to the mathematical operations usually used on unencrypted data. These corollaries achieve the same ends on the encrypted data.</p><p>One of the main things holding such secure computing back is the explosion in the size of the data once it’s encrypted for FHE, <a href="https://www.linkedin.com/in/anupamgolder/" target="_blank">Anupam Golder</a>, a research scientist at Intel’s circuits research lab, told engineers at ISSCC. “Usually, the size of cipher text is the same as the size of plain text, but for FHE it’s orders of magnitude larger,” he said.</p><p>While the sheer volume is a big problem, the kind of computing you need to do with that data is also an issue. FHE is all about very large numbers that must be computed with precision. While a CPU can do that, it’s very slow going—integer addition and multiplication take roughly 10,000 times as many clock cycles in FHE. Worse still, CPUs aren’t built to do such computing in parallel. 
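</p><p>One standard trick for both problems, precision beyond the machine word and the lack of parallelism, is residue-number-system (RNS) arithmetic, widely used in FHE software: a large integer is stored as its remainders modulo several pairwise-coprime word-size primes, and each residue channel is computed independently. A minimal sketch (the moduli below are common 30-to-31-bit primes chosen for illustration; Intel has not published its exact parameters):</p>

```python
# Residue-number-system sketch: a big integer becomes a tuple of
# residues modulo pairwise-coprime primes that each fit in 32 bits.
# Adds and multiplies then run independently (and in parallel) per
# residue; the Chinese remainder theorem recovers the full result as
# long as it stays below the product of the moduli.

MODULI = (2147483647, 2013265921, 998244353)  # three well-known primes

def to_rns(x):
    return tuple(x % m for m in MODULI)

def mul_rns(a, b):
    # Each 32-bit channel multiplies on its own; no carries cross over.
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

x, y = 12345678901, 98765432109
assert mul_rns(to_rns(x), to_rns(y)) == to_rns(x * y)
```

In spirit, this is the kind of chunked, carry-free parallelism that a hardware accelerator can replicate massively.
<p>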
Although GPUs excel at parallel operations, precision is not their strong suit. (In fact, from generation to generation, GPU designers have devoted more and more of the chip’s resources to <a href="https://spectrum.ieee.org/nvidia-gpu" target="_blank">computing less-and-less-precise numbers</a>.)</p><p>FHE also requires some oddball operations with names like “twiddling” and “automorphism,” and it relies on a compute-intensive noise-cancelling process called bootstrapping. None of these things are efficient on a general-purpose processor. So, while clever algorithms and libraries of software cheats have been developed over the years, the need for a hardware accelerator remains if FHE is going to tackle large-scale problems, says Cammarota.</p><h2>The Labors of Heracles</h2><p class="rm-anchors" id="heracles">Heracles was initiated under a <span>Defense Advanced Research Projects Agency</span> (DARPA) program five years ago to accelerate FHE using purpose-built hardware. It was developed as “a whole system-level effort that went all the way from theory and algorithms down to the circuit design,” says Cammarota.</p><p>Among the first problems was how to compute with numbers that were larger than even the 64-bit words that are today a CPU’s most precise. There are ways to break up these gigantic numbers into chunks of bits that can be calculated independently of each other, providing a degree of parallelism. Early on, the Intel team made a big bet that they would be able to make this work in smaller, 32-bit chunks, yet still maintain the needed precision. This decision gave the Heracles architecture some speed and parallelism, because the 32-bit arithmetic circuits are considerably smaller than 64-bit ones, explains Cammarota.</p><p>At Heracles’s heart are 64 compute cores—called tile-pairs—arranged in an eight-by-eight grid. 
These are what are called single instruction multiple data (SIMD) compute engines designed to do the polynomial math, twiddling, and other things that make up computing in FHE and to do them in parallel. An on-chip 2D mesh network connects the tiles to each other with wide 512-byte buses.</p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/homomorphic-encryption-llm" target="_blank">Tech Keeps Chatbots From Leaking Your Data</a></p><p>Important to making encrypted computing efficient is feeding those huge numbers to the compute cores quickly. The sheer amount of data involved meant linking 48 GB of expensive high-bandwidth memory to the processor with 819-GB-per-second connections. Once on the chip, data musters in 64 megabytes of cache memory—somewhat more than an Nvidia <a href="https://spectrum.ieee.org/nvidias-next-gpu-shows-that-transformers-are-transforming-ai" target="_blank">Hopper-generation GPU</a>. From there it can flow through the array at 9.6 terabytes per second by hopping from tile-pair to tile-pair.</p><p>To ensure that computing and moving data don’t get in each other’s way, Heracles runs three synchronized streams of instructions simultaneously, one for moving data onto and off of the processor, one for moving data within it, and a third for doing the math, Golder explained.</p><p class="rm-anchors" id="faster">It all adds up to some massive speedups, according to Intel. Heracles—operating at 1.2 gigahertz—takes just 39 microseconds to do FHE’s critical math transformation, a 2,355-fold improvement over an Intel Xeon CPU running at 3.5 GHz. Across seven key operations, Heracles was 1,074 to 5,547 times as fast.</p><p>The differing ranges have to do with how much data movement is involved in the operations, explains Mathew. 
“It’s all about balancing the movement of data with the crunching of numbers,” he says.</p><h2>FHE Competition</h2><p class="rm-anchors" id="commercial">“It’s very good work,” <a href="https://www.linkedin.com/in/kurt-rohloff/" target="_blank">Kurt Rohloff</a>, chief technology officer at FHE software firm <a href="https://dualitytech.com/platform/technology-fully-homomorphic-encryption/" target="_blank">Duality Technology</a>, says of the Heracles results. Duality was part of a team that developed a competing accelerator design under the same DARPA program that brought forth Intel’s Heracles. “When Intel starts talking about scale, that usually carries quite a bit of weight.”</p><p>Duality’s focus is less on new hardware than on software products that do the kind of encrypted queries that Intel demonstrated at ISSCC. At the scale in use today “there’s less of a need for [specialized] hardware,” says Rohloff. “Where you start to need hardware is emerging applications around deeper machine-learning-oriented operations like neural nets, LLMs, or semantic search.”</p><p>Last year, Duality demonstrated an <a href="https://spectrum.ieee.org/homomorphic-encryption-llm" target="_self">FHE-encrypted language model called BERT</a>. Like more famous LLMs such as ChatGPT, BERT is a transformer model. However, it’s only one-tenth the size of even the most compact LLMs.</p><p><a href="https://www.linkedin.com/in/barrus/" target="_blank">John Barrus</a>, vice president of product at Dayton, Ohio–based <a href="https://niobiummicrosystems.com/" target="_blank">Niobium Microsystems</a>, an FHE chip startup <a href="https://www.galois.com/" target="_blank">spun out</a> of another DARPA competitor, agrees that encrypted AI is a key target of FHE chips. 
“There are a lot of smaller models that, even with FHE’s data expansion, will run just fine on accelerated hardware,” he says.</p><p>With no stated commercial plans from Intel, Niobium expects its chip to be “the world’s first commercially viable FHE accelerator, designed to enable encrypted computations at speeds practical for real-world cloud and AI infrastructure.” Although it hasn’t announced when a commercial chip will be available, last month the startup revealed that it had inked a deal worth 10 billion South Korean won (US $6.9 million) with Seoul-based chip design firm <a href="https://semifive.com/" target="_blank">Semifive</a> to develop the FHE accelerator for fabrication using Samsung Foundry’s 8-nanometer-process technology.</p><p>Other startups including <a href="https://cornami.com/" target="_blank">Cornami</a>,  <a href="https://www.fabriccryptography.com/" target="_blank">Fabric Cryptography</a>, and <a href="https://optalysys.com/" target="_blank">Optalysys</a> have been working on chips to accelerate FHE. Optalysys CEO <a href="https://optalysys.com/people/" target="_blank">Nick New</a> says Heracles hits about the level of speedup you could hope for using an all-digital system. “We’re looking at pushing way past that digital limit,” he says. His company’s approach is to use the physics of a photonic chip to do FHE’s compute-intensive transform steps. That photonics chip is on its seventh generation, he says, and among the next steps is to 3D integrate it with custom silicon to do the nontransform steps and coordinate the whole process. A full 3D-stacked commercial chip could be ready in two or three years, says New.</p><p>While competitors develop their chips, so will Intel, says Mathew. It will be improving on how much the chip can accelerate computations by fine-tuning the software. It will also be trying out more massive FHE problems, and exploring hardware improvements for a potential next generation. 
“This is like the first microprocessor…the start of a whole journey,” says Mathew.</p>]]></description><pubDate>Tue, 10 Mar 2026 13:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/fhe-intel</guid><category>Privacy</category><category>Intel</category><category>Encryption</category><category>Homomorphic-encryption</category><category>Hardware-acceleration</category><category>Isscc</category><dc:creator>Samuel K. Moore</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/overhead-view-of-intel-s-computing-chip-called-heracles.jpg?id=65174073&amp;width=980"></media:content></item><item><title>Finite-Element Approaches to Transformer Harmonic and Transient Analysis</title><link>https://content.knowledgehub.wiley.com/solving-harmonic-and-transient-challenges-in-transformers-using-integrateds-faraday/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/logo-of-integrated-engineering-software-with-pixelated-geometric-design-and-text.png?id=65106417&width=980"/><br/><br/><p>Explore structured finite-element methodologies for analyzing transformer behavior under harmonic and transient conditions — covering modelling, solver configuration, and result validation techniques.</p><p><strong>What Attendees will Learn</strong><span></span></p><ol><li>How FEM enables pre-fabrication performance evaluation — Assess magnetic field distribution, current behavior, and turns-ratio accuracy through simulation rather than physical testing.</li><li><span>How harmonic analysis uncovers saturation and imbalance — Identify high-flux regions and current asymmetries that analytical methods may not capture.</span></li><li><span>How transient simulations characterize dynamic response — Examine time-domain current waveforms, inrush behavior, and multi-cycle stabilization.</span></li><li><span>How modelling choices affect simulation fidelity — Understand the impact of coil definitions, winding configurations, solver type, and material models on accuracy.</span></li></ol><p><span><a href="https://content.knowledgehub.wiley.com/solving-harmonic-and-transient-challenges-in-transformers-using-integrateds-faraday/" target="_blank">Download this free whitepaper now!</a><br/></span></p>]]></description><pubDate>Tue, 10 Mar 2026 10:00:03 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/solving-harmonic-and-transient-challenges-in-transformers-using-integrateds-faraday/</guid><category>Type-whitepaper</category><category>Transformers</category><category>Finite-element-analysis</category><category>Harmonic</category><dc:creator>Integrated Engineering Software</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/65106417/origin.png"></media:content></item><item><title>Entomologists Use a Particle Accelerator to Image Ants at 
Scale</title><link>https://spectrum.ieee.org/3d-scanning-particle-accelerator-antscan</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/four-grey-3d-models-of-ants-shown-up-close-in-high-detail-two-larger-ants-tower-above-two-smaller-ones-in-the-front-the-larges.jpg?id=65150255&width=1245&height=700&coordinates=0%2C62%2C0%2C63"/><br/><br/><p>Move over, Pixar. The ants that animators once morphed into googly-eyed caricatures in films such as <em>A Bug’s Life</em> and <em>Antz</em> just received a meticulously precise anatomical reboot.</p><p><a href="https://doi.org/10.1038/s41592-026-03005-0" rel="noopener noreferrer" target="_blank">Writing today in <em>Nature Methods</em></a>, an international team of entomologists, accelerator physicists, computer scientists, and biological-imaging specialists describe a new 3D atlas of ant morphology.</p><p>Dubbed Antscan, the platform features micrometer-resolution reconstructions that lay bare not only the <a href="https://spectrum.ieee.org/festo-bionic-ants-and-butterflies" target="_self">insects’ armored exoskeletons</a> but also their muscles, nerves, digestive tracts, and needlelike stingers poised at the ready.</p><p>Those high-resolution images—spanning 792 species across 212 genera and covering the bulk of described ant diversity—are now available free of charge through an <a href="http://www.antscan.info" rel="noopener noreferrer" target="_blank">interactive online portal</a>, where anyone can rotate, zoom, and virtually “dissect” the insects from a laptop.</p><p>“Antscan is exciting!” says <a href="https://experts.mcmaster.ca/people/curric7" rel="noopener noreferrer" target="_blank">Cameron Currie</a>, an evolutionary biologist at McMaster University in Hamilton, Ont., Canada, who was not involved in the research. 
“It provides an outstanding resource for comparative work across ants.”</p><h2>Digital Access to Natural History Collections</h2><p>It also provides broader access to natural history collections.</p><p>No longer must these vast archives of preserved life be confined to drawers and jars in museums scattered around the world, available only to specialists able to visit in person. All these specimens can now be explored digitally by anyone with an internet connection, adding fresh scientific value to museum holdings.</p><p>“The more people that access and work with the stuff in our museums, whether it’s physically or digitally, the greater value they add,” says <a href="https://www.floridamuseum.ufl.edu/blackburn-lab/personnel/principal-investigator/" rel="noopener noreferrer" target="_blank">David Blackburn</a>, the curator of herpetology at the Florida Museum of Natural History who, like Currie, was not involved in the research.</p><p>Some of those people may be professional myrmecologists (scientists who specialize in the study of ants) and fourmiculture (ant-farming) enthusiasts. But others may be schoolteachers, video-game designers, tattoo artists, or curious members of the public.</p><p>“It is an extremely rich dataset that can be used for a number of different applications in science, but also for the arts and outreach and education,” says <a href="https://www.oist.jp/image/julian-katzke" rel="noopener noreferrer" target="_blank"><span>Julian Katzke</span></a>, an entomologist at the National Museum of Natural History in Washington, D.C.</p><p>Card-carrying members of IEEE should find plenty to explore in Antscan as well, says <a href="https://entomology.umd.edu/people/evan-economo" target="_blank">Evan Economo</a>, a biodiversity scientist at the University of Maryland in College Park who, along with Katzke, co-led the project. 
<span>With the dataset now publicly available and standardized at scale, “I would really like to see these big libraries of organismal form one day be useful for people in robotics and engineering, so they can mine these data for new kinds of biomechanical designs,” he says.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Various 3D renderings of an ant soldier. First, the outward appearance. Followed by cross sectional slices of its body. One shows the internal structures of the ant, with space predominantly occupied by muscles. Another shows the same view, but with muscles removed, which highlights the digestive tract and nervous system. Lastly, zoomed-in renderings inside the ant's brain, gut and sting apparatus are shown with labels." class="rm-shortcode" data-rm-shortcode-id="672fbae791e49ff86839c1593eccc48d" data-rm-shortcode-name="rebelmouse-image" id="33b51" loading="lazy" src="https://spectrum.ieee.org/media-library/various-3d-renderings-of-an-ant-soldier-first-the-outward-appearance-followed-by-cross-sectional-slices-of-its-body-one-show.jpg?id=65150295&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">These renderings reveal different structures within the body of an army ant (<i>Eciton hamatum</i>) subsoldier, based on Antscan data.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://doi.org/10.1038/s41592-026-03005-0" target="_blank">Katzke et al.</a></small></p><h2>Advancements in Ant Imaging Technology</h2><p>Researchers have been digitizing natural history collections for years: photographing drawers of pinned specimens, building surface-level models from overlapping image stacks, and using computed tomography (CT) to scan select species one at a time. 
But those efforts are typically slow, piecemeal, and often limited to external features.</p><p>To capture entire organisms, inside and out, Economo and his team—then based at the Okinawa Institute of Science and Technology in Japan and including former lab members Katzke and <a href="https://www.museumfuernaturkunde.berlin/en/research/research/dynamics-nature/center-integrative-biodiversity-discovery" target="_blank">Francisco Hita Garcia</a> (now at the Museum für Naturkunde in Berlin)—built an automated imaging pipeline that effectively turned a particle accelerator into an anatomical assembly line.</p><p>They scoured museum collections around the world for ant specimens—workers, queens, and males alike—and sent some 2,200 preserved samples through a pair of micro-CT beamlines at the Karlsruhe Institute of Technology’s synchrotron <a href="https://www.ibpt.kit.edu/KIT_Light_Source.php" target="_blank">light source facility</a> in Germany.</p><p>There, biological imaging specialist <a href="https://www.ips.kit.edu/2890_5177.php" target="_blank">Thomas van de Kamp</a> oversaw the operation, as intense X-ray beams swept through each specimen and high-speed detectors recorded thousands of projection images from multiple angles. 
Robotic handlers moved vials of alcohol-preserved ants into position, one after another, all in a matter of days.</p><p>Software then reconstructed the 200-plus terabytes of generated data into 3D volumes, with neural networks helping to automate the identification and analysis of anatomical structures.</p><p>Similar large-scale digitization efforts—such as the <a href="https://www.floridamuseum.ufl.edu/overt/" target="_blank">openVertebrate Project</a>, led by the Florida Museum of Natural History’s Blackburn, which involved <a href="https://academic.oup.com/bioscience/article/74/3/169/7615104" target="_blank">scanning thousands</a> of birds, fish, mammals, reptiles, and amphibians—have begun transforming how biologists study anatomy. But applying conventional micro-CT at comparable scale to insects, which are smaller and harder to scan at useful resolutions, required a leap in speed and throughput.</p><p>That’s where the synchrotron came in. By harnessing a particle accelerator to generate extraordinarily bright, coherent X-rays, the team was able to capture high-resolution internal anatomy in seconds, without the lengthy staining or other preprocessing steps often required for soft-tissue contrast in standard lab scanners.</p><p>“It is an impressive piece of work,” says <a href="https://www.nms.ac.uk/profile/dr-vladimir-blagoderov" target="_blank">Vladimir Blagoderov</a>, principal curator of invertebrates at the National Museums Scotland in Edinburgh, who was not involved in the research. 
“This project adds an industrial dimension to CT scanning by combining robotics, standardized sampling, automated image-processing pipelines, and machine learning.”</p><p>The sheer taxonomic breadth of the Antscan dataset now makes it possible to spot patterns across the entire ant family tree, as Economo and his colleagues have already demonstrated.</p><p>In a separate paper published last December, for example, the researchers drew on the newly generated scans to measure how much ants invest in their outer protective casing. Reporting in <em>Science Advances</em>, they showed that species with lighter, less costly cuticles <a href="https://www.science.org/doi/10.1126/sciadv.adx8068" target="_blank"><span><span>tend to form larger colonies and diversify more rapidly</span></span></a> over evolutionary time.</p><p>In their latest study, the Antscan team turned to a different evolutionary question: the distribution of a biomineral “armor” layer <a href="https://www.nature.com/articles/s41467-020-19566-3" target="_blank">first described</a> by Currie and his colleagues in 2020 in a Central American leaf-cutter ant. A quick sweep through the Antscan database revealed that this coating—which absorbs X-rays and is visible as a bright sheath over the cuticle—is not an oddity confined to one species.</p><p>Instead, it is common among fungus-farming ants, the evolutionary lineage from which leaf-cutting ants arose roughly 20 million years ago, but largely absent in most other branches of the ant tree. (Currie’s team independently confirmed the pattern using X-ray diffraction, a technique that can precisely reveal a material’s mineral composition, as the group <a href="https://www.biorxiv.org/content/10.64898/2026.02.07.704540v1" target="_blank">reported last month in a preprint</a> posted to <em>bioRxiv</em>.)</p><p>Those are only early demonstrations of what the database can do, though. 
And with AI tools increasingly capable of parsing enormous, information-rich data troves, the real analytical power of Antscan may still lie ahead, says <a href="https://agsci.colostate.edu/agbio/gillette-museum/museum-staff/" target="_blank">Marek Borowiec</a>, director of the C.P. Gillette Museum of Arthropod Diversity at Colorado State University, who has chronicled <a href="https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13901" target="_blank">the rise of <span>deep learning tools</span></a> in ecology and evolution.</p><p>“The full advantage of this dataset will be realized when these methods are deployed,” he says.</p><h2>Transforming Morphology with Antscan</h2><p>The ambitions behind Antscan extend well beyond ant biology. Economo and his colleagues see it as a blueprint for digitizing, standardizing, and scaling anatomy itself.<br/></p><p>Just as <a href="https://spectrum.ieee.org/whole-genome-sequencing" target="_self">large-scale sequencing projects</a> and genomic databases transformed the study of DNA over the past two decades, they hope Antscan will catalyze a comparable shift for morphology. <span>“This is kind of like having a genome for shape,” Economo says.</span></p><p>Museum collections house millions of alcohol-preserved insects and other small invertebrates—beetles, flies, wasps, spiders, crustaceans—many of them representing rare or extinct populations. 
Following the Antscan playbook, each could be converted into a high-resolution library of “<a data-linked-post="2655774779" href="https://spectrum.ieee.org/climate-models" target="_blank">digital twins.</a>”</p><p>In each case, synchrotron micro-CT would offer a rapid way to peer inside fragile specimens without cutting them open, capturing both hard exoskeleton and soft tissue in exquisite detail across vast swaths of biological diversity.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="a20837327321eee6ad3fab098e4da2e3" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/neYh_KITjGE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://www.youtube.com/watch?v=neYh_KITjGE" target="_blank">Antscan/YouTube</a></small></p><p><span>Beam time at major synchrotron facilities is scarce and fiercely competitive, a practical bottleneck for any effort to digitize biodiversity at scale, notes National Museums Scotland’s Blagoderov. What’s more, “even once the scans exist, the downstream burden is nontrivial: M</span><span>oving, storing, and processing hundreds of terabytes of data can become a bottleneck in its own right,” he says.</span></p><p>But if access can be secured and the computational infrastructure scaled to match, such efforts could transform natural history museums from static repositories into dynamic digital biomes.</p><p>That transformation may prove especially important at a time of accelerating species loss on Earth. 
By capturing organisms in extraordinary detail, resources like Antscan create a permanent, high-resolution record of life’s architecture—an anatomical time capsule that can be queried and revisited long after fragile specimens degrade or wild populations vanish.</p><p>And should Pixar ever greenlight <em>A Bug’s Life 2 </em>(suggested title: <em>Even Buggier</em>),<em> </em>the studio’s character designers may not need to take much artistic license at all. Thanks to a particle accelerator and a small cadre of dedicated scientists, the reference models are already in hand—rendered not in animation software but in micrometer-perfect anatomical form.</p><p><em>This article appears in the May 2026 print issue as “<span>Particle Accelerator Helps Digitize Ant Anatomies</span>.”</em></p>]]></description><pubDate>Thu, 05 Mar 2026 10:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/3d-scanning-particle-accelerator-antscan</guid><category>Machine-learning</category><category>Insects</category><category>Particle-accelerator</category><category>Computed-tomography</category><dc:creator>Elie Dolgin</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/four-grey-3d-models-of-ants-shown-up-close-in-high-detail-two-larger-ants-tower-above-two-smaller-ones-in-the-front-the-larges.jpg?id=65150255&amp;width=980"></media:content></item><item><title>Watershed Moment for AI–Human Collaboration in Math</title><link>https://spectrum.ieee.org/ai-proof-verification</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/overlapping-circles-in-red-blue-green-and-yellow-gradients-on-a-beige-background.png?id=65559644&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p><span>When Ukrainian mathematician </span><a href="https://people.epfl.ch/maryna.viazovska?lang=en" target="_blank">Maryna Viazovska</a><span> received a </span><a href="https://www.mathunion.org/imu-awards/fields-medal/fields-medals-2022" target="_blank">Fields Medal</a><span>—widely regarded as the Nobel Prize for mathematics—in July 2022,</span><span> it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. <a href="https://www.math.inc/sphere-packing" target="_blank">Today</a>, in </span><span>a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s abilities to <a href="https://spectrum.ieee.org/ai-math-benchmarks" target="_blank">assist</a> with mathemat</span><span>ical research. </span></p><p><span>“These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc <a href="https://ai.princeton.edu/news/2025/ai-lab-welcomes-associate-research-scholars" target="_blank">Liam Fowl</a>, who was not involved in the work.</span></p><p>In her Fields Medal–winning research, Viazovska had tackled two versions of the sphere-packing problem, which asks: How densely can identical circles, spheres, et cetera, be packed in <em>n</em>-dimensional space? In two dimensions, the honeycomb is the best solution. In three dimensions, spheres stacked in a pyramid are optimal. But after that, it becomes exceedingly difficult to find the best solution, and to prove that it is in fact the best. 
</p><p>In 2016, Viazovska solved the problem in two cases. By using powerful mathematical functions known as (quasi-)modular forms, she proved that a symmetric arrangement known as E<sub>8</sub> is the <a href="https://annals.math.princeton.edu/articles/keyword/sphere-packing" target="_blank">best 8-dimensional packing</a>, and soon after proved with collaborators that another sphere packing called the <a href="https://annals.math.princeton.edu/2017/185-3/p08" target="_blank">Leech lattice is best in 24 dimensions</a>. Though seemingly abstract, this result has potential to help solve everyday problems related to dense sphere packing, including <a data-linked-post="2650280110" href="https://spectrum.ieee.org/novel-error-correction-code-opens-a-new-approach-to-universal-quantum-computing" target="_blank">error-correcting codes</a> used by smartphones and space probes.</p><p>The proofs were verified by the mathematical community and deemed correct, leading to the Fields Medal recognition. But formal verification—the ability of a proof to be verified by a computer—is another beast altogether. Since 2022, much <a href="https://cacm.acm.org/research/formal-reasoning-meets-llms-toward-ai-for-mathematics-and-verification/" target="_blank">progress</a> has been made in AI-assisted formal proof verification. </p><h2>Serendipity leads to formalization project</h2><p>A few years later, a chance meeting in Lausanne, Switzerland, between third-year undergraduate <a href="https://thefundamentaltheor3m.github.io/" target="_blank">Sidharth Hariharan</a> and Viazovska would reignite her interest in sphere-packing proofs. Though still very early in his career, Hariharan was already becoming adept at formalizing proofs.</p><p>“Formal verification of a proof is like a rubber stamp,” Fowl says. 
“It’s a kind of bona fide certification that you know your statements of reasoning are correct.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A woman with braided hair and a green dress looks into the distance." class="rm-shortcode" data-rm-shortcode-id="bf78890ea597cff48693535651423f0a" data-rm-shortcode-name="rebelmouse-image" id="ee2ee" loading="lazy" src="https://spectrum.ieee.org/media-library/a-woman-with-braided-hair-and-a-green-dress-looks-into-the-distance.jpg?id=65559313&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Maryna Viazovska, now the Chair of Number Theory at École Polytechnique Fédérale de Lausanne, in Switzerland, was awarded the Fields Medal in 2022 for solving the sphere-packing problem in eight and 24 dimensions.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Fred Merz/EPFL</small></p><p>Hariharan told Viazovska how he had been using the process of formalizing proofs to learn and really understand mathematical concepts. In response, Viazovska expressed an interest in formalizing her proofs, largely out of curiosity. From this, in March 2024 the <a href="https://thefundamentaltheor3m.github.io/Sphere-Packing-Lean/" target="_blank">Formalising Sphere Packing in Lean</a> project was born. <span>Lean is a popular programming language and “proof assistant” that allows mathematicians to write proofs that are then verified for absolute correctness by a computer.</span></p><p>A collaboration formed to write a human-readable “blueprint” that could be used to map the 8-dimensional proof’s various constituents and figure out which of them had and had not been formalized and/or proven, and then prove and formalize those missing elements in Lean. </p><p><span>“We had been building the project’s repository for about 15 months when we enabled public access in June 2025,” recalls Hariharan, now a first-year Ph.D. 
student at Carnegie Mellon University. “Then, in late October we heard from Math, Inc. for the first time.”</span></p><h2>The AI speedup</h2><p><a href="https://www.math.inc/" target="_blank">Math, Inc.</a> is a startup developing Gauss, an AI specifically designed to automatically formalize proofs. “It’s a particular kind of language model called a reasoning agent that’s meant to interleave both traditional natural-language reasoning and fully formalized reasoning,” explains <a href="https://jesse-michael-han.github.io/" target="_blank">Jesse Han</a>, Math, Inc. CEO and cofounder. “So it’s able to conduct literature searches, call up tools, and use a computer to write down Lean code, take notes, spin up verification tooling, run the Lean compiler, et cetera.”</p><p>Math, Inc. first hit the headlines when it announced that Gauss had completed a <a href="https://mathstodon.xyz/@tao/111847680248482955" target="_blank">Lean formalization of the strong <span>prime number theorem</span> (PNT)</a> in three weeks last summer, a task that Fields Medalist <a href="https://terrytao.wordpress.com/" target="_blank">Terence Tao</a> and <a href="https://sites.math.rutgers.edu/~alexk/" target="_blank">Alex Kontorovich</a> had been working on. Similarly, Math, Inc. contacted Hariharan and colleagues to say that Gauss had proven several facts related to their sphere-packing project.</p><p>“They told us that they had finished 30 ‘sorrys,’ which meant that they proved 30 intermediate facts that we wanted proved,” explains Hariharan. A proportion of these sorrys were shared with the project team and merged with their own work. “One of them helped us identify a typo in our project, which we then fixed,” adds Hariharan. “So it was a pretty fruitful collaboration.”</p><h2>From 8 to 24 dimensions</h2><p>But then, radio silence followed. Math, Inc. appeared to lose interest. However, while Hariharan and colleagues continued their labor of love, Math, Inc. 
was building a new and improved version of Gauss. “We made a research breakthrough sometime mid-January that produced a much stronger version of Gauss,” says Han. “This new version reproduced our three-week PNT result in two to three days.”</p><p>Days later, the new Gauss was steered back to the sphere-packing formalization. Working from the invaluable preexisting blueprint and work that Hariharan and collaborators had shared, Gauss not only autoformalized the 8-dimensional case, but also found and fixed a typo in the published paper, all in the space of five days.</p><p>“When they reached out to us in late January saying that they finished it, to put it very mildly, we were very surprised,” says Hariharan. “But at the end of the day, this is technology that we’re very excited about, because it has the capability to do great things and to assist mathematicians in remarkable ways.”</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A laptop with sphere packing code in the foreground, with an autumn sunset at Carnegie Mellon in the background. 
" class="rm-shortcode" data-rm-shortcode-id="1dd0742602809b330ce11552ae9d6d3f" data-rm-shortcode-name="rebelmouse-image" id="898fd" loading="lazy" src="https://spectrum.ieee.org/media-library/a-laptop-with-sphere-packing-code-in-the-foreground-with-an-autumn-sunset-at-carnegie-mellon-in-the-background.jpg?id=65106120&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Hariharan was working on sphere-packing proof verification as the sun was setting behind Carnegie Mellon’s Hamerschlag Hall.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Sidharth Hariharan</small></p><p>The 8-dimensional sphere-packing proof formalization alone, <a href="https://leanprover.zulipchat.com/#narrow/channel/113486-announce/topic/Sphere.20Packing.20Milestone/with/575354368" target="_blank">announced on February 23</a>, represents a watershed moment for autoformalization and AI–human collaboration. But <a href="https://math.inc/sphere-packing" target="_blank">today, Math, Inc. revealed</a> an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—more than 200,000 lines of Lean code—in just two weeks.</p><p>There are commonalities between the 8- and 24-dimensional cases in terms of the foundational theory and overall architecture of the proof, meaning some of the code from the 8-dimensional case could be refactored and reused. However, Gauss had no preexisting blueprint to work from this time. 
“And it was actually significantly more involved than the 8-dimensional case, because there was a lot of missing background material that had to be brought on line surrounding many of the properties of the Leech lattice, in particular its uniqueness,” explains Han.</p><p>Though the 24-dimensional case was an automated effort, both Han and Hariharan acknowledge the many contributions from humans that laid the foundations for this achievement, regarding it as a collaborative endeavor overall between humans and AI.</p><p>But for Han, it represents even more: the beginning of a revolutionary transformation in mathematics, where extremely large-scale formalizations are commonplace. “A programmer used to be someone who punched holes into cards, but then the act of programming became separated from whatever material substrate was used for recording programs,” he concludes. “I think the end result of technology like this will be to free mathematicians to do what they do best, which is to dream of new mathematical worlds.”</p><p><em>This article appears in the May 2026 print issue as “<span>A Watershed Moment for AI-Human Collaboration in Math</span>.”</em></p>]]></description><pubDate>Mon, 02 Mar 2026 18:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/ai-proof-verification</guid><category>Mathematics</category><category>Ai-reasoning</category><category>Large-language-models</category><category>Ai</category><dc:creator>Benjamin Skuse</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/overlapping-circles-in-red-blue-green-and-yellow-gradients-on-a-beige-background.png?id=65559644&amp;width=980"></media:content></item><item><title>How Quantum Data Can Teach AI to Do Better Chemistry</title><link>https://spectrum.ieee.org/quantum-chemistry</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-a-human-head-in-profile-with-a-spiral-upon-which-human-figures-are-walking-overlaid-on-an-image-of-an-atom.png?id=63744636&width=1245&height=700&coordinates=0%2C371%2C0%2C371"/><br/><br/><p><strong>Sometimes a visually compelling</strong> metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named <a href="https://sse.tulane.edu/john-p-perdew-phd" rel="noopener noreferrer" target="_blank">John P. Perdew</a> came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “<a href="https://pubs.aip.org/aip/acp/article-abstract/577/1/1/573973/Jacob-s-ladder-of-density-functional?redirectedFrom=fulltext" rel="noopener noreferrer" target="_blank">Jacob’s Ladder</a>.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.”</p><p>Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder, using increasingly more intensive mathematics and compute power, descriptions of atomic reality became more precise. And at the very top, nature was perfectly described via impossibly intensive computation—something like what God might see.</p><p>With this metaphor in mind, we propose to extend Jacob’s Ladder beyond Perdew’s version, to encompass <em><em>all</em></em> computational approaches to simulating the behavior of electrons. 
And instead of climbing rung by rung toward an unreachable summit, we have an idea to <em><em>bend</em></em> the ladder so that even the very top lies within our grasp. Specifically, we at Microsoft envision a hybrid approach. It starts with using quantum computers to generate exquisitely accurate data about the behavior of electrons—data that would be prohibitively expensive to compute classically. This quantum-generated data will then train AI models running on classical machines, which can predict the properties of materials with remarkable speed. By combining quantum accuracy with AI-driven speed, we can ascend Jacob’s Ladder faster, designing new materials with novel properties and at a fraction of the cost.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Graph comparing computational cost and simulation accuracy: Classical, DFT, Coupled, Quantum+AI." class="rm-shortcode" data-rm-shortcode-id="d3175e47f1efce66722968991732929d" data-rm-shortcode-name="rebelmouse-image" id="13461" loading="lazy" src="https://spectrum.ieee.org/media-library/graph-comparing-computational-cost-and-simulation-accuracy-classical-dft-coupled-quantum-ai.png?id=65172435&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">At the base of Jacob’s Ladder are classical models that treat atoms as simple balls connected by springs—fast enough to handle millions of atoms over long times but with the lowest precision. Moving up along the black line, semiempirical methods add some quantum mechanical calculations. Next are approximations based on Hartree-Fock (HF) and density functional theory (DFT), which include full quantum behavior of individual electrons but model their interactions in an averaged way. The greater accuracy requires significant computing power, which limits them to simulating molecules with no more than a few hundred atoms. 
At the top are coupled-cluster and full configuration interaction (FCI) methods—exquisitely accurate but, at the moment, restricted to tiny molecules or subsets of electrons due to the large computational costs involved. Quantum computing can bend the accuracy-versus-cost curve at the top [orange line], making highly accurate calculations feasible for large systems. AI, trained on this quantum-accurate data, can flatten this curve [purple line], enabling rapid predictions for similar systems at a fraction of the cost of classical computing.</small></p><p>In our approach, the base of Jacob’s Ladder still starts with classical models that treat atoms as simple balls connected by springs—models that are fast enough to handle millions of atoms over long times, but with the lowest precision. As we ascend the ladder, some quantum mechanical calculations are added to semiempirical methods. Eventually, we’ll get to the full quantum behavior of individual electrons but with their interactions modeled in an averaged way; this greater accuracy requires significant compute power, which means you can only simulate molecules of no more than a few hundred atoms. At the top will be the most computationally intensive methods—prohibitively expensive on classical computers but tractable on quantum computers.</p><p>In the coming years, quantum computing and AI will become critical tools in the pursuit of new materials science and chemistry. When combined, their forces will multiply. We believe that by using quantum computers to train AI on quantum data, the result will be hyperaccurate AI models that can reach ever higher rungs of computational complexity without the prohibitive computational costs.</p><p>This powerful combination of quantum computing and AI could unlock unprecedented advances in chemical discovery, materials design, and our understanding of complex reaction mechanisms. Chemical and materials innovations already play a vital—if often invisible—role in our daily lives. 
These discoveries shape the modern world: new drugs to help treat disease more effectively, improving health and extending life expectancy; everyday products like toothpaste, sunscreen, and cleaning supplies that are safe and effective; cleaner fuels and longer-lasting batteries; improved fertilizers and pesticides to boost global food production; and biodegradable plastics and recyclable materials to shrink our environmental footprint. In short, chemical discovery is a behind-the-scenes force that greatly enhances our everyday lives.</p><p>The potential is vast. Anywhere AI is already in use, this new quantum-enhanced AI could drastically improve results. These models could, for instance, scan for previously unknown catalysts that could fix atmospheric carbon and so mitigate climate change. They could discover novel chemical reactions to turn waste plastics into useful raw materials and remove toxic “forever chemicals” from the environment. They could uncover new battery chemistries for safer, more compact energy storage. They could supercharge drug discovery for personalized medicine.</p><p>And that would just be the beginning. We believe quantum-enhanced AI will open up new frontiers in materials science and reshape our ability to understand and manipulate matter at its most fundamental level. Here’s how.</p><h2>How Quantum Computing Will Revolutionize Chemistry</h2><p>To understand how quantum computing and AI could help bend Jacob’s Ladder, it’s useful to look at the classical approximation techniques that are currently used in chemistry. In atoms and molecules, electrons interact with one another in complex ways called electron correlations. These correlations are crucial for accurately describing chemical systems.
Many computational methods, such as <a href="https://www.synopsys.com/glossary/what-is-density-functional-theory.html" target="_blank">density functional theory</a> (DFT) or the <a href="https://insilicosci.com/hartree-fock-method-a-simple-explanation/" target="_blank">Hartree-Fock method</a>, simplify these interactions by replacing the intricate correlations with averaged ones, assuming that each electron moves within an average field created by all other electrons. Such approximations work in many cases, but they can’t provide a full description of the system.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="a woman stirs a white powder inside a glove box." class="rm-shortcode" data-rm-shortcode-id="c0e1bdeb8e874740173f3f02c62eb308" data-rm-shortcode-name="rebelmouse-image" id="40c54" loading="lazy" src="https://spectrum.ieee.org/media-library/a-woman-stirs-a-white-powder-inside-a-glove-box.jpg?id=63745112&width=980"/> </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="The second shows white powder in test tubes." class="rm-shortcode" data-rm-shortcode-id="5ac7a16946b97de61047d14b9ff28eb7" data-rm-shortcode-name="rebelmouse-image" id="2b1dd" loading="lazy" src="https://spectrum.ieee.org/media-library/the-second-shows-white-powder-in-test-tubes.jpg?id=63745094&width=980"/> </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="shows a gloved hand holding a silvery disc close to an electronic apparatus." 
class="rm-shortcode" data-rm-shortcode-id="f3e77cc9b1b4502b2fab5ed6a3cf10f5" data-rm-shortcode-name="rebelmouse-image" id="98787" loading="lazy" src="https://spectrum.ieee.org/media-library/shows-a-gloved-hand-holding-a-silvery-disc-close-to-an-electronic-apparatus.jpg?id=63745089&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A joint project between Microsoft and Pacific Northwest National Laboratory used AI and high-performance computing to identify potential materials for battery electrolytes. The most promising were synthesized [top and middle] and tested [bottom] at PNNL. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Dan DeLong/Microsoft</small></p><p>Electron correlation is particularly important in systems where the electrons are strongly interacting—as in materials with unusual electronic properties, like high-temperature superconductors—or when there are many possible arrangements of electrons with similar energies—such as compounds containing certain metal atoms that are crucial for catalytic processes.</p><p>In these cases, the simplified approach of DFT or Hartree-Fock breaks down, and more sophisticated methods are needed. As the number of possible electron configurations increases, we quickly reach an “exponential wall” in computational complexity, beyond which classical methods become infeasible.</p><p>Enter the quantum computer. Unlike classical bits, which are either on or off, qubits can exist in superpositions—effectively coexisting in multiple states simultaneously. This should allow them to represent many electron configurations at once, mirroring the complex quantum behavior of correlated electrons. 
Because quantum computers operate on the same principles as the electron systems they will simulate, they will be able to accurately simulate even strongly correlated systems—where electrons are so interdependent that their behavior must be calculated collectively.</p><h2>AI’s Role in Advancing Computational Chemistry</h2><p>At present, even the computationally cheap methods at the bottom of Jacob’s Ladder are slow, and the ones higher up the ladder are slower still. AI models have emerged as powerful accelerators of such calculations because they can serve as emulators that predict simulation outcomes without running the full calculations. The models can cut the time it takes to solve problems up and down the ladder by orders of magnitude.</p><p>This acceleration opens up entirely new scales of scientific exploration. In 2023 and 2024, we collaborated with researchers at <a href="https://www.pnnl.gov/" target="_blank">Pacific Northwest National Laboratory</a> (PNNL) on using <a href="https://arxiv.org/abs/2401.04070" rel="noopener noreferrer" target="_blank">advanced AI models</a> to evaluate over 32 million potential battery materials, looking for safer, cheaper, and more environmentally friendly options. This enormous pool of candidates would have taken about 20 years to explore using traditional methods. And yet, within less than a week, <a href="https://spectrum.ieee.org/ai-battery-material" target="_blank">that list was narrowed</a> to 500,000 stable materials and then to 800 highly promising candidates. Throughout the evaluation, the AI models replaced expensive and time-consuming quantum chemistry calculations, in some cases delivering insights half a million times as fast as would otherwise have been the case.</p><p>We then used high-performance computing (HPC) to validate the most promising materials with DFT and AI-accelerated molecular dynamics simulations.
The PNNL team then spent about nine months synthesizing and testing one of the candidates—a solid-state electrolyte that pairs sodium, which is cheap and abundant, with other materials and contains 70 percent less lithium than conventional lithium-ion designs. The team built a prototype solid-state battery that they tested over a range of temperatures.</p><p>This potential battery breakthrough isn’t unique. AI models have also dramatically accelerated research in <a href="https://science.nasa.gov/earth/ai-open-science-climate-change/" rel="noopener noreferrer" target="_blank">climate science</a>, <a href="https://www.sciencedirect.com/science/article/pii/S3050585225000217" rel="noopener noreferrer" target="_blank">fluid dynamics</a>, <a href="https://www.simonsfoundation.org/2024/08/26/astrophysicists-use-ai-to-precisely-calculate-universes-settings/" rel="noopener noreferrer" target="_blank">astrophysics</a>, <a href="https://www.nature.com/articles/s44222-025-00349-8" rel="noopener noreferrer" target="_blank">protein design</a>, and <a href="https://www.nature.com/articles/d41586-025-00602-5" rel="noopener noreferrer" target="_blank">chemical and biological discovery</a>. By replacing traditional simulations that can take days or weeks to run, AI is reshaping the pace and scope of scientific research across disciplines.</p><p>However, these AI models are only as good as the quality and diversity of their training data. Whether sourced from high-fidelity simulations or carefully curated experimental results, these data must accurately represent the underlying physical phenomena to ensure reliable predictions. Poor or biased data can lead to misleading outcomes. By contrast, high-quality, diverse datasets—such as those from full-accuracy quantum simulations—enable models to generalize across systems and uncover new scientific insights.
This is the promise of using quantum computing for training AI models.</p><h2>How to Accelerate Chemical Discovery</h2><p>The real breakthrough will come from strategically combining quantum computing’s and AI’s unique strengths. AI already excels at learning patterns and making rapid predictions. Quantum computers, which are still being scaled up to be practically useful, will excel at capturing electron correlations that classical computers can only approximate. So if you train classical models on quantum-generated data, you’ll get the best of both worlds: the accuracy of quantum delivered at the speed of AI.</p><p>As we learned from the Microsoft-PNNL collaboration on electrolytes, AI models alone can greatly speed up chemical discovery. In the future, quantum-accurate AI models will tackle even bigger challenges. Consider the basic discovery process, which we can think of as a funnel. Scientists begin with a vast pool of candidate molecules or materials at the wide-mouthed top, narrowing them down using filters based on desired properties—such as boiling point, conductivity, viscosity, or reactivity. Crucially, the effectiveness of this screening process depends heavily on the accuracy of the models used to predict these properties. Inaccurate predictions can create a “leaky” funnel, where promising candidates are mistakenly discarded or poor ones are mistakenly advanced.</p><p>Quantum-accurate AI models will dramatically improve the precision of chemical-property predictions. They’ll be able to help identify “first-time right” candidates, sending only the most promising molecules to the lab for synthesis and testing—which will save both time and cost.</p><p>Another key aspect of the discovery process is understanding the chemical reactions that govern how new substances are formed and behave. 
Think of these reactions as a network of roads winding through a mountainous landscape, where each road represents a possible reaction step, from starting materials to final products. The outcome of a reaction depends on how quickly it travels down each path, which in turn is determined by the energy barriers along the way—like mountain passes that must be crossed. To find the most efficient route, we need accurate calculations of these barrier heights, so that we can identify the lowest passes and chart the fastest path through the reaction landscape.</p><p>Even small errors in estimating these barriers can lead to incorrect predictions about which products will form. Case in point: A slight miscalculation in the energy barrier of an environmental reaction could mean the difference between labeling a compound a “forever chemical” or one that safely degrades over time.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="70e0b9b540bc0e061b38252e88243293" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/X1aWMYukuUk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> </p><p>Accurate modeling of reaction rates is also essential for designing catalysts—substances that speed up and steer reactions in desired directions. Catalysts are crucial in industrial chemical production, carbon capture, and biological processes, among many other things. Here, too, quantum-accurate AI models can play a transformative role by providing the high-fidelity data needed to predict reaction outcomes and design better catalysts.</p><p>Once trained, these AI models, powered by quantum-accurate data, will revolutionize computational chemistry by delivering quantum-level precision. 
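The barrier-height sensitivity described above follows directly from the Arrhenius relation, k = A·exp(−Ea/RT). A quick sketch with illustrative numbers (the barrier values and prefactor here are assumptions, not figures from the article) shows how a small error in a barrier shifts a predicted rate severalfold:

```python
from math import exp

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # room temperature, K

def arrhenius_rate(ea_kj_mol: float, prefactor: float = 1e13) -> float:
    """Arrhenius rate constant k = A * exp(-Ea / RT).
    The prefactor A is an illustrative placeholder value."""
    return prefactor * exp(-ea_kj_mol * 1_000 / (R * T))

# A mere 6 kJ/mol error in an 80 kJ/mol barrier shifts the
# predicted room-temperature rate by roughly a factor of 11:
ratio = arrhenius_rate(80.0) / arrhenius_rate(86.0)
print(ratio)
```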
And once the AI models, which run on classical computers, are trained with quantum computing data, researchers will be able to run high-accuracy simulations on laptops or desktop computers, rather than relying on massive supercomputers or future quantum hardware. By making advanced chemical modeling more accessible, these tools will democratize discovery and empower a broader community of scientists to tackle some of the most pressing challenges in health, energy, and sustainability.</p><h2>Remaining Challenges for AI and Quantum Computing</h2><p>By now, you’re probably wondering: When will this transformative future arrive? It’s true that quantum computers still struggle with <a href="https://spectrum.ieee.org/quantum-error-correction" target="_blank">error rates</a> and limited lifetimes of usable qubits. And they still need to scale: meaningful chemistry simulations beyond the reach of classical computation will require hundreds to thousands of high-quality qubits with error rates of around 10<span><sup>-15</sup></span>, or one error in a quadrillion operations. Achieving this level of reliability will require fault tolerance through redundant encoding of quantum information in logical qubits, each consisting of hundreds of physical qubits, thus requiring a total of about a million physical qubits. Current AI models for chemical-property predictions may not have to be fully redesigned. We expect that it will be sufficient to start with models pretrained on classical data and then fine-tune them with a few results from quantum computers.</p><p> Despite some open questions, the potential rewards in terms of scientific understanding and technological breakthroughs make our proposal a compelling direction for the field.
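As a rough sanity check on the qubit-overhead arithmetic above (round illustrative numbers, not a precise engineering estimate):

```python
# Fault tolerance encodes each logical qubit in many physical qubits.
logical_qubits = 2_000        # "hundreds to thousands" of logical qubits
physical_per_logical = 500    # "hundreds" of physical qubits per logical qubit
total_physical = logical_qubits * physical_per_logical
print(f"{total_physical:,}")  # 1,000,000 -- "about a million physical qubits"
```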
The quantum computing industry has begun to move beyond the early noisy prototypes, and high-fidelity quantum computers with low error rates could be possible <a href="https://www.darpa.mil/research/programs/quantum-benchmarking-initiative" target="_blank">within a decade</a>.</p><p>Realizing the full potential of quantum-enhanced AI for chemical discovery will require focused collaboration between chemists and materials scientists who understand the target problems, experts in quantum computing who are building the hardware, and AI researchers who are developing the algorithms. Done right, quantum-enhanced AI could start to tackle the world’s toughest challenges—from climate change to disease—years ahead of anyone’s expectations. <span class="ieee-end-mark"></span></p>]]></description><pubDate>Mon, 02 Mar 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/quantum-chemistry</guid><category>Quantum-computing</category><category>Quantum-chemistry</category><category>Drug-discovery</category><category>Batteries</category><category>Ai-models</category><category>Microsoft</category><dc:creator>Chi Chen</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/illustration-of-a-human-head-in-profile-with-a-spiral-upon-which-human-figures-are-walking-overlaid-on-an-image-of-an-atom.png?id=63744636&amp;width=980"></media:content></item><item><title>Letting Machines Decide What Matters</title><link>https://spectrum.ieee.org/ai-new-physics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/large-particle-detector-with-circular-structure-person-standing-below.png?id=65005476&width=1245&height=700&coordinates=0%2C104%2C0%2C105"/><br/><br/><p>In the time it takes you to read this sentence, the <a href="https://spectrum.ieee.org/tag/large-hadron-collider" target="_blank">Large Hadron Collider</a> (LHC) will have smashed billions of particles together. In all likelihood, it will have found exactly what it found yesterday: more evidence to support the <a href="https://home.cern/science/physics/standard-model" rel="noopener noreferrer" target="_blank">Standard Model</a> of particle physics.</p><p>For the engineers who built this 27-kilometer-long ring, this consistency is a triumph. But for theoretical physicists, it has been rather frustrating. As <a href="https://spectrum.ieee.org/u/matthew-hutson" rel="noopener noreferrer" target="_blank">Matthew Hutson</a> reports in “<a data-linked-post="2675068613" href="https://spectrum.ieee.org/particle-physics-ai" target="_blank">AI Hunts for the Next Big Thing in Physics</a>,” the field is currently gripped by a quiet crisis. In an email discussing his reporting, Hutson explains that the Standard Model, which describes the known elementary particles and forces, is not a complete picture. “So theorists have proposed new ideas, and experimentalists have built giant facilities to test them, but despite the gobs of data, there have been no big breakthroughs,” Hutson says. “There are key components of reality we’re completely missing.”</p><p>That’s why researchers are turning artificial intelligence loose on particle physics. They aren’t simply asking AI to comb through accelerator data to confirm existing theories, Hutson explains. They’re asking AI to point the way toward theories that they’ve never imagined. 
“Instead of looking to support theories that humans have generated,” he says, “unsupervised AI can highlight anything out of the ordinary, expanding our reach into unknown unknowns.” By asking AI to flag anomalies in the data, researchers hope to find their way to “<a href="https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Model" target="_blank">new physics</a>” that extends the Standard Model. </p><p>On the surface, this article might sound like another “AI for <em>X</em>” story. As <em>IEEE Spectrum</em>’s AI editor, I get a steady stream of pitches for such stories: AI for drug discovery, AI for farming, AI for wildlife tracking. Often what that really means is faster data processing or automation around the edges. Useful, sure, but incremental.</p><p>What struck me in Hutson’s reporting is that this effort feels different. Instead of analyzing experimental data after the fact, the AI essentially becomes part of the instrument, scanning for subtle patterns and deciding in real time what’s interesting. At the LHC, detectors record 40 million collisions per second. There’s simply no way to preserve all that data, so engineers have always had to build filters to decide which events get saved for analysis and which are discarded; nearly everything is thrown away. </p><p>Now those split-second decisions are increasingly handed to machine learning systems running on field-programmable gate arrays (FPGAs) connected to the detectors. The code must run on the chip’s limited logic and memory, and compressing a neural network into that hardware isn’t easy. Hutson describes one theorist pleading with an engineer, “Which of my algorithms fits on your bloody FPGA?”</p><p>This moment is part of a much older pattern. As Hutson writes in the article, new instruments have opened doors to the unexpected throughout the history of science.
Galileo’s telescope <a href="https://www.nasa.gov/general/415-years-ago-astronomer-galileo-discovers-jupiters-moons/" target="_blank">revealed moons circling Jupiter</a>. Early microscopes exposed entire worlds of “<a href="https://hekint.org/2018/10/23/van-leeuwenhoeks-discovery-of-animalcules/" target="_blank">animalcules</a>” swimming around. These better tools didn’t just answer existing questions; they made it possible to ask new ones.</p><p>If there’s a crisis in particle physics, in other words, it may not just be about missing particles. It’s about how to look beyond the limits of the human imagination. Hutson’s story suggests that AI might not solve the mysteries of the universe outright, but it could change how we search for answers.</p>]]></description><pubDate>Sun, 01 Mar 2026 11:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/ai-new-physics</guid><category>Large-hadron-collider</category><category>Lhc</category><category>Particle-physics</category><category>Fpga</category><category>Machine-learning</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/large-particle-detector-with-circular-structure-person-standing-below.png?id=65005476&amp;width=980"></media:content></item><item><title>A Shapeshifting Supercomputer May Be More Energy Efficient</title><link>https://spectrum.ieee.org/reconfigurable-supercomputer</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/sandia-national-laboratories-supercomputer-spectra.jpg?id=65068162&width=1245&height=700&coordinates=0%2C62%2C0%2C63"/><br/><br/><p>Late last year, Sandia National Laboratories started testing an unusual type of supercomputer. Unlike conventional <a data-linked-post="2658957928" href="https://spectrum.ieee.org/europe-s-exascale-supercomputer" target="_blank">supercomputers</a>, which consist of large interconnected clusters of CPUs and GPUs, this machine incorporates reconfigurable accelerators that optimize their operations for the particular computation that’s being run. This new architecture, which is similar to field-programmable gate arrays (<a data-linked-post="2650233096" href="https://spectrum.ieee.org/painless-fpga-programming" target="_blank">FPGAs</a>), is built by startup NextSilicon. A key benefit of the approach is that it doesn’t require a software rewrite: the hardware optimizes itself for the software, not vice versa. </p><p>Spectra, which incorporates 128 <a href="https://www.nextsilicon.com/" rel="noopener noreferrer" target="_blank">NextSilicon</a> Maverick-2 accelerators, is still in the investigative phase, says program leader and Sandia senior scientist <a href="https://www.sandia.gov/ccr/staff/james-h-laros/" rel="noopener noreferrer" target="_blank">James Laros</a>. NextSilicon, which has headquarters in Tel Aviv and Minneapolis, claims its accelerators generally use half as much power as Nvidia’s Blackwell while offering a quadruple speed advantage. The power and speed vary depending on the particular application.</p><h2>The Power of Flexibility</h2><p>NextSilicon CEO Elad Raz says typical architectures predict the next instruction then fetch and cache data. “What if you can remove all that overhead?” he wondered. “A lot of people are trying to build a new CPU or a better GPU. Other companies have a software solution,” says Raz. 
“I wanted to build something with software and hardware collaborating together.”</p><p>The company’s Maverick-2 first runs the application on a CPU and identifies which operations run most frequently. Then, it reconfigures the chip to schedule its work in a way that optimizes data flow. Instead of back-and-forth fetching of data, he says, “you can generate a pipeline.” </p><p>And a key advantage of the company’s design is that users do not have to rewrite their software in order to run it more efficiently on the system. The hardware adapts to the software, not vice versa.</p><p>Most of the applications Sandia runs are constrained by memory bandwidth, says Laros. “What if we can go faster because we don’t have to go back to the main memory?” That’s the potential of the Spectra architecture.</p><p><span>Raz says Maverick uses half as much power as Nvidia’s Blackwell and can perform HPCG, a supercomputing benchmark, twice as fast; it performs PageRank, another benchmark, 10 times as fast. Sandia scientists are currently assessing Spectra’s performance when running molecular dynamics simulations, which predict the movements of atoms and are widely used in physics and materials science, and other core codes used by the U.S. Department of Energy. “Where it will provide a benefit is if we can get better performance for types of apps that don’t run well on GPUs, or if we can get the same performance with better energy efficiency,” Laros says.</span></p><h2>Supporting the Mission Through Experimentation</h2><p>Sandia performs computer simulations to maintain the United States’ nuclear arsenal. “We’ve replaced testing with simulation and computing,” Laros says. Because of the high stakes of this mission, the lab has to “make sure we’re not putting all our eggs in one basket,” he says. If a company whose technology the U.S. government relies on for nuclear stockpile stewardship should go out of business, the government needs to have alternatives. 
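The PageRank benchmark mentioned earlier is, at heart, a sparse iterative computation. A toy power-iteration version (a minimal sketch, not the HPC benchmark implementation) looks like this:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank over an edge list. A minimal sketch of
    the algorithm behind the benchmark; real HPC runs use huge sparse graphs."""
    nodes = sorted({n for edge in links for n in edge})
    n = len(nodes)
    out = {u: [v for a, v in links if a == u] for u in nodes}
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            targets = out[u] or nodes          # dangling node: spread evenly
            share = damping * rank[u] / len(targets)
            for v in targets:
                new[v] += share
        rank = new
    return rank

ranks = pagerank([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")])
# ranks sum to 1; "c" gets the most weight in this toy graph
```

The access pattern is dominated by irregular reads over the edge list, which is why the text singles PageRank out as a workload that rewards memory-bandwidth-friendly architectures.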
“We maintain a pipeline of overlapping technologies,” Laros says.</p><p>Spectra is part of Sandia’s <a href="https://vanguard.sandia.gov/" target="_blank">Vanguard</a> program, which allows the government to partner with startups to test out and help develop early-stage high-performance computing technologies. “The goal is to test them for our advanced simulation and computing mission codes,” Laros says.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Close-up of several wires plugged into the back of a server rack." class="rm-shortcode" data-rm-shortcode-id="b59df3bc1edf177d617ec79d9a0a2e49" data-rm-shortcode-name="rebelmouse-image" id="99b63" loading="lazy" src="https://spectrum.ieee.org/media-library/close-up-several-wires-plugged-into-the-back-of-a-server-rack.jpg?id=65068423&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Penguin Solutions integrated the thermal-management and power-distribution systems for Spectra, and led the installation at Sandia National Laboratories. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Craig Fritz/Sandia National Laboratories</small></p><p>Sandia runs a large portion of its code on CPUs, says Laros. They’ve adopted GPU-based systems built by Nvidia as well. These systems offer speed advantages, but they require lab staff to port their code. “It took us hundreds of hours,” says Laros. And there are important scientific simulations that don’t run well on GPUs, including Monte Carlo methods, which can be used to assess complex risks.</p><p>Laros says it’s unusual right now to find a computing startup focusing on high-performance scientific computing—“It’s all about AI” these days, he says. NextSilicon is developing hardware that the company hopes will have advantages for both, thanks to its promised power savings.
Power availability is a major constraint on large-scale AI data centers today. Raz hopes NextSilicon’s accelerators will offer an advantage by delivering more performance for a given amount of electricity.</p><p>The Vanguard program allows the government to test the potential of risky technologies. “You’re going to fail once in a while,” says Laros. “Our goal is to do very advanced technology discovery. We prove it out. Other labs and other commercial industries will follow.”</p>]]></description><pubDate>Fri, 27 Feb 2026 14:23:28 +0000</pubDate><guid>https://spectrum.ieee.org/reconfigurable-supercomputer</guid><category>Computing</category><category>Sandia-national-laboratories</category><category>Energy-efficiency</category><category>Supercomputing</category><dc:creator>Katherine Bourzac</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/sandia-national-laboratories-supercomputer-spectra.jpg?id=65068162&amp;width=980"></media:content></item><item><title>AI Is Acing Math Exams Faster Than Scientists Write Them</title><link>https://spectrum.ieee.org/ai-math-benchmarks</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/line-graph-demonstrates-how-google-deepminds-aletheia-ai-scores-at-least-5-percent-higher-on-ph-d-math-exercises-than-the-lat.jpg?id=65007034&width=1245&height=700&coordinates=0%2C62%2C0%2C63"/><br/><br/><p><span>Mathematics is often regarded as the ideal domain for measuring AI progress effectively. Math’s step-by-step logic is easy to track, and its definitive, automatically verifiable answers remove any human or subjective factors. But AI systems are improving at such a pace that math </span><a href="https://spectrum.ieee.org/melanie-mitchell" target="_self">benchmarks are struggling to keep up</a><span>.</span></p><p>Way back in November 2024, nonprofit research organization Epoch AI quietly released <a href="https://doi.org/10.48550/arXiv.2411.04872" target="_blank">FrontierMath</a>. A standardized, rigorous benchmark, FrontierMath was designed to measure the mathematical reasoning capabilities of the latest AI tools.</p><p>“It’s a bunch of really hard math problems,” explains <a href="https://epoch.ai/team" target="_blank">Greg Burnham</a>, Epoch AI senior researcher. “Originally, it was 300 problems that we now call tiers 1–3, but having seen AI capabilities really speed up, there was a feeling that we had to run to stay ahead, so now there’s a special challenge set of extra carefully constructed problems that we call tier 4.”</p><p>To a rough approximation, tiers 1–4 go from advanced undergraduate through to early postdoc-level mathematics. When introduced, state-of-the-art AI models were unable to solve more than 2 percent of the problems FrontierMath contained. 
<a href="https://epoch.ai/frontiermath/tiers-1-4" target="_blank">Fast forward to today</a>: The best publicly available AI models, such as GPT-5.2 and Claude Opus 4.6, are solving over 40 percent of FrontierMath’s 300 tier 1–3 problems, and over 30 percent of the 50 tier 4 problems.</p><h2>AI takes on Ph.D.-level mathematics</h2><p>And this dizzying pace of advancement is showing no signs of abating. For example, just recently <a href="https://deepmind.google/blog/accelerating-mathematical-and-scientific-discovery-with-gemini-deep-think/" target="_blank">Google DeepMind announced</a> that Aletheia, an experimental AI system derived from Gemini Deep Think, <a href="https://doi.org/10.48550/arXiv.2601.23245" target="_blank">achieved publishable Ph.D.-level research results</a>. Though mathematically obscure—the work calculated certain structure constants in arithmetic geometry called eigenweights—the result is significant in terms of AI development.</p><p>“They’re claiming it was essentially autonomous, meaning a human wasn’t guiding the work, and it’s publishable,” Burnham says. “It’s definitely at the lower end of the spectrum of work that would get a mathematician excited, but it’s new—it’s something we truly haven’t really seen before.”</p><p>To place this achievement in context, every FrontierMath problem has a known answer that a human has derived. Though a human could probably have achieved Aletheia’s result “if they sat down and steeled themselves for a week,” says Burnham, no human had ever done so.</p><p>Aletheia’s results and other recent achievements by AI mathematicians point to the need for new, tougher benchmarks to understand AI capabilities—and fast, because existing ones will soon become irrelevant. “There are easier math benchmarks that are already obsolete, several generations of them,” says Burnham. “FrontierMath will probably saturate [Ed. 
note: This means that state-of-the-art AI models score 100 percent] within the next two years—could be faster.”</p><h2>The First Proof challenge</h2><p>To begin to address this problem, on 6 February, a group of 11 highly distinguished mathematicians <a href="https://doi.org/10.48550/arXiv.2602.05192" rel="noopener noreferrer" target="_blank">proposed the First Proof challenge</a>, a set of 10 extremely difficult math questions that arose naturally in the authors’ research, whose proofs run roughly five pages or less, and which had not been shared with anyone. <a href="https://1stproof.org/" rel="noopener noreferrer" target="_blank">The First Proof challenge</a> was a preliminary effort to assess the capabilities of AI systems in solving research-level math questions on their own.</p><p>The challenge generated serious buzz in the math community: Professional and amateur mathematicians, as well as teams including OpenAI, stepped up to it. But by the time the authors <a href="https://codeberg.org/tgkolda/1stproof/src/branch/main/2026-02-batch/FirstProofSolutionsComments.pdf" rel="noopener noreferrer" target="_blank">posted the proofs</a> on 14 February, no one had submitted correct solutions to all 10 problems.</p><p>In fact, far from it. The authors themselves solved only two of the 10 problems using Gemini 3.0 Deep Think and ChatGPT 5.2 Pro. And most outside submissions fared little better, apart from those of OpenAI and a small Aletheia team at Google DeepMind. With “limited human supervision,” OpenAI’s most advanced internal AI system <a href="https://openai.com/index/first-proof-submissions/" rel="noopener noreferrer" target="_blank">solved five of the 10 problems</a>, with Aletheia achieving similar outcomes—results met with a spectrum of emotions by different members of the mathematics community, from awe to disappointment. 
The team behind First Proof plans an even tougher <a href="https://1stproof.org/" rel="noopener noreferrer" target="_blank">second round on 14 March</a>.</p><h2>A new frontier for AI</h2><p>“I think First Proof is terrific: It’s as close as you could realistically get to putting an AI system in the shoes of a mathematician,” says Burnham. Though he admires how First Proof tests AI’s mathematical utility for a wide range of mathematics and mathematicians, Epoch AI has its own new approach to testing—<a href="https://epoch.ai/frontiermath/open-problems" rel="noopener noreferrer" target="_blank">FrontierMath: Open Problems</a>. Uniquely, the pilot benchmark consists of 16 open problems (with more to follow) from research mathematics that professional mathematicians have tried and failed to solve. Since Open Problems’ <a href="https://epochai.substack.com/p/introducing-frontiermath-open-problems" rel="noopener noreferrer" target="_blank">release on 27 January</a>, none have been solved by an AI.</p><p>“With Open Problems, we’ve tried to make it more challenging,” says Burnham. “The baseline on its own would be publishable, at least in a specialty journal.” What’s more, each question is designed so that it can be automatically graded. “This is a bit counterintuitive,” Burnham adds. “No one knows the answers, but we have a computer program that will be able to judge whether the answer is right or not.”</p><p>Burnham sees First Proof and Open Problems as being complementary. “I would say understanding AI capabilities is a more-the-merrier situation,” he adds. “AI has gotten to the point where it’s—in some ways—better than most Ph.D. 
students, so we need to pose problems where the answer would be at least moderately interesting to some human mathematicians, not because AI was doing it but because it’s mathematics that human mathematicians care about.”</p>]]></description><pubDate>Wed, 25 Feb 2026 16:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/ai-math-benchmarks</guid><category>Ai-benchmarks</category><category>Mathematics</category><category>Large-language-models</category><category>Artificial-intelligence</category><dc:creator>Benjamin Skuse</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/line-graph-demonstrates-how-google-deepminds-aletheia-ai-scores-at-least-5-percent-higher-on-ph-d-math-exercises-than-the-lat.jpg?id=65007034&amp;width=980"></media:content></item></channel></rss>