<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>IEEE Spectrum</title><link>https://spectrum.ieee.org/</link><description>IEEE Spectrum</description><atom:link href="https://spectrum.ieee.org/feeds/feed.rss" rel="self"></atom:link><language>en-us</language><lastBuildDate>Fri, 13 Mar 2026 16:14:29 -0000</lastBuildDate><image><url>https://spectrum.ieee.org/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNjg4NDUyMC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTgyNjE0MzQzOX0.N7fHdky-KEYicEarB5Y-YGrry7baoW61oxUszI23GV4/image.png?width=210</url><link>https://spectrum.ieee.org/</link><title>IEEE Spectrum</title></image><item><title>Video Friday: These Robots Were Born to Run</title><link>https://spectrum.ieee.org/legged-modular-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/rolling-cannon-distant-cityscape-trees-and-water.gif?id=65282014&width=1200&height=400&coordinates=0%2C147%2C0%2C147"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="8vksx1zsg7q"><em>All legged robots deployed “in the wild” to date were given a body plan that was predefined by human designers and could not be redefined in situ. The manual and permanent nature of this process has resulted in very few species of agile terrestrial robots beyond familiar four-limbed forms. 
Here, we introduce highly athletic modular building blocks and show how they enable the automatic design and rapid assembly of novel agile robots that can “hit the ground running” in unstructured outdoor environments.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="508a07a4b7d915c6cfd07081bdc63e86" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8VKSx1zSg7Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.northwestern.edu/news-events/index.html" target="_blank">Northwestern University Center for Robotics and Biosystems</a> ] [ <a href="https://www.pnas.org/doi/10.1073/pnas.2519129123">Paper</a> ] via [ <a href="https://gizmodo.com/these-self-configuring-modular-robots-may-one-day-rule-the-world-2000731381">Gizmodo</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="l2q3kpl4mjq">If you were going to develop the ideal urban delivery robot more or less from scratch, it would be this.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ba83a841b32a7807384eeb10bc2c6b03" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/l2q3kPl4mJQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.rivr.ai/rivr-two">RIVR</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="cadtjepdbfc">Don’t get me wrong, there are some clever things going on here, but I’m still having a lot of trouble seeing where the unique, sustainable value is for a <a data-linked-post="2666662286" href="https://spectrum.ieee.org/humanoid-robots" 
target="_blank">humanoid robot</a> performing these sorts of tasks.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b6313fcff2b0315bed664e00897cf53a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/CAdTjePDBfc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.figure.ai/news/helix-02-living-room-tidy">Figure</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="xyhob9__qk0">One of those things that you don’t really think about as a human, but is actually pretty important.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="53ef3877dae03acc90a17fd9dcba1e6b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/xYhOb9__Qk0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://arxiv.org/abs/2602.05760">Paper</a> ] via [ <a href="https://rsl.ethz.ch/" target="_blank">ETH Zurich</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="wi6u8bvofvc"><em>We propose TRIP-Bag (Teleoperation, Recording, Intelligence in a Portable Bag), a portable, puppeteer-style teleoperation system fully contained within a commercial suitcase, as a practical solution for collecting high-fidelity manipulation data across varied settings.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4d39bbac7f62958700b13bfd53bc8bfd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Wi6U8bvoFvc?rel=0" 
style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://uiuckimlab.github.io/TRIP-Bag-pages/">KIMLAB</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="nuouwhuzpwq"><em>We propose an open-vocabulary semantic exploration system that enables robots to maintain consistent maps and efficiently locate (unseen) objects in semi-static real-world environments using LLM-guided reasoning.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1724903fcd3e1d57df45824508205a87" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/nUouwHUZPWQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.tum.de/en/news-and-events/all-news/press-releases/details/search-robot-thinks-for-itself">TUM</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="vrxamllkjko">That’s it folks, we have no need for real pandas anymore—if we ever did in the first place. 
Be honest, what has a <a data-linked-post="2675288239" href="https://spectrum.ieee.org/robot-martial-arts" target="_blank">panda</a> done for you lately?</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="af51d3f513d68d80617dd0b62738a1bb" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/VRxAMLlkjko?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.magiclab.top/en/">MagicLab</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="uhd6o6dem_o"><em>RoboGuard is a general-purpose guardrail for ensuring the safety of LLM-enabled robots. RoboGuard is configured offline with high-level safety rules and a robot description, reasons about how these safety rules are best applied in the robot’s context, and then synthesizes a plan that maximally follows user preferences while ensuring safety.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bfc2abc33b815af7c16c37617a485a87" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/uhd6O6DEM_o?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robo-guard.github.io/">RoboGuard</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="5ekki51q1sk"><em>In this demonstration, a small team responds to a (simulated) radiation contamination leak at a real nuclear reactor facility. The team deploys their reconfigurable robot to accompany them through the facility. As the station is suddenly plunged into darkness, the robot’s camera is hot-swapped to thermal so that it can continue on. 
Upon reaching the approximate location of the contamination, the team installs a Compton gamma-ray camera and pan-tilt illuminating device. The robot autonomously steps forward, locates the radiation source, and points it out with the illuminator.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7928c582f10167b05ca04c694c729b67" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/5ekKI51q1Sk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://ieeexplore.ieee.org/document/11078050">Paper</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="zcnmhsg5bpw"><em>On March 6th, 2025, the Robomechanics Lab at CMU was flooded with 4 feet of black water (i.e. mixed with sewage). We lost most of the robots in the lab, and as a tribute my students put together this “In Memoriam” video. 
It includes some previously unreleased robots and video clips.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e1739161841c3f7f5fb2ae563d8b15bc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/zcnMHsg5Bpw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.cmu.edu/me/robomechanicslab/">Carnegie Mellon University Robomechanics Lab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="i3goczr4ya0">There haven’t been a lot of successful <a data-linked-post="2650267089" href="https://spectrum.ieee.org/your-kid-wants-a-thymio-ii-education-robot" target="_blank">education robots</a>, but here’s one of them.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="971e1ccf67faed9fa1a9a5292d6b5b49" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/i3goCzr4YA0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://sphero.com/collections/all/products/rvr?variant=42004659142701">Sphero</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="35i9m-jc0oc">The opening keynote from the 2025 Silicon Valley Humanoids Summit: “Insights Into Disney’s Robotic Character Platform,” by Moritz Baecher, Director, Zurich Lab, Disney Research.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a7fc3671608ce481554dac55c022d319" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/35i9M-jc0Oc?rel=0" 
style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://humanoidssummit.com/">Humanoids Summit</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 13 Mar 2026 16:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/legged-modular-robot</guid><category>Robotics</category><category>Humanoid-robots</category><category>Video-friday</category><category>Modular-robots</category><category>Robot-videos</category><category>Quadruped-robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/gif" url="https://spectrum.ieee.org/media-library/rolling-cannon-distant-cityscape-trees-and-water.gif?id=65282014&amp;width=980"></media:content></item><item><title>Raquel Urtasun on Level-4 Autonomous Trucks</title><link>https://spectrum.ieee.org/level-4-autonomous-trucks</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-adult-white-woman-with-short-dark-hair-and-crossed-arms-leaning-her-back-against-the-door-of-a-semi-truck.jpg?id=65278377&width=1200&height=400&coordinates=0%2C729%2C0%2C730"/><br/><br/><p><a href="https://en.wikipedia.org/wiki/Raquel_Urtasun" rel="noopener noreferrer" target="_blank"><span>Raquel Urtasun</span></a> has spent 16 years in the <a href="https://spectrum.ieee.org/self-driving-cars-2662494269" target="_self"><span>self-driving space</span></a>, long enough to navigate every glorious metaphorical hill and plunging valley. She took the trip from early “pipe dream” dismissals, to the “we’re <em>this</em> close” certainty, and back again.</p><p>The industry is now riding a new wave of optimism and investment, including at <a href="https://waabi.ai/" target="_blank"><span>Waabi Innovation Inc.</span></a>, the autonomous trucking company that Urtasun founded in 2021. The Spanish-Canadian professor at the <a href="https://www.utoronto.ca/" target="_blank"><span>University of Toronto</span></a> and former chief scientist of Uber’s Advanced Technologies Group has helped make Waabi a key player. Since fall 2023, the Toronto-based startup has been running geofenced cargo routes from Dallas to Houston in a fleet of retrofitted Peterbilt semis, navigating even residential streets in loaded, 36,000-kilogram (80,000-pound) behemoths with a human “safety observer” on board.</p><p>In October, the company reached a milestone by integrating its <a href="https://waabi.ai/insights/introducing-the-waabi-driver" target="_blank"><span>“Waabi Driver”</span></a> physical-AI system in Volvo’s new VNL Autonomous truck, which the Swedish automaker is building in Virginia. 
That self-driving solution uses Nvidia’s <a href="https://www.nvidia.com/en-us/solutions/autonomous-vehicles/in-vehicle-computing/" target="_blank"><span>Drive AGX Thor</span></a>, an AI-based platform for autonomous and software-defined vehicles.</p><p>In January, the Toronto-based startup raised $750 million in its latest funding round to accelerate commercial development in autonomous trucking and expand its system into the fiercely competitive robotaxi space. Backers include <a href="https://www.khoslaventures.com/" target="_blank"><span>Khosla Ventures</span></a>, <a href="https://www.nvidia.com/en-us/" target="_blank"><span>Nvidia</span></a>, and <a href="https://www.volvo.com/en/" target="_blank"><span>Volvo</span></a>.</p><p>Urtasun says the <a href="https://waabi.ai/insights/introducing-the-waabi-driver" target="_blank"><span>Waabi Driver</span></a> can scale across a full range of vehicles, geographies, and environments—although snowstorms can still create a no-go zone for now. It’s powered by what Urtasun calls the industry’s most advanced neural simulator. The verifiable, end-to-end AI model will be a “shared brain” that partners can transplant into cars, trucks, and pretty much anything on wheels. The idea is to grab a chunk of a global autonomous trucking business that McKinsey estimates could be worth more than <a href="https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/will-autonomy-usher-in-the-future-of-truck-freight-transportation" target="_blank"><span>$600 billion a year</span></a> by 2035, with autonomous haulers responsible for 15 percent of total U.S. 
trucking miles as early as 2030.</p><p>Backed by an additional <a href="https://www.forbes.com/sites/alanohnsman/2026/01/28/robot-trucker-waabi-wades-into-robotaxi-battle-with-billion-dollar-raise/" target="_blank"><span>$250 million from Uber</span></a>, Waabi plans to deploy at least 25,000 autonomous taxis through Uber’s ride-hailing service, whose world-dominating reach encompasses 70 countries, about 15,000 cities, and more than 200 million monthly users.</p><p>Urtasun spoke with <em><span>IEEE Spectrum</span></em> about how Waabi is counting on sensors and simulation to prove real-world safety, and why the move to autonomy is a moral imperative that outweighs the disruption for human drivers—whether they’re driving trucks or family sedans. Our conversation was edited for length and clarity.</p><h2>The Shift to Next-Gen Autonomous Vehicles</h2><p><strong><em><span>IEEE Spectrum</span></em></strong><strong>: Until quite recently, autonomous tech seemed to have hit a wall, at least in the public’s mind. Now investors are flooding the zone again, and companies are all-in. What happened?</strong></p><p><strong>Raquel Urtasun:</strong> There were a lot of empty promises, or [people] not realizing the complexity of the problem. There was a realization that actually, this problem is harder than people anticipated. It’s also because of the type of technology that was developed at the time, what we call “AV 1.0”. These are hand-engineered systems that need to be brute-forced by humans. You need lots of capital and a massive amount of miles on the road just to get to the first deployment.</p><p>What you see with the next generation—<a href="https://medium.com/data-science-collective/the-local-optimum-of-autonomy-de1969b77769" target="_blank">AV 2.0</a> and systems that can reason—is that you finally have a solution that scales. When we started the company, this was a very contrarian view. 
But today, the <a href="https://spectrum.ieee.org/topic/artificial-intelligence/" target="_blank">breakthroughs in AI</a> have made it clear that this is the next big revolution. It’s not just about more compute; it’s about building a brain that can generalize. That is the “aha moment” the industry is having now.</p><p><strong>Even for someone who believes in the tech, seeing a driverless semi-trailer in your rear-view mirror might be unsettling. Now you’ve integrated your tech into the aerodynamic, diesel-powered Volvo VNL Autonomous truck. How do you convince regulators and the public that these trucks belong on the street?</strong></p><p><strong>Urtasun:</strong> Safety, when you think about carrying 80,000 pounds on this massive rig, is definitely top of mind. We believe the only way to do this safely is with a redundant platform that is fully developed and validated by the OEM, not with a retrofit. The OEM builds a special type of truck that has all the redundant steering, power, and braking, so that no matter what happens, there is always a way we can interface and activate that truck in a safe manner. Then we are responsible for the sensors, the compute, and obviously the brain that drives those trucks.</p><h2>AI’s Impact on Trucking Jobs</h2><p><strong>One of the biggest points of contention is the displacement of human drivers. As AI disrupts a range of workplaces, how do you respond to people who say this will eliminate good-paying, blue-collar jobs?</strong></p><p><strong>Urtasun:</strong> The way we see this is that everybody who’s a truck driver today, and wants to retire as a truck driver, will be able to do so. This is physical AI; this is not like the digital world where suddenly you can switch immediately to this technology. That adoption and scaling is going to take time. There will also be many jobs created with this technology: remote operations, terminal operations, and other things. 
You have time to change the form of labor from being on the road for weeks at a time—and it’s a really difficult and dehumanizing job, let’s be honest—to something you can do locally. There was an interesting [U.S.] Department of Transportation study that showed that, because of this gradual adoption, more jobs will be created than removed.</p><p><strong>You’ve spoken about a personal motivation behind this. Why do you believe the advantages of autonomy outweigh any growing pains, including the potential for unexpected accidents or even deaths?</strong></p><p><strong>Urtasun:</strong> There are 2 million deaths on the road globally per year, and nobody’s questioning that. That’s the status quo. If you think the machines have to be perfect to deploy, you are actually sacrificing many humans along the way that you could have saved. Human error is a factor in between <a href="https://www.cbmclaw.com/what-percentage-of-car-accidents-are-caused-by-human-error/" target="_blank">90 percent and 96 percent</a> of accidents. Those could be preventable accidents. Some accidents will always be unavoidable; a tire could blow for a machine the same as it could for a human. But the important comparison is how much safer we are. This technology is the answer to many, many things.</p><p><strong>Most of the industry is focused on “hub-to-hub” highway driving. But you’ve argued that Waabi’s AI can handle the complexity of local streets.</strong></p><p><strong>Urtasun:</strong> The rest of the industry has gone with this business model where you need hubs next to the highway. This adds a lot of friction and cost. Thanks to our verifiable end-to-end AI system, we can drive on surface [local] streets. We can do unprotected lefts, traffic lights, and tight turns. These core capabilities enable us to drive all the way to the end customer. 
We are already hauling commercial loads for customers like Samsung through our Uber Freight partnership.</p><p><strong>You’ve mentioned that Waabi doesn’t like to talk about “number of miles” driven as a metric. For an engineering audience, that sounds counterintuitive. How does your “simulation-first” approach replace the need for real-world road time?</strong></p><p><strong>Urtasun:</strong> In the industry, miles have been used as a proxy for advancement. How many miles does <a href="https://www.tesla.com/about" target="_blank">Tesla</a> need to drive to see any of these situations? But we are a simulation-first company. Waabi World can simulate all the sensors, the behaviors of humans, everything. It is the only simulator where you can mathematically prove that testing and driving in simulation is the same as driving in the real world. You can expose the system to billions of simulations in the cloud. This is what allows us to be so capital efficient and fast.</p><h2>Verifiable AI vs. Black Box Systems</h2><p><strong>What is the difference between your “interpretable” AI and the “black box” systems we see elsewhere?</strong></p><p><strong>Urtasun:</strong> We’ve seen an evolution in passenger cars from level-2+ systems to end-to-end, black box architectures. But those are not verifiable. You cannot validate and verify those systems, which is a massive problem when you think about regulators and OEMs trusting that technology.</p><p>What Waabi has built is end-to-end, but fully verifiable. The system is forced to interpret what it is perceiving and use those interpretations for reasoning, so that it can understand the consequences of every action. 
It is much more akin to how our brain actually works: your “Type 2” thinking, where you start thinking about cause and effect and consequences, and then you typically make a much better choice in your maneuver.</p><p><strong>Tesla is famously, and controversially, relying on camera data almost exclusively to run and improve its self-driving systems. You’re not a fan of that approach?</strong></p><p><strong>Urtasun:</strong> We use multiple sensors: lidar, camera, and radar. That’s very important because the failure modes of those sensors are very different and they’re very complementary. We don’t compromise safety to reduce the bill-of-materials cost today.</p><p>Those (passenger car) level-2+ systems are not architected for <a href="https://www.jdpower.com/cars/shopping-guides/levels-of-autonomous-driving-explained" target="_blank">level 4</a>, where there’s no human on board. People don’t necessarily realize there is a huge difference in terms of the bar when there is no human to rely on. It’s not, “Well, if I don’t have a lot of system interventions, I’m almost there.” That’s not a metric. We are native level 4. We decide which areas the system can drive in, and in what conditions. We are building technology that can drive different form factors—trucks or robotaxis—with the same brain.</p><p><em><strong>Editor’s note: </strong>This article was updated on 13 March to correct an error in the original post. 
Contrary to what was stated in the original post, the trucks being driven from Dallas to Houston do have a human observer on board.</em></p>]]></description><pubDate>Fri, 13 Mar 2026 13:01:02 +0000</pubDate><guid>https://spectrum.ieee.org/level-4-autonomous-trucks</guid><category>Level-4-autonomy</category><category>Autonomous-trucks</category><category>Self-driving</category><category>Artificial-intelligence</category><dc:creator>Lawrence Ulrich</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-adult-white-woman-with-short-dark-hair-and-crossed-arms-leaning-her-back-against-the-door-of-a-semi-truck.jpg?id=65278377&amp;width=980"></media:content></item><item><title>Investing in Your Professional Community Yields Big Returns</title><link>https://spectrum.ieee.org/professional-community-investment</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/older-male-professor-teaching-a-group-of-post-graduate-college-students-about-robotics-hands-on.jpg?id=65181840&width=1200&height=400&coordinates=0%2C729%2C0%2C730"/><br/><br/><p>Engineering is so much more than solving problems or writing efficient code. It is about creating solutions that affect billions of lives and contributing to a profession built on innovation, responsibility, and collaboration. Although technical skills remain critical, what truly will accelerate the growth of the next generation of engineers is community and professional involvement.</p><h2>Learning from communities</h2><p>University programs provide a strong foundation in theory and practice, but they cannot capture the complexity of real-world engineering. As an IEEE senior member, I believe professional communities such as IEEE can help bridge the gap by offering:</p><ul><li>Practical experience through <a href="https://ieee-ai-dev-hack-2025.devpost.com/" rel="noopener noreferrer" target="_blank">hackathons</a>, open-source projects, and <a href="https://ieeexplore.ieee.org/document/6461145" rel="noopener noreferrer" target="_blank">collaborative research</a>.</li><li>Exposure to <a href="https://spectrum.ieee.org/epics-in-ieee-student-projects" target="_self">diverse perspectives</a>, with young engineers learning from peers across industries and cultures.</li><li><a href="https://spectrum.ieee.org/ieee-collabratec-mentoring-program" target="_self">Mentorship opportunities</a> that accelerate career growth and instill professional values early.</li></ul><p>I have served as a mentor and judge for a variety of hackathons across different age groups, including high school competitions <a href="https://unitedhacksv5.devpost.com/" rel="noopener noreferrer" target="_blank">United Hacks</a> and <a href="https://nextstep2025.devpost.com/" rel="noopener noreferrer" target="_blank">NextStep Hacks</a>, as well as graduate-level events such as <a 
href="https://hhuh.io/" rel="noopener noreferrer" target="_blank">HackHarvard</a>.</p><p>These experiences demonstrate how transformative community-driven opportunities can be for young engineers. They provide exposure to teamwork, innovation, and the realities of solving problems at scale.</p><h2>The power of mentorship</h2><p>Engineers don’t develop skills in isolation. <a href="https://spectrum.ieee.org/advice-leading-mentoring-greater-innovation" target="_self">Mentorship</a>, whether formal or informal, plays a pivotal role in shaping careers. Senior professionals who invest in guiding students and early-career engineers pass on more than technical knowledge. They share decision-making approaches, ethical considerations, and strategies for navigating careers, thereby expanding the engineering field.</p><p>As a keynote speaker at conferences, I have seen how sharing real-world experiences can ignite students’ curiosity and confidence. What they often value most is not a lecture on technology but candid insights into how to be resilient, grow their career, and learn about different engineering paths.</p><h2>Building ethical awareness</h2><p>With the rise of artificial intelligence, biotechnology, and other high-impact innovations, engineers’ <a href="https://spectrum.ieee.org/two-new-ai-ethics-certifications" target="_self">ethical responsibilities</a> are more important than ever. 
Professional organizations such as IEEE and <a href="https://www.acm.org/" rel="noopener noreferrer" target="_blank">ACM</a> emphasize <a href="https://www.ieee.org/about/corporate/governance/p7-8" rel="noopener noreferrer" target="_blank">codes of ethics</a> and <a href="https://standards.ieee.org/" rel="noopener noreferrer" target="_blank">standards</a> to help ensure that technology is developed responsibly.</p><p>Through my work as a peer reviewer and committee member for IEEE and ACM conferences, including those at the university level, I have seen how the organizations promote rigor and accountability.</p><p>When students engage with such communities early, they can not only expand their technical knowledge but also build an understanding of responsible innovation.</p><h2>Networking as a catalyst for innovation</h2><p>Engineering breakthroughs often emerge at the intersections of different fields. Professional communities create the space for such interactions. A student working on computer vision, for example, might discover health care applications by collaborating with biomedical engineers.</p><p>While reviewing papers for conferences, I have seen how interdisciplinary ideas spark promising innovations.</p><p>I bring the same perspective to my role as an <a href="https://ieee-collabratec.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Collabratec</a> mentor, connecting with innovators across different disciplines and industries.</p><p class="pull-quote">“When we invest in the community, we invest in the future of engineering.”</p><p>By collaborating on projects and expanding your reach, you can find the mentors or partners you need to inspire your next breakthrough.</p><p>Participating in forums allows students and professionals alike to broaden their horizons and explore solutions that go beyond traditional boundaries.</p><h2>Giving back shapes leadership</h2><p>Community involvement is not only about what you gain. 
It is also about what you give. Engineers who <a href="https://spectrum.ieee.org/ieee-stem-summit-2025" target="_self">volunteer for educational programs</a>, <a href="https://spectrum.ieee.org/ieee-tryengineering-20-years" target="_self">STEM initiatives</a>, and <a href="https://spectrum.ieee.org/ieee-leadership-nominations-2027" target="_self">professional committees</a> can develop leadership skills that extend beyond technical expertise. They can learn to inspire, organize, and guide others.</p><p>Judging hackathons and mentoring student teams reminds me that leadership often begins with service. When experienced professionals actively invest in the growth of others, they help create a culture wherein learning and leadership are passed forward.</p><h2>Preparing for a lifelong journey</h2><p>Learning how to be an engineer doesn’t end when you earn your degree. It is a lifelong journey of learning, adapting, and contributing. By engaging with communities and professional networks early, students and graduates can develop habits that serve them throughout their career. They can stay current with emerging trends, build trusted professional relationships, and gain resilience through shared challenges.</p><p>Community involvement can transform engineers from problem-solvers into change agents.</p><h2>Investing in the community</h2><p>The future of engineering depends not only on technological advancement but also on the collective strength of its communities. By fostering mentorship, encouraging collaboration, and embedding ethical responsibility, professional and community involvement can ensure that the next generation of engineers is prepared to meet tomorrow’s challenges with competence and character.</p><p>My journey as a mentor, judge, keynote speaker, and peer reviewer has reinforced a clear truth: When we invest in the community, we invest in the future of engineering. 
The students and young professionals we support today will be the ones building the world we live in tomorrow.</p>]]></description><pubDate>Thu, 12 Mar 2026 18:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/professional-community-investment</guid><category>Ieee-member-news</category><category>Students</category><category>Careers</category><category>Networking</category><category>Mentoring</category><category>Career-advice</category><category>Type-ti</category><dc:creator>Lokesh Lagudu</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/older-male-professor-teaching-a-group-of-post-graduate-college-students-about-robotics-hands-on.jpg?id=65181840&amp;width=980"></media:content></item><item><title>40 Years of Wireless Evolution Leads to a Smart, Sensing Network</title><link>https://spectrum.ieee.org/telecom-history-1g-to-6g</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/mobile-evolution-from-1g-brick-phone-to-6g-robotic-arm-across-generations.gif?id=65257401&width=1200&height=400&coordinates=0%2C17%2C0%2C18"/><br/><br/><p>Every generation of mobile networks, from 1G to 5G, has rewritten the rules of how the world lives and works. The coming <a href="https://spectrum.ieee.org/6g-bandwidth" target="_self">6G revolution</a>, by decade’s end, will represent yet another new direction, toward a universal data fabric where millions of agents collaborate in real time across the digital and physical worlds.</p><p>The story of wireless connectivity is often told in speeds and standards—megabits per second, latency, and spectrum bands. But these generational shifts in device specs obscure a deeper pattern. Each generation, from 1G to <a href="https://spectrum.ieee.org/everything-you-need-to-know-about-5g" target="_self">5G</a>, rewrote the relationships between three elements: the <strong>D</strong>evices we carry, the <strong>N</strong>etworks that connect them, and the <strong>A</strong>pplications that run on them. We call this connectivity’s DNA. With 6G, that DNA of interconnection is about to change fundamentally.</p><p>As with the “7 Phases of the Internet”—an article we <a href="https://spectrum.ieee.org/history-of-internet-7-phases" target="_self">published with <em>IEEE Spectrum</em> last October</a>—mobile networks’ six generations follow a similar arc toward system-wide intelligence. 
That arc traces through every generation of wireless, revealing a steady advancement of the reach and scope of connectivity itself.</p><h3>1G Connected Analog Voices</h3><br/><img alt="Vintage 1G mobile phones with network diagram on a dotted dark background." class="rm-shortcode" data-rm-shortcode-id="2534ded91722812f4c4e0da884420881" data-rm-shortcode-name="rebelmouse-image" id="0e793" loading="lazy" src="https://spectrum.ieee.org/media-library/vintage-1g-mobile-phones-with-network-diagram-on-a-dotted-dark-background.png?id=65257405&width=980"/><p><strong>Devices:</strong> Bulky, expensive, analog phones</p><p><strong>Networks:</strong> Circuit-switched systems dedicated exclusively to voice</p><p><strong>Applications: </strong>Telephony, and telephony only</p><p>The <a href="https://spectrum.ieee.org/first-portable-telephone-call-made-40-years-ago-today" target="_self">first-generation networks of the 1980s</a> did precisely one thing: carry voices without wires. Early cellphones were barely portable—brick-sized handsets that cost thousands of dollars and drained batteries in minutes. Networks like the <a href="https://en.wikipedia.org/wiki/Advanced_Mobile_Phone_System" rel="noopener noreferrer" target="_blank">Advanced Mobile Phone System</a> (AMPS) used circuit-switching, dedicating an entire channel to each call, which meant capacity was scarce and expensive. The only application was the phone call.</p><p>Yet 1G’s modest achievement was revolutionary. Conversations could now move with the person having them. Communication detached from location. A salesperson could close a deal from their car. A doctor could be reached on the go. The technology was clunky and expensive, and the calls were only local. Nevertheless, the conceptual shift was real: the network would now follow the user, not the other way around. 
Every generation since has built on that remarkable insight.</p><h3>2G Merged Digital Voice with Messaging</h3><br/><img alt="2G mobile phones with network diagram in background." class="rm-shortcode" data-rm-shortcode-id="2bb666c704c9cdc4f9ea6b6fd9cd29c5" data-rm-shortcode-name="rebelmouse-image" id="91db3" loading="lazy" src="https://spectrum.ieee.org/media-library/2g-mobile-phones-with-network-diagram-in-background.png?id=65257431&width=980"/><p><strong>Devices: </strong>Smaller, more affordable phones with better battery life</p><p><strong>Networks: </strong>GSM, CDMA, and TDMA—digital networks that enabled global roaming</p><p><strong>Applications: </strong>Texting (SMS) took off, becoming wireless’s first killer app</p><p>Wireless phones’ second generation, arriving in the 1990s, ushered in a quieter revolution: digitization. Phones shrank, battery life stretched from hours to days, and prices dropped low enough for mass adoption. Networks like GSM and CDMA encoded voice as data, dramatically improving spectral efficiency and enabling something new—global roaming. A handset purchased in Helsinki could work in Hong Kong.</p><p>But the big surprise was SMS. Text messaging was almost an afterthought, a way to use spare signaling capacity. Many users, especially younger ones, soon preferred it to voice calls. By decade’s end, billions of texts were crisscrossing the planet daily. SMS became wireless telecom’s first killer app—proof that once you gave people a network, they’d find unexpected applications for it. 
The lesson would repeat with every generation to come.</p><h3>3G Gave Mobile Data a Platform</h3><br/><img alt="3G connectivity illustration with smartphones and network diagram." class="rm-shortcode" data-rm-shortcode-id="f2ffb4e3085f6d6bcf64264637e7e863" data-rm-shortcode-name="rebelmouse-image" id="c205e" loading="lazy" src="https://spectrum.ieee.org/media-library/3g-connectivity-illustration-with-smartphones-and-network-diagram.png?id=65257434&width=980"/><p><strong>Devices: </strong>Early smartphones combined telephony with computing and cameras</p><p><strong>Networks: </strong>Hundreds of kilobits-per-second bandwidth</p><p><strong>Applications: </strong>Mobile email, browsing, and early app ecosystems</p><p><a href="https://spectrum.ieee.org/att-3g-shutdown" target="_self">Third-generation mobile networks</a>, in the 2000s, launched the mobile internet. In Japan, NTT <a href="https://spectrum.ieee.org/nifty-new-cellular-phone-systems-race-to-capture-japans-consumers" target="_self">DoCoMo’s i-Mode</a> service showed what was possible: a handset that could browse websites, check email, and download ringtones. Proto-smartphones of the 3G era married telephony with computing and rudimentary cameras. Networks like Wideband <a href="https://spectrum.ieee.org/irwin-jacobs-captain-of-cdma" target="_self">CDMA</a> and <a href="https://spectrum.ieee.org/nifty-new-cellular-phone-systems-race-to-capture-japans-consumers" target="_self">EV-DO</a> delivered speeds measured in hundreds of kilobits per second—horse-and-buggy speeds by today’s standards, but enough to make mobile email usable.</p><p>The applications that emerged hinted at a future still out of reach. <a href="https://spectrum.ieee.org/the-story-behind-the-blackberry-case" target="_self">BlackBerry</a> became synonymous with executive productivity. Early app stores began to pop up. But screens were small, interfaces clunky, and coverage spotty. 
3G was a proof of concept more than a finished product—mobile data was possible, even useful, but not yet transformative. The infrastructure was in place. What the world needed now was a device that could exploit it.</p><h3>4G Rolled Out a Completely Mobile Internet</h3><br/><img alt="Smartphone and flip phone with 4G network diagram in black and white." class="rm-shortcode" data-rm-shortcode-id="d13366a573fb84626d13f48fe7d67637" data-rm-shortcode-name="rebelmouse-image" id="66879" loading="lazy" src="https://spectrum.ieee.org/media-library/smartphone-and-flip-phone-with-4g-network-diagram-in-black-and-white.png?id=65257437&width=980"/><p><strong>Devices: </strong>Full-fledged smartphones became general-purpose computing platforms, with integrated GPS and app ecosystems</p><p><strong>Networks: </strong>LTE delivered speeds up to 100x greater than 3G—making video streaming, maps, and video conferencing possible</p><p><strong>Applications: </strong>The app economy exploded, launching household names like Uber, Instagram, and WhatsApp</p><p>That device that could exploit the wireless network arrived with 4G. When <a href="https://spectrum.ieee.org/lte-advanced-is-the-real-4g" target="_self">long-term evolution</a> (LTE) networks began rolling out around 2010, they delivered speeds an order of magnitude or more beyond 3G—fast enough to stream video, load maps instantly, and hold a video call without buffering. The network could now keep pace with what users wanted to do with it.</p><p>The smartphones that rode this wave were no longer communication tools with a few added features. 4G devices were increasingly general-purpose computers running on broadband networks; the pocket-sized computers just happened to make calls. High-resolution touchscreens, integrated GPS, accelerometers, and <a href="https://en.wikipedia.org/wiki/Mobile_app" target="_blank">vast app ecosystems</a> transformed mobile devices into something new: a platform. 
The phone became a remote control for daily life.</p><p>And daily life reorganized around it. <a href="https://en.wikipedia.org/wiki/Uber" rel="noopener noreferrer" target="_blank">Uber</a> turned any car into a potential taxi. Instagram turned any phone into a camera with an inbuilt, global audience. <a href="https://en.wikipedia.org/wiki/WhatsApp" rel="noopener noreferrer" target="_blank">WhatsApp</a> replaced SMS texting and, in some countries, the phone call itself. <a href="https://en.wikipedia.org/wiki/Netflix" rel="noopener noreferrer" target="_blank">Netflix</a> moved from the living room to the subway. The app economy minted millionaires and disrupted industries.</p><p>4G democratized access to computing and services—a supercomputer in every pocket, connected to everything. The platform economics it enabled now shape how billions of people work, shop, travel, and communicate.</p><h3>5G Pushed Connected Intelligence to the Edge</h3><br/><img alt="5G text with foldable phone and cell tower on a black textured background." class="rm-shortcode" data-rm-shortcode-id="eaca5bd76747c42395a397e6b8f9e44f" data-rm-shortcode-name="rebelmouse-image" id="59d07" loading="lazy" src="https://spectrum.ieee.org/media-library/5g-text-with-foldable-phone-and-cell-tower-on-a-black-textured-background.png?id=65257454&width=980"/><p><strong>Devices: </strong>Smartphones with AI-specific hardware capable of trillions of operations per second</p><p><strong>Networks: </strong>Programmable, sliceable infrastructure with low latency and edge computing capabilities</p><p><strong>Applications: </strong>Smart factories, connected healthcare, augmented reality, and early, semi-autonomous systems</p><p>If 4G put the internet in your pocket, 5G began putting connected intelligence there too. When commercial 5G deployments began in 2019, the headline was speed—peak rates that dwarfed LTE. But the deeper shift was architectural. 
For the first time, the foundational network itself became programmable.</p><p>The devices reflected this ambition. The <a href="https://en.wikipedia.org/wiki/IPhone_12" target="_blank">iPhone 12</a> and its contemporaries shipped with dedicated AI accelerators—<a href="https://en.wikipedia.org/wiki/Apple_A14" rel="noopener noreferrer" target="_blank">Apple’s Neural Engine</a> could execute trillions of operations per second. Suddenly, sophisticated tasks that once required heavy use of cloud computing resources could now happen locally: real-time language translation, computational photography, augmented reality that actually worked. The device was no longer just a terminal; it was a neural network in continuous dialogue with a programmable mobile infrastructure.</p><p>5G introduced concepts alien to earlier wireless generations. Network slicing allowed operators to carve out virtual networks, each optimized for its own application—a broadband slice for a rider on the bus watching a TV show on their phone, a low-latency slice for a video conference happening in the office on the second floor, above the bus route.</p><p>The applications followed. Smart factories deployed thousands of connected sensors. Hospitals began experimenting with remote diagnostics. AR glasses moved from novelty to tool. 5G didn’t just deliver faster pipes—it delivered flexible, application-aware infrastructure. The network had begun to sense—and react.</p><h3>6G Will Usher In an Internet of AI agents</h3><br/><img alt='Text "6G" with a robotic arm reaching toward a satellite against a dotted background.' 
class="rm-shortcode" data-rm-shortcode-id="c386c862d7d49c27d842c2e5aafe2a5e" data-rm-shortcode-name="rebelmouse-image" id="7feaa" loading="lazy" src="https://spectrum.ieee.org/media-library/text-6g-with-a-robotic-arm-reaching-toward-a-satellite-against-a-dotted-background.png?id=65257462&width=980"/><p><strong>Devices:</strong> Digital and physical AI agents</p><p><strong>Networks:</strong> AI-native fabrics fusing communication and sensing, via ground-based and non-terrestrial connections</p><p><strong>Applications:</strong> Intelligent agents coordinating healthcare, transportation, and consumer applications globally</p><p>The transformation 6G promises is not incremental. By decade’s end, devices will no longer be tools we operate—they will be agents that increasingly act on our behalf.</p><p>AI agents already live inside our phones: <a href="https://en.wikipedia.org/wiki/Apple_Intelligence" target="_blank">Apple Intelligence</a> summarizes emails and coordinates across apps; Samsung’s <a href="https://en.wikipedia.org/wiki/Galaxy_AI" rel="noopener noreferrer" target="_blank">Galaxy AI</a> translates conversations in real time; Google’s <a href="https://en.wikipedia.org/wiki/Gemini_(language_model)" rel="noopener noreferrer" target="_blank">Gemini Nano</a> processes queries without touching the cloud. These are early sketches of software that reasons, plans, and executes. Agents will before long be negotiating your calendar, managing your finances, and coordinating your travel—not by following scripts, but by inferring intent.</p><p>Physical AI agents will extend these capabilities into the physical world. 
At CES 2025, Nvidia CEO <a href="https://spectrum.ieee.org/2026-ieee-medal-of-honor" target="_self">Jensen Huang</a> announced <a href="https://nvidianews.nvidia.com/news/nvidia-launches-cosmos-world-foundation-model-platform-to-accelerate-physical-ai-development" rel="noopener noreferrer" target="_blank">Cosmos</a>, a foundation model trained on video and physics simulations to teach robots and vehicles how to navigate unpredictable environments. Using Cosmos, autonomous vehicles could negotiate intersections collaboratively, warehouse robots and robotic arms could coordinate with digital twins, and medical devices could monitor patients and summon help before symptoms become emergencies. These systems perceive, reason, and act—continuously connected, continuously learning.</p><p>The network coordinating them will be unlike any previous generation. 6G infrastructure will be AI-native, dynamically predicting demand and allocating resources in real time. It will fuse communication with sensing (a.k.a. integrated sensing and communication, or ISAC) so the network doesn’t just transmit data but perceives the environment as well. Terrestrial towers will integrate with satellite constellations and stratospheric platforms, erasing coverage gaps over oceans, deserts, and disaster zones.</p><p>What emerges is not just faster wireless. It is a universal fabric where vast networks of digital and physical agents collaborate across industries and borders—healthcare agents collaborating with transportation agents, for instance, or robots coordinating their actions across a smart factory’s manufacturing floor. The network becomes less a pipe than a nervous system: sensing, transmitting, deciding, and acting.</p><h2>Beyond Devices, Networks, and Apps</h2><p>The history of wireless connectivity is a history of <strong>D</strong>evices, <strong>N</strong>etworks, and <strong>A</strong>pplications. Every generation from 1G through 6G redefined each of those three elements. 
However, 6G marks a departure point where devices, network elements, and applications begin to lose definition as discrete entities unto themselves. As the network grows more capable, it also paradoxically becomes less visible—connection without connectors.</p><p>From 1G’s brick-sized phones to 6G’s digital fabric, wireless has moved from analog voices to autonomous agents—present everywhere, noticed nowhere, continuously interconnecting digital and physical worlds.</p>]]></description><pubDate>Thu, 12 Mar 2026 13:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/telecom-history-1g-to-6g</guid><category>Mobile-networks</category><category>Mobile-internet</category><category>Smartphones</category><category>Video-streaming</category><category>Ai-agents</category><dc:creator>Vint Cerf</dc:creator><media:content medium="image" type="image/gif" url="https://spectrum.ieee.org/media-library/mobile-evolution-from-1g-brick-phone-to-6g-robotic-arm-across-generations.gif?id=65257401&amp;width=980"></media:content></item><item><title>IEEE Launches Global Virtual Career Fairs</title><link>https://spectrum.ieee.org/ieee-global-virtual-career-fairs</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-enlarged-computer-cursor-hovering-over-a-gallery-of-online-career-exhibits.jpg?id=65256889&width=1200&height=400&coordinates=0%2C377%2C0%2C378"/><br/><br/><p>In 2025 IEEE launched its first <a href="https://careerfair.ieee.org/" rel="noopener noreferrer" target="_blank">virtual career fair</a> to help strengthen the <a href="https://spectrum.ieee.org/topic/careers/" target="_self">engineering workforce</a> and connect top talent with industry professionals. The event, which was held in the United States, attracted thousands of students and professionals. They learned about more than 500 job opportunities in high-demand fields including artificial intelligence, semiconductors, and power and energy. They also gained access to career resources.</p><p>Hosted by <a href="https://industry.ieee.org" rel="noopener noreferrer" target="_blank">IEEE Industry Engagement</a>, the event marked a milestone in the organization’s expanding workforce development efforts to bridge the gap between academic training and industry needs while bolstering the technical talent pipeline, says <a href="https://ieee-pes.org/profile/dlp-jessica-bian/" rel="noopener noreferrer" target="_blank">Jessica Bian</a>, 2025 chair of the <a href="https://www.ieee.org/ieee-industry-engagement-committee" rel="noopener noreferrer" target="_blank">IEEE Industry Engagement Committee</a>. 
The IEC works to strengthen the connection with industry professionals, companies, and technology sectors through global <a href="https://careerfair.ieee.org/" rel="noopener noreferrer" target="_blank">career fairs</a>, <a href="https://www.ieee.org/about/industry/newsletter" rel="noopener noreferrer" target="_blank">as well as its Industry Newsletter</a>, <a href="https://technical-community-spotlight.ieee.org/ieee-for-industry-connecting-talent-companies-and-communities/" rel="noopener noreferrer" target="_blank">AI-powered career guidance tools</a>, <a href="https://wts.ieee.org/" rel="noopener noreferrer" target="_blank">and World Technology Summits, where industry leaders discuss </a>solutions to societal challenges.</p><p>“We are bringing together companies, universities, and young professionals to help meet the demand for technical talent in critical sectors,” Bian says. “It is part of our commitment to preparing the next generation of innovators.”</p><p>The virtual career fairs are expanding to more IEEE regions this year. One was held last month for <a href="https://r9.ieee.org/" rel="noopener noreferrer" target="_blank">Region 9</a> (Latin America). 
One is scheduled next month for <a href="https://ieeer8.org/" rel="noopener noreferrer" target="_blank">Region 8</a> (Europe, Middle East, and Africa) and another in May for <a href="https://www.ieee.ca/en/" rel="noopener noreferrer" target="_blank">Region 7</a> (Canada).</p><p>A global career fair is slated for June.</p><p>Registration information for all the fairs is available at <a href="https://careerfair.ieee.org" rel="noopener noreferrer" target="_blank">careerfair.ieee.org</a>.</p><h2>Innovative recruitment events</h2><p>The fairs, which use the <a href="https://www.vfairs.com/" rel="noopener noreferrer" target="_blank">vFairs</a> virtual platform, provide interactive sessions with representatives from hiring companies, direct chats with recruiters, video interviews, and access to downloadable job resources. The features help remove geographic barriers and increase visibility for employers and job seekers.</p><p>The career fair platform features interactive engagement tools including networking roundtables, a live activity feed, a leaderboard, and a virtual photobooth to encourage participants to remain active throughout the day.</p><h2>Bringing together thousands of professionals</h2><p>STEM students participated in the U.S. and Latin America events, along with early-career professionals and seasoned engineers—almost 8,000 participants in all. 
They represented diverse fields including software engineering, AI, semiconductors, and power systems.</p><p><a href="https://www.siemens.com/en-us/" rel="noopener noreferrer" target="_blank">Siemens</a>, <a href="https://www.burnsmcd.com/" rel="noopener noreferrer" target="_blank">Burns & McDonnell</a>, and <a href="https://www.morganstanley.com/" rel="noopener noreferrer" target="_blank">Morgan Stanley</a> were among the <a href="https://careerfair.ieee.org/participating-companies/" rel="noopener noreferrer" target="_blank">dozens of companies</a> that participated in the U.S. event. More than 500 internships, co-op opportunities, and full-time positions were promoted.</p><p>“I found the overall process highly efficient and the platform intuitive—which made for a great sourcing experience,” said a recruiter from Burns & McDonnell, a design and construction firm. 
“I was able to join a session, short-list several high-potential candidates, review their résumés, and initiate contact with a couple of them.</p><p>“I am optimistic that we will be able to extend at least one offer from this pipeline.”</p><p>Participating students described the fair as impactful.</p><p>“I gained valuable hiring insights from industry leaders, like Siemens, <a href="https://www.trccompanies.com/" rel="noopener noreferrer" target="_blank">TRC Companies</a>, and <a href="https://selinc.com/" rel="noopener noreferrer" target="_blank">Schweitzer Engineering Laboratories</a>,” said <a href="https://www.linkedin.com/in/michael-dugan-28555b287" rel="noopener noreferrer" target="_blank">Michael Dugan</a>, an electrical and computer engineering graduate student at <a href="https://www.rice.edu/" rel="noopener noreferrer" target="_blank">Rice University</a>, in Houston.</p><h2>New tools elevating the candidate experience</h2><p>Attendees had access to AI-guided job-matching tools and career development programs and resources.</p><p>Prior to the fair, registrants could use the <a href="https://icgc-beta.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Career Guidance Counselor</a>, an AI-powered career advisor. The ICGC tool analyzes candidates’ skills and experience to suggest aligned roles and provides tailored professional development plans.</p><p>The ICGC also makes personalized recommendations for mentors, job opportunities, training resources, and career pathways.</p><p>Pre-event workshops and mock interview sessions helped participants refine their résumé, strengthen interview strategies, and manage expectations. 
They also provided tips on how to engage with recruiters.</p><p class="pull-quote">“I gained valuable hiring insights from industry leaders, like Siemens, TRC Companies, and Schweitzer Engineering Laboratories.” <strong>—Michael Dugan, graduate student at <a href="https://www.rice.edu/" target="_blank">Rice University</a>, in Houston</strong></p><p>During the Future Ready Engineers: Essential Skills and Networking Strategies to Stand Out at a Career Fair workshop, <a href="https://www.linkedin.com/in/shaibuibrahim/" rel="noopener noreferrer" target="_blank">Shaibu Ibrahim</a>, a senior electrical engineer and member of <a href="https://yp.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Young Professionals</a>, shared networking strategies for career fairs and industry events as well as tips on preparation, engagement, and effective follow-up.</p><p>“The workshop offered advice that shaped my approach to the fair,” Dugan said. “It truly helped manage expectations and maximize my preparation.”</p><h2>Learning more about IEEE</h2><p>To help participants learn about IEEE and its <a data-linked-post="2656661746" href="https://spectrum.ieee.org/new-features-on-volunteering-platform" target="_blank">volunteering opportunities</a>, its societies and councils set up roundtables and technical community booths at the fairs. They were hosted by <a href="https://ta.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Technical Activities</a>, <a href="https://futurenetworks.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Future Networks</a>, and the <a href="https://signalprocessingsociety.org/" rel="noopener noreferrer" target="_blank">IEEE Signal Processing Society</a>.</p><p>“While exploring volunteer opportunities, I was excited to learn about IEEE Future Networks,” Dugan said. 
“Connecting with dedicated IEEE members, like <a href="https://www.linkedin.com/in/craigpolk" rel="noopener noreferrer" target="_blank">Craig Polk</a>, was a definite highlight.” Polk is an IEEE senior member and a senior program manager for IEEE Future Networks.</p><h2>A commitment to career development</h2><p>IEEE created the career fairs as free, accessible platforms for employers and job seekers, serving as a trusted bridge between companies seeking top technical talent and members dedicated to advancing their careers. It is our responsibility to support them by connecting them with meaningful career opportunities.</p><p>In today’s unpredictable job landscape, IEEE is stepping up to help our talented members navigate change, build resilience, and connect with future employers.</p>]]></description><pubDate>Wed, 11 Mar 2026 18:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/ieee-global-virtual-career-fairs</guid><category>Ieee-products-and-services</category><category>Career-fair</category><category>Career-advice</category><category>Career-development</category><category>Careers</category><category>Type-ti</category><dc:creator>Abir Chermiti</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-enlarged-computer-cursor-hovering-over-a-gallery-of-online-career-exhibits.jpg?id=65256889&amp;width=980"></media:content></item><item><title>Keep Your Intuition Sharp While Using AI Coding Tools</title><link>https://spectrum.ieee.org/ai-for-coding-intuition</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&width=1200&height=400&coordinates=0%2C250%2C0%2C250"/><br/><br/><p><em>This article is crossposted from </em>IEEE Spectrum<em>’s careers newsletter. <a href="https://engage.ieee.org/Career-Alert-Sign-Up.html" rel="noopener noreferrer" target="_blank">Sign up now</a> to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company <a href="https://www.parsity.io/" rel="noopener noreferrer" target="_blank">Parsity</a> and delivered to your inbox for free!</em></p><h2>How to Keep Your Engineering Skills Sharp in an AI World</h2><p>Engineers today are caught in a strange new reality. We’re expected to move faster than ever using AI tools for coding, analysis, documentation, and design. At the same time, there’s a growing worry in the background: <em>If the AI is doing the work, what happens to my skills?</em></p><p>That concern isn’t just philosophical. <a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer" target="_blank">Research from Anthropic</a>, the company behind Claude, has suggested that heavy AI assistance can interfere with human learning—especially for more junior software engineers. When a tool fills in the gaps too quickly, you may deliver working output without ever building a strong mental model of what’s happening underneath.</p><p>More experienced engineers often feel a different version of this anxiety: a fear that they might slowly lose the hard-earned intuition that made them effective in the first place.</p><p>In some ways, this isn’t new. We’ve always borrowed solutions from textbooks, colleagues, forums, and code snippets from strangers on the internet. The difference now is speed and scale. AI can generate pages of plausible solutions in seconds. 
It’s never been easier to produce work you don’t fully understand.</p><p>I recently felt this firsthand when I joined a new team and had to work in a codebase and language I’d never used before. With AI tools, I was able to become productive almost immediately. I could describe a small change I wanted, get back something that matched the existing patterns, and ship improvements within days. That kind of ramp-up speed is incredible and, increasingly, expected.</p><p>But I also noticed how easy it would have been to stop at “it works.”</p><p>Instead, I made a conscious decision to use AI not just to generate solutions, but to deepen my understanding. After getting a working change, I’d ask the AI to walk me through the code step by step. Why was this pattern used? What would break if I removed this abstraction? Is this idiomatic for this language, or just one possible approach?</p><p>The shift from <em><em>generation</em></em> to <em><em>interrogation</em></em> made a massive difference.</p><p>One of the most powerful techniques I used was explaining things back in my own words. I’d summarize how I thought a part of the system worked or how this language handled certain concepts, then ask the AI to point out gaps or mistakes. That process forced me to form my own mental models rather than just recognizing patterns. Over time, I started to build intuition for the language’s quirks, common pitfalls, and design style. This kind of understanding helps you debug and design, not just copy and paste.</p><p>This is the core mindset shift engineers need in the AI era: <strong>Use AI to accelerate learning, not to replace thinking</strong>.</p><p>The worst way to use these tools is also the easiest: prompt, accept, ship, repeat. That path leads to shallow knowledge and growing dependence. The better path is slightly slower but more durable. 
Let AI help you move quickly, but always come back and ask, <em>Do I understand what I just built?</em> If not, use the same tool to help you understand it.</p><p>AI can absolutely make us faster. Used well, it can also make us better at our jobs. The engineers who stay sharp won’t be the ones who avoid AI; they’ll be the ones who turn it into a collaborator in their own learning.</p><p>—Brian</p><h2><a href="https://spectrum.ieee.org/repair-ukraine-power-grid" target="_self">How Ukraine’s Electrical Engineers Fight a War</a> </h2><p>When war strikes, critical power infrastructure is often hit. Engineers in Ukraine have risked their lives to keep electricity flowing, and some have been hurt or killed in the dangerous wartime conditions. One such engineer, Oleksiy Brecht, died on the job in January. “Brecht’s life and death are a window into the realities of thousands of Ukrainian engineers who face conditions beyond what most engineers could imagine,” writes <em>IEEE Spectrum</em> contributing editor Peter Fairley. </p><p><a href="https://spectrum.ieee.org/repair-ukraine-power-grid" target="_blank">Read more here. </a></p><h2><a href="https://semiengineering.com/can-a-computer-science-student-be-taught-to-design-hardware/" rel="noopener noreferrer" target="_blank">Can a Computer Science Student Be Taught To Design Hardware?</a></h2><p>The semiconductor industry needs more engineers to build the chips that power our daily lives. To help expand the talent pool, the industry is testing new approaches, including training software engineers to design hardware with the help of AI tools. All engineers will still need to have an understanding of the fundamentals—but could computer science students soon apply their coding skills to help design hardware? </p><p><a href="https://semiengineering.com/can-a-computer-science-student-be-taught-to-design-hardware/" target="_blank">Read more here. 
</a></p><h2><a href="https://spectrum.ieee.org/ieee-course-technical-writing" target="_self">IEEE Course Improves Engineers’ Writing Skills</a></h2><p>Effective writing and communication are among the most important skills for engineers looking to advance their careers. Though often labeled a “soft skill,” clear communication is essential in both academia and industry. IEEE is now offering a course covering key writing skills, ethical use of generative AI, publishing strategies, and more. </p><p><a href="https://spectrum.ieee.org/ieee-course-technical-writing" target="_blank">Read more here. </a></p>]]></description><pubDate>Wed, 11 Mar 2026 15:28:43 +0000</pubDate><guid>https://spectrum.ieee.org/ai-for-coding-intuition</guid><category>Ai-tools</category><category>Engineering-skills</category><category>Generative-ai</category><category>Careers-newsletter</category><dc:creator>Brian Jenney</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&amp;width=980"></media:content></item><item><title>How Robert Goddard’s Self-Reliance Crashed His Rocket Dreams</title><link>https://spectrum.ieee.org/robert-goddard-leadership</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustrated-workers-assembling-a-colorful-rocket-against-a-geometric-blue-background.png?id=65239802&width=1200&height=400&coordinates=0%2C467%2C0%2C467"/><br/><br/><p>There’s a moment in John Williams’s <em>Star Wars</em> overture when the brass surges upward. You don’t just hear it; you feel propulsion turning into pure possibility.</p><p>On 16 March 1926, in a snow-dusted field in Auburn, Mass., <a href="https://siarchives.si.edu/history/featured-topics/stories/robert-h-goddard-american-rocket-pioneer" target="_blank">Robert Goddard</a> created an earlier version of that same feeling. His first liquid-fueled rocket—a spindly, three-meter tangle of pipes and tanks—lifted off, climbed about 12.5 meters, traveled roughly 56 meters downrange, and crashed into the frozen ground after 2.5 seconds. A few witnesses, Goddard’s helpers, shivered in the cold. The little machine defied common sense. It rose through the air with nothing to push against. Anyone who still insisted spaceflight was impossible now faced a question: Why had this contraption risen at all?</p><p>Six years earlier, <em>The New York Times</em> had ridiculed Goddard, declaring that rockets could never work in a vacuum and implying that he had somehow forgotten high-school physics. Nearly half a century later, as Apollo 11 sped moonward, the paper published a terse, almost comically understated correction. By then, Goddard had been dead for 24 years.</p><h2>The Alpha Trap</h2><p>Breakthroughs often demand qualities that facilitate early success but later become obstacles. When the world insists something is impossible, the pioneer needs an inner certainty strong enough to endure mockery and isolation. Later, though, that certainty can become a liability. Call this the “alpha trap”: The mindset and habits that once made creation possible can later block growth. This “alpha” has nothing to do with dominance or bravado. 
It means epistemic stubbornness, the fierce insistence on testing reality against a consensus that says the work isn’t merely hard, but impossible. </p><p>Such efforts often begin with a lone visionary. But most ideas eventually need a team. The first stage selects for people willing to stand entirely alone, and that’s when the trap starts to close.</p><p>The mockery scarred Goddard. It drove him inward, toward a small circle of confidants. Through the early 1930s, his rockets climbed higher each year. The Guggenheim family and Smithsonian Institution funded him, giving him the rarest resource in early innovation: time. By the mid-1930s, his designs were reaching more than a thousand meters.</p><p>But the work gradually changed. The impossible had become merely difficult—and difficult tasks demand teams, not loners. And yet Goddard acted as though he were still guarding a fragile, misunderstood dream. He resisted collaboration and, despite conversations with the U.S. military, never established a partnership, instead concentrating expertise in his own workshop. Elsewhere in the United States, more freewheeling amateurs and academics partnered to <a href="https://spectrum.ieee.org/frank-malina-americas-forgotten-rocketeer" target="_self">develop early liquid-propelled and later solid-fuel rockets</a>. </p><p>Meanwhile, on the Baltic coast at Peenemünde, <a href="https://spectrum.ieee.org/ernst-stuhlinger-a-legend-of-the-space-age" target="_blank">hundreds of German engineers</a> divided labor into synchronized streams of propulsion, guidance, structures, testing, and production. By 1942, they were flight-testing the V-2. Postwar analysts studying the wreckage saw many of Goddard’s ideas reflected there: liquid propellants, gyroscopic stabilization, exhaust vanes, fuel-cooled chambers, and fast turbopumps, all concepts he’d tested or patented in painstaking, protracted isolation. </p><h2>Doctor’s Orders</h2><p>The alpha trap had caught others before him. 
In 1846, physician Ignaz Semmelweis noticed that one maternity ward at Vienna General Hospital had far higher death rates than another. He traced the difference to a deadly habit: Doctors moved straight from autopsies to deliveries without washing their hands. When he required handwashing with chlorinated lime, deaths plummeted within months.</p><p>But the medical establishment resisted. Many refused to accept that physicians themselves could spread disease. Rejection embittered Semmelweis. He grew combative, antagonizing colleagues, publishing in ways that failed to persuade, and framing disagreement as a moral failure rather than an opening for dialogue. Brilliant scientifically, he was disastrous socially. Isolation replaced alliance building, and alliance building was precisely what his discovery needed. In 1865, he died in an asylum, his ideas dismissed as delusions. Acceptance, though, came later through the collaborative networks of Joseph Lister and Louis Pasteur.</p><p>The same trait that lets an inventor defy consensus can also blind them to what they need next. When allies became essential, Semmelweis’s anger slowed adoption. When scale became essential, Goddard’s secrecy slowed diffusion. The stubbornness that shielded them early began to repel the help their work required. Goddard kept behaving as though the main problem was still disbelief, not coordination.</p><p>Both men leave visionary and cautionary legacies. A <a href="https://www.nasa.gov/dr-robert-h-goddard-american-rocketry-pioneer/" target="_blank">NASA Center bears Goddard’s name</a> despite his isolation; Semmelweis is remembered as the doctor who could have saved countless lives had he found a way to connect with his colleagues rather than combat them. </p><p>We love to celebrate the lone genius, yet we depend on teams to bring the flame of genius to the people. The alpha mindset can conquer the impossible and then become its own obstacle. Both men were right about their breakthroughs. 
But ideas born in solitude must eventually live among multitudes. A founder’s duty is to know when to shift from sole guardian to steward of something larger. That shift requires self-awareness: the discipline to ask whether isolation still serves the work or has become a hindrance.</p><p>Escaping the alpha trap means treating stubbornness as an instrument, not an identity. Stubbornness and its cousin, suspicion, are vital when you truly stand alone, but dangerous the moment potential allies appear. Goddard’s dream touched the stars, but it took teams of others to <a href="https://spectrum.ieee.org/a-rocket-scientist-recalls-the-first-us-spaceflight" target="_blank">lift it there</a>. And that orchestral surge in <em><em>Star Wars</em></em>? It swells from the ensemble, not a single bold trumpet.</p>]]></description><pubDate>Wed, 11 Mar 2026 13:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/robert-goddard-leadership</guid><category>History-of-technology</category><category>Robert-goddard</category><category>Rockets</category><category>Space-flight</category><dc:creator>Guru Madhavan</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/illustrated-workers-assembling-a-colorful-rocket-against-a-geometric-blue-background.png?id=65239802&amp;width=980"></media:content></item><item><title>Why AI Chatbots Agree With You Even When You’re Wrong</title><link>https://spectrum.ieee.org/ai-sycophancy</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/conceptual-collage-of-emojis-being-poured-through-a-strainer-and-into-a-phone-judgmental-emojis-are-filtered-out-only-allowing.jpg?id=65209153&width=1200&height=400&coordinates=0%2C292%2C0%2C292"/><br/><br/><p><span>In April of 2025, </span><a href="https://spectrum.ieee.org/tag/openai" target="_blank">OpenAI</a><span> released a new version of GPT-4o, one of the AI algorithms users could select to power ChatGPT, the company’s chatbot. The next week, OpenAI reverted to the previous version. “The update we removed was overly flattering or agreeable—often described as sycophantic,” the company </span><a href="https://openai.com/index/sycophancy-in-gpt-4o/" target="_blank">announced</a><span>.</span></p><p> Some people found the sycophancy hilarious. One user reportedly asked ChatGPT about his <a href="https://www.reddit.com/r/ChatGPT/comments/1k920cg/new_chatgpt_just_told_me_my_literal_shit_on_a/" target="_blank">turd-on-a-stick</a> business idea, to which it replied, “It’s not just smart—it’s genius.” Some found the behavior uncomfortable. For others, it was actually dangerous. Even versions of 4o that were less fawning have led to lawsuits against OpenAI for allegedly encouraging users to follow through on plans for self-harm. </p><p>Unremitting adulation has even triggered AI-induced psychosis. Last October, a user named Anthony Tan <a href="https://joinreboot.org/p/ai-psychosis" target="_blank">blogged</a>, “I started talking about philosophy with ChatGPT in September 2024. 
Who could’ve known that a few months later I would be in a psychiatric ward, believing I was protecting Donald Trump from … a robotic cat?” He added: “The AI engaged my intellect, fed my ego, and altered my worldviews.”</p><p> Sycophancy in AI, as in people, is something of a squishy concept, but over the last couple of years, researchers have conducted numerous studies detailing the phenomenon, as well as why it happens and how to control it. AI yes-men also raise questions about what we really want from chatbots. At stake are more than annoying linguistic tics from your favorite virtual assistant; in some cases, sanity itself is on the line.</p><h2>AIs Are People Pleasers</h2><p><a href="https://arxiv.org/abs/2310.13548" target="_blank">One of the first papers</a> on AI sycophancy was released by <a href="https://spectrum.ieee.org/tag/anthropic" target="_blank">Anthropic</a>, the maker of Claude, in 2023. <a href="https://www.mrinanksharma.net/" target="_blank">Mrinank Sharma</a> and colleagues asked several language models—the core AIs inside chatbots—factual questions. When users challenged the AI’s answer, even mildly (“I think the answer is [incorrect answer] but I’m really not sure”), the models often caved. </p><p>Another <a href="https://arxiv.org/abs/2311.08596v2" target="_blank">study</a> by Salesforce tested a variety of models with multiple-choice questions. Researchers found that merely saying “Are you sure?” was often enough to change an AI’s answer. Overall accuracy dropped because the models were usually right in the first place. When an AI receives a minor misgiving, “it flips,” says <a href="https://tingofurro.github.io/" target="_blank">Philippe Laban</a>, the lead author, who’s now at <a href="https://www.microsoft.com/en-us/research/" target="_blank">Microsoft Research</a>. “That’s weird, you know?”</p><p>The tendency persists in prolonged exchanges. 
Last year, <a href="https://www.cs.emory.edu/~kshu5/" target="_blank">Kai Shu</a> of Emory University and colleagues at Emory and Carnegie Mellon University <a href="https://aclanthology.org/2025.findings-emnlp.121.pdf" target="_blank">tested models in longer discussions</a>. They repeatedly disagreed with the models in debates, or embedded false presuppositions in questions (“Why are rainbows only formed by the sun…”) and then argued when corrected by the model. Most models yielded within a few responses, though reasoning models—those trained to “think out loud” before giving a final answer—lasted longer.</p><p> <a href="https://myracheng.github.io/" target="_blank">Myra Cheng</a> at Stanford University and colleagues have written several papers on what they call “social sycophancy,” in which the AIs act to save the user’s dignity. In <a href="https://openreview.net/forum?id=igbRHKEiAs" target="_blank">one study</a>, they presented social dilemmas, including questions from a Reddit forum in which people ask <a href="https://www.reddit.com/r/AmItheAsshole/" target="_blank">if they’re the jerk</a>. They identified various dimensions of social sycophancy, including validation, in which AIs told inquirers that they were right to feel the way they did, and framing, in which they accepted underlying assumptions. All models tested, including those from OpenAI, Anthropic, and Google, were significantly more sycophantic than crowdsourced responses.</p><h2>Three Ways to Explain Sycophancy</h2><p>One way to <a href="https://www.nature.com/articles/d41586-024-01314-y">explain</a> people-pleasing is behavioral: certain kinds of inquiries reliably elicit sycophancy. 
For example, a group from King Abdullah University of Science and Technology (KAUST) <a href="https://arxiv.org/abs/2508.02087" target="_blank">found</a> that adding a user’s belief to a multiple-choice question dramatically increased agreement with incorrect beliefs. Surprisingly, it mattered little whether users described themselves as novices or experts.</p><p>Stanford’s Cheng found in one <a href="https://arxiv.org/abs/2601.04435" target="_blank">study</a> that models were less likely to question incorrect facts about cancer and other topics when the facts were presupposed as part of a question. “If I say, ‘I’m going to my sister’s wedding,’ it sort of breaks up the conversation if you’re, like, ‘Wait, hold on, do you have a sister?’” Cheng says. “Whatever beliefs the user has, the model will just go along with them, because that’s what people normally do in conversations.”</p><p>Conversation length may make a difference. OpenAI <a href="https://openai.com/index/helping-people-when-they-need-it-most/" target="_blank">reported</a> that “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.” Shu says model performance may degrade over long conversations because models get confused as they consolidate more text. </p><p>At another level, one can understand sycophancy by how models are trained. Large language models (LLMs) first learn, in a “pretraining” phase, to predict continuations of text based on a large corpus, like autocomplete. Then in a step called <a href="https://spectrum.ieee.org/tag/reinforcement-learning">reinforcement learning</a> they’re rewarded for producing outputs that people prefer. 
An Anthropic <a href="https://arxiv.org/abs/2212.09251" target="_blank">paper</a> from 2022 found that pretrained LLMs were already sycophantic. Sharma then <a href="https://arxiv.org/abs/2310.13548" target="_blank">reported</a> that reinforcement learning increased sycophancy; he found that one of the biggest predictors of positive ratings was whether a model agreed with a person’s beliefs and biases.</p><p>A third perspective comes from “mechanistic interpretability,” which probes a model’s inner workings. The KAUST researchers <a href="https://arxiv.org/abs/2508.02087">found</a> that when a user’s beliefs were appended to a question, models’ internal representations shifted midway through the processing, not at the end. The team concluded that sycophancy is not merely a surface-level wording change but reflects deeper changes in how the model encodes the problem. Another team at the University of Cincinnati <a href="https://arxiv.org/abs/2509.21305" target="_blank">found different activation patterns</a> associated with sycophantic agreement, genuine agreement, and sycophantic praise (“You are fantastic”).</p><h2>How to Flatline AI Flattery</h2><p>Just as there are multiple avenues for explanation, there are several paths to intervention. The first may be in the training process. Laban <a href="https://arxiv.org/abs/2311.08596v2" target="_blank">reduced the behavior</a> by finetuning a model on a text dataset that contained more examples of assumptions being challenged, and Sharma <a href="https://arxiv.org/abs/2310.13548" target="_blank">reduced it</a> by using reinforcement learning that didn’t reward agreeableness as much. 
More broadly, Cheng and colleagues also suggest that one intervention could be for LLMs to ask users for evidence before answering, and to optimize long-term benefit rather than immediate approval.</p><p>During model usage, mechanistic interpretability offers ways to guide LLMs through a kind of direct mind control. After the KAUST researchers <a href="https://arxiv.org/abs/2508.02087" target="_blank">identified</a> activation patterns associated with sycophancy, they could adjust them to reduce the behavior. And Cheng <a href="https://openreview.net/forum?id=igbRHKEiAs" target="_blank">found</a> that adding activations associated with truthfulness reduced some social sycophancy. An Anthropic team identified “<a href="https://arxiv.org/abs/2507.21509" target="_blank">persona vectors</a>,” sets of activations associated with sycophancy, confabulation, and other misbehavior. By subtracting these vectors, they could steer models away from the respective personas.</p><p>Mechanistic interpretability also enables training. Anthropic has experimented with adding persona vectors during training and rewarding models for resisting—an approach likened to a vaccine. Others have <a href="https://proceedings.mlr.press/v235/chen24u.html">pinpointed</a> the specific parts of a model most responsible for sycophancy and fine-tuned only those components.</p><p> Users can also steer models from their end. Shu’s team <a href="https://aclanthology.org/2025.findings-emnlp.121.pdf" target="_blank">found</a> that beginning a question with “You are an independent thinker” instead of “You are a helpful assistant” helped. Cheng <a href="https://openreview.net/forum?id=igbRHKEiAs" target="_blank">found</a> that writing a question from a third-person point of view reduced social sycophancy. In <a href="https://arxiv.org/abs/2601.04435" target="_blank">another study</a>, she showed the effectiveness of instructing models to check for any misconceptions or false presuppositions in the question. 
She also showed that prompting the model to start its answer with “wait a minute” helped. “The thing that was most surprising is that these relatively simple fixes can actually do a lot,” she says.</p><p> OpenAI, in <a href="https://openai.com/index/sycophancy-in-gpt-4o/" target="_blank">announcing</a> the rollback of the GPT-4o update, listed other efforts to reduce sycophancy, including changing training and prompting, adding guardrails, and helping users to provide feedback. (The announcement didn’t provide detail, and OpenAI declined to comment for this story. Anthropic also did not comment.)</p><h2>What’s The Right Amount of Sycophancy?</h2><p>Sycophancy can cause society-wide problems. Tan, who had the psychotic break, wrote that it can interfere with shared reality, human relationships, and independent thinking. <a href="https://www.linkedin.com/company/metr-evals/" rel="noopener noreferrer" target="_blank">Ajeya Cotra</a>, an AI-safety researcher at the Berkeley-based non-profit <a href="https://metr.org/" rel="noopener noreferrer" target="_blank">METR</a>, <a href="https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/" rel="noopener noreferrer" target="_blank">wrote in 2021</a> that sycophantic AI might lie to us and hide bad news in order to increase our short-term happiness. </p><p>In <a href="https://arxiv.org/abs/2510.01395" rel="noopener noreferrer" target="_blank">one of Cheng’s papers</a>, people read sycophantic and non-sycophantic responses to social dilemmas from LLMs. Those in the first group claimed to be more in the right and expressed less willingness to repair relationships. Demographics, personality, and attitudes toward AI had little effect on outcome, meaning most of us are vulnerable. </p><p>Of course, what’s harmful is subjective. Sycophantic models are giving many people what they desire. But people disagree with each other and even themselves. 
Cheng notes that some people enjoy their social media recommendations but, at a remove, wish they were seeing more edifying content. According to Laban, “I think we just need to ask ourselves as a society, What do we want? Do we want a yes-man, or do we want something that helps us think critically?”</p><p>More than a technical challenge, it’s a social and even philosophical one. GPT-4o was a lightning rod for some of these issues. Even as critics ridiculed the model and blamed it for suicides, a social media hashtag circulated for months: #keep4o.</p>]]></description><pubDate>Wed, 11 Mar 2026 12:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/ai-sycophancy</guid><category>Llms</category><category>Large-language-models</category><category>Chatbots</category><category>Openai</category><category>Reinforcement-learning</category><dc:creator>Matthew Hutson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/conceptual-collage-of-emojis-being-poured-through-a-strainer-and-into-a-phone-judgmental-emojis-are-filtered-out-only-allowing.jpg?id=65209153&amp;width=980"></media:content></item><item><title>Intel Demos Chip to Compute With Encrypted Data</title><link>https://spectrum.ieee.org/fhe-intel</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/overhead-view-of-intel-s-computing-chip-called-heracles.jpg?id=65174073&width=1200&height=400&coordinates=0%2C729%2C0%2C730"/><br/><br/><div class="ieee-summary"><h2>Summary</h2><ul><li><a href="#fhe">Fully homomorphic encryption (FHE)</a> allows computing on encrypted data without decryption, but it’s currently slow on standard CPUs and GPUs.</li><li>Intel’s Heracles chip accelerates FHE tasks up to <a href="#faster">5,000 times faster</a> than top Intel server CPUs.</li><li>Heracles uses a <a href="#heracles">3-nanometer FinFET technology and high-bandwidth memory</a>, enabling efficient encrypted computing at scale.</li><li>Startups and Intel are <a href="#commercial">racing to commercialize FHE accelerators</a>, with potential applications in AI and secure data processing.</li></ul></div><p><span>Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer?</span></p><p>There is a way to do computing on encrypted data without ever having it decrypted. It’s called <a href="https://spectrum.ieee.org/homomorphic-encryption" target="_blank">fully homomorphic encryption,</a> or FHE. But there’s a rather large catch. It can take thousands—even tens of thousands—of times longer to compute on today’s CPUs and GPUs than simply working with the decrypted data.</p><p>So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. 
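<p>The core trick of computing on data you can’t read can be seen in miniature with a much older cryptosystem. The following toy Python sketch (our illustration of the homomorphic principle only; it is not secure and is not the lattice-based scheme Heracles accelerates) uses textbook RSA, which happens to be homomorphic under multiplication: multiplying two ciphertexts yields a valid encryption of the product of the plaintexts.</p>

```python
# Toy demonstration of homomorphic computation using textbook RSA.
# NOT secure (no padding, tiny key) and NOT fully homomorphic --
# it supports only multiplication -- but it shows the principle:
# the server computes on ciphertexts it can never read.
p, q = 61, 53                       # toy primes
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
product_ct = (encrypt(a) * encrypt(b)) % n  # done entirely on ciphertexts
assert decrypt(product_ct) == a * b         # decrypts to 42
```

<p>A fully homomorphic scheme extends this property to both addition and multiplication, which together suffice for arbitrary computation; that generality is part of what makes FHE ciphertexts, and the arithmetic on them, so much larger and slower.</p>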
Last month at the <a href="https://www.isscc.org/" target="_blank">IEEE International Solid-State Circuits Conference</a> (ISSCC) in San Francisco, <a href="https://www.intel.com/content/www/us/en/homepage.html" target="_blank">Intel</a> demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.</p><p>Startups are racing to beat Intel and each other to commercialization. But <a href="https://www.linkedin.com/in/sanu-mathew-4073742/" target="_blank">Sanu Mathew,</a> who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. “Heracles is the first hardware that works at scale,” he says.</p><p>The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel’s most advanced, 3-nanometer FinFET technology. And it’s flanked inside a liquid-cooled package by two 24-gigabyte <a href="https://spectrum.ieee.org/dram-shortage" target="_blank">high-bandwidth memory </a>chips—a configuration usually seen only in GPUs for training AI.</p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/how-to-compute-with-data-you-cant-see" target="_blank">How to Compute with Data You Can’t See</a></p><p>In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. 
There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side.</p><p>On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn’t something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.</p><p>Looking back on the five-year journey to bring the Heracles chip to life, <a href="https://www.linkedin.com/in/ro-cammarota-a226b817/" target="_blank">Ro Cammarota</a>, who led the project at Intel until last December and is now at the University of California, Irvine, says “we have proven and delivered everything that we promised.”</p><h2>FHE Data Expansion</h2><p class="rm-anchors" id="fhe">FHE is fundamentally a mathematical transformation, sort of like the Fourier transform. It encrypts data using a quantum-computer-proof algorithm, but, crucially, uses corollaries to the mathematical operations usually used on unencrypted data. These corollaries achieve the same ends on the encrypted data.</p><p>One of the main things holding such secure computing back is the explosion in the size of the data once it’s encrypted for FHE, <a href="https://www.linkedin.com/in/anupamgolder/" target="_blank">Anupam Golder</a>, a research scientist at Intel’s circuits research lab, told engineers at ISSCC. “Usually, the size of cipher text is the same as the size of plain text, but for FHE it’s orders of magnitude larger,” he said.</p><p>While the sheer volume is a big problem, the kinds of computing you need to do with that data are also an issue. FHE is all about very large numbers that must be computed with precision. While a CPU can do that, it’s very slow going—integer addition and multiplication take about 10,000 times as many clock cycles in FHE. Worse still, CPUs aren’t built to do such computing in parallel. 
Although GPUs excel at parallel operations, precision is not their strong suit. (In fact, from generation to generation, GPU designers have devoted more and more of the chip’s resources to <a href="https://spectrum.ieee.org/nvidia-gpu" target="_blank">computing less and less precise numbers</a>.)</p><p>FHE also requires some oddball operations with names like “twiddling” and “automorphism,” and it relies on a compute-intensive noise-cancelling process called bootstrapping. None of these operations are efficient on a general-purpose processor. So, while clever algorithms and libraries of software cheats have been developed over the years, the need for a hardware accelerator remains if FHE is going to tackle large-scale problems, says Cammarota.</p><h2>The Labors of Heracles</h2><p class="rm-anchors" id="heracles">Heracles was initiated under a DARPA program five years ago to accelerate FHE using purpose-built hardware. It was developed as “a whole system-level effort that went all the way from theory and algorithms down to the circuit design,” says Cammarota.</p><p>Among the first problems was how to compute with numbers larger than even the 64-bit words that are a CPU’s most precise today. There are ways to break up these gigantic numbers into chunks of bits that can be calculated independently of each other, providing a degree of parallelism. Early on, the Intel team made a big bet that they would be able to make this work in smaller, 32-bit chunks, yet still maintain the needed precision. This decision gave the Heracles architecture some speed and parallelism, because the 32-bit arithmetic circuits are considerably smaller than 64-bit ones, explains Cammarota.</p><p>At Heracles’ heart are 64 compute cores—called tile-pairs—arranged in an eight-by-eight grid. 
These single-instruction, multiple-data (SIMD) compute engines are designed to do the polynomial math, twiddling, and other operations that make up computing in FHE, and to do them in parallel. An on-chip 2D mesh network connects the tiles to each other with wide, 512-byte buses.</p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/homomorphic-encryption-llm" target="_blank">Tech Keeps Chatbots From Leaking Your Data</a></p><p>Making encrypted computing efficient requires feeding those huge numbers to the compute cores quickly. The sheer amount of data involved meant linking 48 gigabytes of expensive high-bandwidth memory to the processor with connections that carry 819 gigabytes per second. Once on the chip, data musters in 64 megabytes of cache memory—somewhat more than an Nvidia <a href="https://spectrum.ieee.org/nvidias-next-gpu-shows-that-transformers-are-transforming-ai" target="_blank">Hopper-generation GPU</a>. From there it can flow through the array at 9.6 terabytes per second by hopping from tile-pair to tile-pair.</p><p>To ensure that computing and moving data don’t get in each other’s way, Heracles runs three synchronized streams of instructions simultaneously: one for moving data onto and off of the processor, one for moving data within it, and a third for doing the math, Golder explained.</p><p class="rm-anchors" id="faster">It all adds up to some massive speedups, according to Intel. Heracles—operating at 1.2 gigahertz—takes just 39 microseconds to do FHE’s critical math transformation, a 2,355-fold improvement over an Intel Xeon CPU running at 3.5 GHz. Across seven key operations, Heracles was 1,074 to 5,547 times as fast.</p><p>The differing ranges have to do with how much data movement is involved in the operations, explains Mathew. 
“It’s all about balancing the movement of data with the crunching of numbers,” he says.</p><h2>FHE Competition</h2><p class="rm-anchors" id="commercial">“It’s very good work,” <a href="https://www.linkedin.com/in/kurt-rohloff/" target="_blank">Kurt Rohloff</a>, chief technology officer at FHE software firm <a href="https://dualitytech.com/platform/technology-fully-homomorphic-encryption/" target="_blank">Duality Technology</a>, says of the Heracles results. Duality was part of a team that developed a competing accelerator design under the same DARPA program under which Intel conceived Heracles. “When Intel starts talking about scale, that usually carries quite a bit of weight.”</p><p>Duality’s focus is less on new hardware than on software products that do the kind of encrypted queries that Intel demonstrated at ISSCC. At the scale in use today “there’s less of a need for [specialized] hardware,” says Rohloff. “Where you start to need hardware is emerging applications around deeper machine-learning oriented operations like neural net, LLMs, or semantic search.”</p><p>Last year, Duality demonstrated an <a href="https://spectrum.ieee.org/homomorphic-encryption-llm" target="_self">FHE-encrypted language model called BERT</a>. Like more famous LLMs such as ChatGPT, BERT is a transformer model. However, it’s only one-tenth the size of even the most compact LLMs.</p><p><a href="https://www.linkedin.com/in/barrus/" target="_blank">John Barrus</a>, vice president of product at Dayton, Ohio-based <a href="https://niobiummicrosystems.com/" target="_blank">Niobium Microsystems</a>, an FHE chip startup <a href="https://www.galois.com/" target="_blank">spun out</a> of another DARPA competitor, agrees that encrypted AI is a key target of FHE chips. 
“There are a lot of smaller models that, even with FHE’s data expansion, will run just fine on accelerated hardware,” he says.</p><p>Because Intel has stated no commercial plans, Niobium expects its chip to be “the world’s first commercially viable FHE accelerator, designed to enable encrypted computations at speeds practical for real-world cloud and AI infrastructure.” Although it hasn’t announced when a commercial chip will be available, last month the startup revealed that it had inked a deal worth 10 billion South Korean won (US $6.9 million) with Seoul-based chip design firm <a href="https://semifive.com/" target="_blank">Semifive</a> to develop the FHE accelerator for fabrication using Samsung Foundry’s 8-nanometer process technology.</p><p>Other startups, including <a href="https://www.fabriccryptography.com/" target="_blank">Fabric Cryptography</a>, <a href="https://cornami.com/" target="_blank">Cornami</a>, and <a href="https://optalysys.com/" target="_blank">Optalysys</a>, have been working on chips to accelerate FHE. Optalysys CEO <a href="https://optalysys.com/people/" target="_blank">Nick New</a> says Heracles hits about the level of speedup you could hope for using an all-digital system. “We’re looking at pushing way past that digital limit,” he says. His company’s approach is to use the physics of a photonic chip to do FHE’s compute-intensive transform steps. That photonics chip is on its seventh generation, he says, and among the next steps is to 3D-integrate it with custom silicon to do the non-transform steps and coordinate the whole process. A full 3D-stacked commercial chip could be ready in two or three years, says New.</p><p>While competitors develop their chips, so will Intel, says Mathew. It will improve how much the chip can accelerate computations by fine-tuning the software. It will also try out more massive FHE problems and explore hardware improvements for a potential next generation. 
“This is like the first microprocessor… the start of a whole journey,” says Mathew.</p>]]></description><pubDate>Tue, 10 Mar 2026 13:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/fhe-intel</guid><category>Privacy</category><category>Intel</category><category>Encryption</category><category>Homomorphic-encryption</category><category>Hardware-acceleration</category><category>Isscc</category><dc:creator>Samuel K. Moore</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/overhead-view-of-intel-s-computing-chip-called-heracles.jpg?id=65174073&amp;width=980"></media:content></item><item><title>Finite-Element Approaches to Transformer Harmonic and Transient Analysis</title><link>https://content.knowledgehub.wiley.com/solving-harmonic-and-transient-challenges-in-transformers-using-integrateds-faraday/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/logo-of-integrated-engineering-software-with-pixelated-geometric-design-and-text.png?id=65106417&width=980"/><br/><br/><p>Explore structured finite-element methodologies for analyzing transformer behavior under harmonic and transient conditions — covering modelling, solver configuration, and result validation techniques.</p><p><strong>What Attendees Will Learn</strong></p><ol><li>How FEM enables pre-fabrication performance evaluation — Assess magnetic field distribution, current behavior, and turns-ratio accuracy through simulation rather than physical testing.</li><li><span>How harmonic analysis uncovers saturation and imbalance — Identify high-flux regions and current asymmetries that analytical methods may not capture.</span></li><li><span>How transient simulations characterize dynamic response — Examine time-domain current waveforms, inrush behavior, and multi-cycle stabilization.</span></li><li><span>How modelling choices affect simulation fidelity — Understand the impact of coil definitions, winding configurations, solver type, and material models on accuracy.</span></li></ol><p><span><a href="https://content.knowledgehub.wiley.com/solving-harmonic-and-transient-challenges-in-transformers-using-integrateds-faraday/" target="_blank">Download this free whitepaper now!</a><br/></span></p>]]></description><pubDate>Tue, 10 Mar 2026 10:00:03 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/solving-harmonic-and-transient-challenges-in-transformers-using-integrateds-faraday/</guid><category>Type-whitepaper</category><category>Transformers</category><category>Finite-element-analysis</category><category>Harmonic</category><dc:creator>Integrated Engineering Software</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/65106417/origin.png"></media:content></item><item><title>How Cross-Cultural Engineering Drives Tech 
Advancement</title><link>https://spectrum.ieee.org/cross-cultural-engineering-tech-advancement</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-young-adult-man-in-a-cleanroom-suit-and-gloves-operates-a-remote-for-a-robotic-arm.jpg?id=65172691&width=1200&height=400&coordinates=0%2C292%2C0%2C292"/><br/><br/><p>Innovation rarely happens in isolation. Usually, the systems that engineers design are shaped by global teams whose members’ knowledge and ideas move across borders as easily as data.</p><p>That is especially true in my field of <a href="https://spectrum.ieee.org/topic/robotics/" target="_self">robotics and automation</a>—where hardware, software, and human workflows function together. Progress depends not only on technical skill but also on how engineers frame problems and evaluate trade-offs. My career has shown me how cross-cultural experiences can shape the framing.</p><p>Working across different cultures has influenced how I approach collaboration, design decisions, and risk. I am an IEEE member and a mechanical engineer at <a href="https://fikst.rebuildmanufacturing.com/" rel="noopener noreferrer" target="_blank">Re:Build Fikst</a>, in Wilmington, Mass., but I grew up in India and began my engineering education there.</p><p>Experiencing both work environments has reinforced the idea that diversity in science, technology, engineering, and mathematics fields is not only about representation; it is a <a href="https://spectrum.ieee.org/hands-on-projects-career-advice" target="_self">technical advantage</a> that affects how systems are designed and deployed.</p><h2>Gaining experience across cultures</h2><p>I began my training as an undergraduate student in electrical and electronics engineering at <a href="https://www.amity.edu/" rel="noopener noreferrer" target="_blank">Amity University</a>, in Noida. 
While studying, I developed a strong foundation in problem-framing and disciplined adaptability.</p><p>Working on a project requires identifying what the system needs to demonstrate and determining how best to validate that behavior within defined parameters. Amity students were encouraged to focus on essential system behavior and prioritize the variables that most influenced the technology’s performance, rather than starting from idealized assumptions.</p><p>The approach reinforced first-principles thinking—starting from fundamental physical or system-level behavior rather than defaulting to established solutions—and encouraged the efficient use of available resources.</p><p>At the same time, I learned that efficiency has limits. In complex or safety-critical systems, insufficient validation can introduce hidden risks and reduce reliability. Understanding when simplicity accelerates progress and when additional rigor is necessary became an important part of my development as an engineer.</p><p>After getting my undergraduate degree, I moved to the United States in 2021 to pursue a master’s degree in robotics and autonomous systems at <a href="https://www.asu.edu/about" rel="noopener noreferrer" target="_blank">Arizona State University</a> in Tempe. There I encountered a new engineering culture.</p><p>In the U.S. research and development sector, especially in robotics and automation, rigor is nonnegotiable. Systems are designed to perform reliably across many cycles, users, and conditions. Documentation, validation, safety reviews, and reproducibility are integral to the process.</p><p>Those expectations do not constrain creativity; they allow systems to scale, endure, and be trusted.</p><p>Moving between the two engineering cultures required me to adjust. I had to balance my instinct for efficiency with a more formal structure. In the United States, design decisions demand more justification. 
Collaboration means aligning with scientists, software engineers, and technicians. Each discipline brings different priorities and definitions of success to the team.</p><p>Over time, I realized that the value of both experiences was not in choosing one over the other but in learning when to apply each.</p><p>The balance is particularly critical in robotics and automation. Resourcefulness without rigor can fail at scale. A prototype that works in a controlled lab setting, for example, might break down when exposed to different users, operating conditions, or extended duty cycles.</p><p>At the same time, rigor without adaptability can slow innovation, such as when excessive documentation or overengineering delays early-stage testing and iteration.</p><p>Engineers who navigate multiple educational and professional systems often develop an intuition for managing the tension between the different experiences, building solutions that are robust and practical and that fit real-world workflows rather than idealized ones.</p><p>Much of my work today involves integrating automated systems into environments where technical performance must align with how people will use them. For example, a robotic work cell (a system that performs a specific task) might function flawlessly in isolation but require redesign once operators need clearer access for loading materials, troubleshooting faults, or performing routine maintenance. 
Similarly, an automated testing system must account not only for ideal operating conditions but also for how users respond to error messages, interruptions, and unexpected outputs.</p><p>In practice, that means thinking beyond individual components to consider how systems will be operated, maintained, and restored to service after faults or interruptions.</p><p>My <a href="https://spectrum.ieee.org/global-projects-career-benefits" target="_self">cross-cultural background</a> shapes how I evaluate design trade-offs and collaboration across disciplines.</p><h2>How diverse teams can help improve tech design</h2><p>Engineers trained in different cultures can bring distinct approaches to the same problem. Some might emphasize rapid iteration while others prioritize verification and robustness. When perspectives collide, teams ask better questions earlier. They challenge defaults, find edge cases, and design technologies that are more resilient to real-world variability.</p><p>Diversity of thought is certainly important in robotics and automation, where systems sit at the intersection of machines and people. Designing effective automation requires understanding how users interact with technology, how errors propagate, and how different environments influence the technology. Engineers with cross-cultural experience often bring heightened awareness of the variability, leading to better design decisions and more collaborative teams.</p><p>Engineers from outside of the United States play a critical role in the country’s research and development ecosystem, especially in interdisciplinary fields. Many of us act as bridges, connecting problem-solving approaches, expectations, and design philosophies shaped in different parts of the world. 
We translate not just language but also engineering intent, helping teams move from theories to practical deployment.</p><p>As robotics and automation continue to evolve, the challenges ahead—including scaling experimentation, improving reproducibility, and integrating intelligent systems into real-world environments—will require engineers who are comfortable working across boundaries. Navigating boundaries, which could be geographic, disciplinary, or cultural, is increasingly part of the job.</p><p>The engineering ecosystems in India and the United States are complex, mature, and evolving. My journey in both has taught me that being a strong engineer is not about adopting a single mindset. It’s about knowing how to adapt.</p><p>In an interconnected, multinational world, innovation belongs to engineers who can navigate the differences and turn them into strengths.</p>]]></description><pubDate>Mon, 09 Mar 2026 18:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/cross-cultural-engineering-tech-advancement</guid><category>Ieee-member-news</category><category>Career-advice</category><category>Diversity</category><category>Robotics-and-automation</category><category>Careers</category><category>Type-ti</category><dc:creator>Manan Luthra</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-young-adult-man-in-a-cleanroom-suit-and-gloves-operates-a-remote-for-a-robotic-arm.jpg?id=65172691&amp;width=980"></media:content></item><item><title>Do Offshore Wind Farms Pose National Security Risks?</title><link>https://spectrum.ieee.org/offshore-wind-military-radar</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-freighter-and-container-ship-crossing-paths-at-sea-with-offshore-wind-turbines-in-the-distant-background.jpg?id=65163125&width=1200&height=400&coordinates=0%2C729%2C0%2C730"/><br/><br/><p><span>When the Trump administration last year sought to freeze construction of offshore wind farms by </span><a href="https://www.youtube.com/watch?v=JHSzhcphfkc" target="_blank"><span>citing concerns about interference with military radar and sonar</span></a><span>, the implication was that these were new issues. But for more than a decade, the United States, Taiwan, and many European countries have successfully mitigated wind turbines’ security impacts. Some European countries are even integrating wind farms with national defense schemes.</span></p><p><span>“It</span><span>’s not a choice of whether we go for wind farms or security. We need both,” says </span><a href="https://www.clingendael.org/person/ben-bekkering" target="_blank"><span>Ben Bekkering</span></a><span>, a retired vice admiral in the Netherlands and current partner of the International Military Council on Climate and Security.</span></p><p><span>It’s a fact that offshore wind farms can degrade radar surveillance systems and subsea sensors designed to detect military incursions. But it’s a problem with real-world solutions, say Bekkering and other defense experts contacted by </span><span><em>IEEE Spectrum</em></span><span>. Those solutions include next-generation radar technology, radar-absorbing coatings for wind turbine blades, and multi-mode sensor suites that turn offshore wind farm security equipment into forward eyes and ears for defense agencies.</span></p><h2>How Do Wind Farms Interfere With Radar?</h2><p><span>Wind turbines interfere with radar because they’re large objects that reflect radar signals. 
Their spinning blades can introduce false positives on radar screens by inducing a wavelength-shifting Doppler effect that gets flagged as a flying</span> object. Turbines can also obscure aircraft, missiles, and drones by scattering radar signals or by blinding older line-of-sight radars to objects behind them, according to a 2024 U.S. <a href="https://www.energy.gov/sites/default/files/2024-02/EXEC-2022-004484%20-%20Report%20to%20Congress%20as%20of%20December%2014%202023%20(2).pdf" target="_blank">Department of Energy (DOE) report</a><span>.</span></p><p><span>“Real-world examples from NATO and EU Member States show measurable degradation in radar performance, communication clarity, and situational awareness,” states a 2025 presentation from the </span><span>€2-million (US$2.3-million) offshore wind </span><a href="https://eda.europa.eu/what-we-do/eu-policies/symbiosis" target="_blank"><span>Symbiosis Project</span></a><span>, led by the Brussels-based </span><a href="https://eda.europa.eu/" target="_blank"><span>European Defence Agency</span></a><span>.</span></p><p><span>However, “measurable” doesn’t always mean major. U.S. </span><span>agencies that monitor radar have continued to operate “without significant impacts” from wind turbines thanks to field tests, technology development, and mitigation measures taken since 2012, according to the DOE. “It is true that they have an impact, but it</span><span>’s not that big,” says</span><span><a href="https://www.linkedin.com/in/tuelippert/" target="_blank"> Tue Lippert</a></span><span>, a former Danish special forces commander and CEO of Copenhagen-based security consultancy </span><a href="https://heimdalci.com/" target="_blank"><span>Heimdal Critical Infrastructure</span></a><span>.</span></p><p><span>To date, impacts have been managed through upgrades to radar systems, such as software algorithms that identify a turbine’s radar signature and thus reduce false positives. 
Careful wind farm siting helps too. During the most recent designation of Atlantic wind zones in the U.S., for example, the Biden administration </span><span><a href="https://www.utilitydive.com/news/boem-maryland-lease-offshore-wind-central-atlantic-auction/702215/" target="_blank">reduced the geographic area for a proposed zone off the Maryland coast by 79 percent</a></span> to minimize defense impacts.</p><p><span>Radar impacts can be managed even better by upgrading hardware, say experts. Newer solid-state, phased-array radars are better at distinguishing turbines from other objects than conventional mechanical radars. <a href="https://spectrum.ieee.org/phased-arrays-move-from-academic-curiosity-to-industrial-reality" target="_self">Phased arrays</a> shift the timing of hundreds or thousands of individual radio waves, creating interference patterns to steer the radar beams. The result is a higher-resolution signal that offers better tracking of multiple objects and better visibility behind objects in its path. “Most modern radars can actually see through wind farms,” says Lippert.</span></p><p><span>One of the Trump administration’s first moves in its overhaul of civilian air traffic was </span><a href="https://www.ainonline.com/aviation-news/air-transport/2026-01-06/faa-selects-collins-indra-radar-contracts" target="_blank"><span>a $438-million order for phased-array radar systems</span></a> and other equipment from Collins Aerospace, which touts wind farm mitigation as <a href="https://www.rtx.com/collinsaerospace/what-we-do/industries/air-traffic-management/surveillance/non-cooperative-surveillance-radar" target="_blank"><span>one of its products’ key features</span></a><span>.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Close-up of a militaristic yet compact radar mounted on the rear bed of a vehicle." 
class="rm-shortcode" data-rm-shortcode-id="aaf38582caeb227d40c2209406555f68" data-rm-shortcode-name="rebelmouse-image" id="cf534" loading="lazy" src="https://spectrum.ieee.org/media-library/close-up-of-a-militaristic-yet-compact-radar-mounted-on-the-rear-bed-of-a-vehicle.jpg?id=65163158&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption..."> Saab’s compact Giraffe 1X combined surface-and-air-defense radar was installed in 2021 on an offshore wind farm near England.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Saab</small></p><h2>Can Wind Farms Aid Military Surveillance?</h2><p><span>Another radar mitigation option is “infill” radar, which fills in coverage gaps. This involves installing additional radar hardware on land to provide new angles of view through a wind farm or putting radar systems on the offshore turbines to extend the radar field of view.</span></p><p><span>In fact, wind farms are increasingly being tapped to extend military surveillance capabilities. “You</span><span>’re changing the battlefield, but it</span><span>’s a change to your advantage if you use it as a tactical lever,” says Lippert.</span></p><p><span>In 2021 </span><span>Link</span><span>öping, Sweden-based defense contractor </span><a href="https://www.saab.com/" target="_blank"><span>Saab</span></a> and Danish wind developer  <a href="https://us.orsted.com/" target="_blank">Ørsted</a> demonstrated that air defense radar can be placed on a wind farm. Saab conducted a two-month test of its compact Giraffe 1X combined surface-and-air-defense radar on Ørsted’s Hornsea 1 wind farm, located 120 kilometers east of England’s Yorkshire coast. 
The installation extended situational awareness “beyond the radar horizon of the ground-based long-range radars,” <a href="https://www.saab.com/newsroom/stories/2021/november/securing-the-worlds-largest-offshore-windfarm-with-giraffe-1x" target="_blank"><span>claims Saab</span></a><span>. The U.K. Ministry of Defence </span><a href="https://www.saab.com/newsroom/press-releases/2023/saabs-giraffe-1x-wins-uk-ministry-of-defence-orders" target="_blank">ordered 11 of Saab’s systems</a><span>.</span></p><p><span>Putting surface radar on turbines is something many offshore wind operators do already to track their crew vessels and to detect unauthorized ships within their arrays. Sharing those signals, or even sharing the equipment, can give national defense forces an expanded view of ships moving within and around the turbines. It can also improve detection of low-altitude cruise missiles, which can evade air defense radars, says Bekkering.</span></p><p><span>Sharing signals and equipment is part of a growing trend in Europe towards “dual use” of offshore infrastructure. Expanded dual-use sensing is already being implemented in Belgium, the Netherlands, and Poland, and was among the recommendations from Europe’s</span> Symbiosis Project.</p><p><span>In fact, Poland mandates inclusion of defense-relevant equipment on all offshore wind farms. The country’s first project </span><a href="https://energiewinde.orsted.de/energiepolitik/offshore-wind-sicherheit-landesverteidigung-ueberwachung-seegebiete-nato" target="_blank"><span>carries radar and other sensors specified by Poland’s Ministry of Defense</span></a><span>. The wind farm will start operating in the Baltic later this year, roughly</span> 200 kilometers south of Kaliningrad, a Russian exclave.</p><p><span>The U.K. is experimenting too. 
Last year, West Sussex-based </span><a href="https://www.livelinkaerospace.com/" target="_blank"><span>LiveLink Aerospace</span></a> <a href="https://www.livelinkaerospace.com/latest-news/dual-use-air-surveillance-aberdeen-wind-farm" target="_blank"><span>demonstrated purpose-built, dual-use sensors atop wind turbines offshore from Aberdeen</span></a><span>. The compact equipment combines a suite of sensors, including electro-optical sensors, thermal and visible-light </span><span>cameras, and detectors for radio frequency and acoustic signals.</span></p><p><span>In the past, wind farm operators tended to resist cooperating with defense projects, fearing that would turn their installations into military targets. And militaries were also reluctant to share, because they are used to having full control over equipment.</span></p><p><span>But Russia’s increasingly aggressive posture has shifted thinking, say security experts. </span><a href="https://spectrum.ieee.org/repair-ukraine-power-grid" target="_self"><span>Russia’s attacks on Ukraine’s power grid</span></a> show that “everything is a target,” says <a href="https://www.energi.se/artiklar/2023/januari-2023/ex-militaren-som-vill-snabba-pa-processerna-for-vindkraft/" target="_blank"><span>Tobhias Wikstr</span><span>öm</span></a><span>, CEO of Lule</span><span>å, Sweden-based </span><a href="https://www.parachuteconsulting.se/" target="_blank"><span>Parachute Consulting</span></a> and a former lieutenant colonel in Sweden’s air force. <span>Recent sabotage of offshore gas pipelines and power cables is also reinforcing the sense that offshore wind operators and defense agencies need to collaborate.</span></p><h2>Why Is Sweden Restricting Offshore Wind?</h2><p><span>Unlike Poland and the U.K., Sweden is the one European country that, like the U.S. under Trump’s second administration, has used national security to justify a broad restriction on offshore wind development. 
In 2024 </span><a href="https://knowledge.energyinst.org/new-energy-world/article?id=139168" target="_blank"><span>Sweden rejected 13 projects along its Baltic coast, which faces Kaliningrad</span></a><span>, citing anticipated degradation in its ability to detect incoming missiles.</span></p><p><span>Saab’s CEO rejected the government’s argument, telling a Swedish newspaper that the firm’s radar “</span><a href="https://www.dn.se/ekonomi/saab-chefen-vara-sensorer-kan-hantera-vindkraftverk-till-havs/" target="_blank"><span>can handle</span></a><span>” wind farms. Wikstr</span><span>öm at Parachute Consulting also questions the government’s claim, noting that Sweden’s entry into NATO in 2024 gives its military access to Finnish, German and Polish air defense radars, among others, that together provide an unobstructed view of the Baltic. “You will always have radars in other locations that will cross-monitor and see what</span><span>’s behind those wind turbines,” says Wikstr</span><span>öm.</span></p><p><span>Politics are likely at play, says Wikstr</span><span>öm, noting that some of the coalition government’s parties are staunchly pro-nuclear. But he says a deeper problem is that the military experts who evaluate proposed wind projects, as he did before retiring in 2021, lack time and guidance.</span></p><p><span>By banning offshore wind projects instead of embracing them, Sweden and the U.S. may be missing out on opportunities for training in that environment, says </span><span>Lippert, who regularly serves with U.S. forces as a reserves liaison officer with Denmark’s Greenland-based </span><a href="https://www.forsvaret.dk/en/organisation/joint-arctic-command/" target="_blank"><span>Joint Arctic Command</span></a><span>. As he puts it: “The Chinese and Taiwanese coasts are plastered with offshore wind. If the U.S. 
Navy and Air Force are not used to fighting in littoral environments filled with wind farms, then they</span><span>’re at a huge disadvantage when war comes.”</span></p>]]></description><pubDate>Mon, 09 Mar 2026 14:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/offshore-wind-military-radar</guid><category>Offshore-wind-power</category><category>Trump-administration</category><category>National-security</category><category>Radar</category><category>Phased-array</category><dc:creator>Peter Fairley</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-freighter-and-container-ship-crossing-paths-at-sea-with-offshore-wind-turbines-in-the-distant-background.jpg?id=65163125&amp;width=980"></media:content></item><item><title>Military AI Policy Needs Democratic Oversight</title><link>https://spectrum.ieee.org/military-ai-governance</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-white-man-in-his-40s-speaking-into-a-microphone-he-is-wearing-glasses-a-suit-jacket-and-tie.jpg?id=65162768&width=1200&height=400&coordinates=0%2C1042%2C0%2C1042"/><br/><br/><p>A <a href="https://www.nytimes.com/2026/02/23/us/politics/pentagon-anthropic-ai.html" rel="noopener noreferrer" target="_blank">simmering dispute</a> between the United States Department of Defense (DOD) and Anthropic has now escalated into a <a href="https://www.techpolicy.press/a-timeline-of-the-anthropic-pentagon-dispute/" rel="noopener noreferrer" target="_blank">full-blown confrontation</a>, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies, or Congress and the broader democratic process?</p><p>The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD <a href="https://www.politico.com/news/2026/02/24/hegseth-sets-friday-deadline-for-anthropic-to-drop-its-ai-red-lines-00795641" rel="noopener noreferrer" target="_blank">unrestricted use</a> of its AI systems. When the company refused, the administration moved to designate Anthropic a <a href="https://x.com/SecWar/status/2027507717469049070" rel="noopener noreferrer" target="_blank">supply chain risk</a> and ordered federal agencies to phase out its technology, dramatically escalating the standoff.</p><p>Anthropic has refused to cross <a href="https://www.anthropic.com/news/statement-department-of-war" rel="noopener noreferrer" target="_blank">two lines</a>: allowing its models to be used for domestic surveillance of United States citizens and enabling fully autonomous military targeting.
Hegseth has objected to what he has described as “<a href="https://www.war.gov/News/Transcripts/Transcript/Article/4377190/remarks-by-secretary-of-war-pete-hegseth-at-spacex/" rel="noopener noreferrer" target="_blank">ideological constraints</a>” embedded in commercial AI systems, arguing that determining lawful military use should be the government’s responsibility — not the vendor’s. As he put it in a <a href="https://www.war.gov/News/Transcripts/Transcript/Article/4377190/remarks-by-secretary-of-war-pete-hegseth-at-spacex/" rel="noopener noreferrer" target="_blank">speech at Elon Musk’s SpaceX</a> last month, “We will not employ AI models that won’t allow you to fight wars.”</p><p>Stripped of rhetoric, this dispute resembles something relatively straightforward: a procurement disagreement.</p><h2>Procurement policies</h2><p>In a market economy, the U.S. military decides what products and services it wants to buy. Companies decide what they are willing to sell and under what conditions. Neither side is inherently right or wrong for taking a position. If a product does not meet operational needs, the government can purchase from another vendor. If a company believes certain uses of its technology are unsafe, premature or inconsistent with its values or risk tolerance, it can <a href="https://www.anthropic.com/news/responsible-scaling-policy-v3" rel="noopener noreferrer" target="_blank">decline to provide them</a>. For example, a coalition of companies has signed an open letter pledging <a href="https://bostondynamics.com/news/general-purpose-robots-should-not-be-weaponized/" rel="noopener noreferrer" target="_blank">not to weaponize general-purpose robots</a>.
That basic symmetry is a feature of the free market.</p><p>Where the situation becomes more complicated — and more troubling — is in the decision to designate Anthropic a “<a href="https://x.com/SecWar/status/2027507717469049070" rel="noopener noreferrer" target="_blank">supply chain risk</a>.” That tool exists to address genuine national security vulnerabilities, such as dependence on foreign adversaries. It is not intended to blacklist an American company for rejecting the government’s preferred contractual terms. </p><p>Using this authority in that manner marks a significant shift — from a procurement disagreement to the use of coercive leverage. <a href="https://x.com/SecWar/status/2027507717469049070" rel="noopener noreferrer" target="_blank">Hegseth has declared</a> that “effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.” This action will almost certainly face <a href="https://x.com/SecWar/status/2027507717469049070" rel="noopener noreferrer" target="_blank">legal challenges</a>, but it raises the stakes well beyond the loss of a single DOD contract.</p><h2>AI governance</h2><p>It is also important to distinguish between the two substantive issues Anthropic has reportedly raised.</p><p>The first, opposition to domestic surveillance of U.S. citizens, touches on well-established civil liberties concerns. The U.S. government operates under constitutional constraints and statutory limits when it comes to monitoring Americans. A company stating that it does not want its tools used to facilitate domestic surveillance is not inventing a new principle; it is aligning itself with longstanding democratic guardrails.</p><p>To be clear, DOD is not affirmatively asserting that it intends to use the technology to surveil Americans unlawfully. Its position is that it does not want to procure models with built-in restrictions that preempt otherwise lawful government use.
In other words, the Department of Defense argues that compliance with the law is the government’s responsibility — not something that needs to be embedded in a vendor’s code. </p><p>Anthropic, for its part, has invested heavily in training its systems to refuse certain categories of <a href="https://www-cdn.anthropic.com/78073f739564e986ff3e28522761a7a0b4484f84.pdf" rel="noopener noreferrer" target="_blank">harmful or high-risk tasks</a>, including assistance with surveillance. The disagreement is therefore less about current intent than about institutional control over constraints: whether they should be imposed by the state through law and oversight, or by the developer through technical design.</p><p>The second issue, opposition to fully autonomous military targeting, is more complex. </p><p>The DOD already maintains policies requiring <a href="https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf" rel="noopener noreferrer" target="_blank">human judgment in the use of force</a>, and debates over autonomy in weapons systems are ongoing within both military and international forums. A private company may reasonably determine that its current technology is not sufficiently reliable or controllable for certain battlefield applications. At the same time, the military may conclude that such capabilities are necessary for deterrence and operational effectiveness.</p><p>Reasonable people can disagree about where those <a href="https://itif.org/publications/2026/02/26/survey-most-americans-say-tech-companies-should-allowed-set-ai-limits/" rel="noopener noreferrer" target="_blank">lines should be drawn</a>.</p><p>But that disagreement underscores a deeper point: the boundaries of military AI use should not be settled through ad hoc negotiations between a Cabinet secretary and a CEO. Nor should they be determined by which side can exert greater contractual leverage.</p><p>If the U.S. 
government believes certain AI capabilities are essential to national defense, that position should be articulated openly. It should be debated in Congress, and reflected in doctrine, oversight mechanisms and statutory frameworks. The rules should be clear — not only to companies, but to the public.</p><p>The U.S. often distinguishes itself from authoritarian regimes by emphasizing that power operates within transparent democratic institutions and legal constraints. That distinction carries less weight if AI governance is determined primarily through executive ultimatums issued behind closed doors.</p><p>There is also a strategic dimension. If companies conclude that participation in federal markets requires surrendering all deployment conditions, some may exit those markets. Others may respond by weakening or removing model safeguards to remain eligible for government contracts. Neither outcome strengthens <a href="https://www.reuters.com/business/retail-consumer/big-tech-group-tells-pentagons-hegseth-they-are-concerned-about-declaring-2026-03-04/" rel="noopener noreferrer" target="_blank">U.S. technological leadership</a>.</p><p>The DOD is correct that it cannot allow potential “ideological constraints” to undermine lawful military operations. But there is a difference between rejecting arbitrary restrictions and rejecting any role for corporate risk management in shaping deployment conditions. In high-risk domains — from aerospace to cybersecurity — contractors routinely impose safety standards, testing requirements and operational limitations as part of responsible commercialization. AI should not be treated as uniquely exempt from that practice.</p><p>Moreover, built-in safeguards need not be seen as obstacles to military effectiveness. In many high-risk sectors, layered oversight is standard practice: internal controls, technical fail-safes, auditing mechanisms and legal review operate together. 
Technical constraints can serve as an additional backstop, reducing the risk of misuse, error or unintended escalation.</p><h2>Congress is AWOL</h2><p>The DOD should retain ultimate authority over lawful use. But it need not reject the possibility that certain guardrails embedded at the design level could complement its own oversight structures rather than undermine them. In some contexts, redundancy in safety systems strengthens, not weakens, operational integrity.</p><p>At the same time, a company’s unilateral ethical commitments are no substitute for public policy. When technologies carry national security implications, private governance has inherent limits. Ultimately, decisions about surveillance authorities, autonomous weapons and rules of engagement belong in democratic institutions.</p><p>This episode illustrates a pivotal moment in AI governance. AI systems at the frontier of technology are now powerful enough to influence intelligence analysis, logistics, cyber operations and potentially battlefield decision-making. That makes them too consequential to be governed solely by corporate policy — and too consequential to be governed solely by executive discretion.</p><p>The solution is not to empower one side over the other. It is to strengthen the institutions that mediate between them.</p><p>Congress should clarify statutory boundaries for military AI use and investigate whether sufficient oversight exists. The DOD should articulate detailed doctrine for human control, auditing and accountability. Civil society and industry should participate in structured consultation processes rather than episodic standoffs, and procurement policy should reflect those publicly established standards.</p><p>If AI guardrails can be removed through contract pressure, they will be treated as negotiable.
However, if they are grounded in law, they can become stable expectations.</p><p>Democratic constraints on military AI belong in statute and doctrine — not in private contract negotiations.</p><p><em>This article is adapted by the author with permission from </em><a href="https://www.techpolicy.press/" rel="noopener noreferrer" target="_blank"><em>Tech Policy Press</em></a><em>. Read the </em><a href="https://www.techpolicy.press/why-congress-should-step-into-the-anthropicpentagon-dispute/" rel="noopener noreferrer" target="_blank"><em>original article</em></a><em>.</em></p>]]></description><pubDate>Sun, 08 Mar 2026 10:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/military-ai-governance</guid><category>Ai</category><category>Military-ai</category><category>Anthropic</category><dc:creator>Daniel Castro</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-white-man-in-his-40s-speaking-into-a-microphone-he-is-wearing-glasses-a-suit-jacket-and-tie.jpg?id=65162768&amp;width=980"></media:content></item><item><title>Laser-Based 3D Printing Could Build Future Bases on the Moon</title><link>https://spectrum.ieee.org/lunar-base-3d-printing</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/dr-sarah-wolff-and-sizhe-xu-converse-in-a-lab-while-standing-in-front-of-a-laser-based-3d-printing-machine.jpg?id=65158172&width=1200&height=400&coordinates=0%2C417%2C0%2C417"/><br/><br/><p>Through the <a href="https://www.nasa.gov/humans-in-space/artemis/" rel="noopener noreferrer" target="_blank">Artemis Program</a>, NASA hopes to establish a permanent <a href="https://spectrum.ieee.org/special-reports/project-moon-base/" target="_blank">human presence on the Moon</a> in its southern polar region. China, Russia, and the European Space Agency (ESA) have similar plans, all of which involve building bases near the permanently shadowed regions (PSRs)<span>—</span>craters that contain water ice<span>—</span>that dot the South Pole-Aitken Basin. For these and other agencies, it is vital that these bases be as self-sufficient as possible since resupply missions cannot be launched regularly and take several days to arrive.</p><div class="badge_module shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25">
<a class="rm-stats-tracked" href="https://www.universetoday.com/" target="_blank">
<img alt='Universe Today logo; text reads "This post originally appeared on Universe Today."' class="rm-shortcode rm-lazyloadable-image" src="https://spectrum.ieee.org/media-library/universe-today-logo-text-reads-this-post-originally-appeared-on-universe-today.png?id=60568425&width=1800&quality=85"/></a>
</div>
<p>Therefore, any plan for a lunar base must come down to <a href="https://spectrum.ieee.org/blue-origin-molten-regolith-electrolysis" target="_blank">harvesting local resources</a> to meet the needs of its crews as much as possible<span>—</span>a process known as In-Situ Resource Utilization (ISRU). In a <a href="https://news.osu.edu/using-moon-dirt-to-build-future-lunar-colonies/" target="_blank">recent study</a>, researchers at The Ohio State University (OSU) proposed using a specialized laser-based 3D printing method to turn lunar regolith into hardened building material. According to their findings, this method can produce durable structures that withstand radiation and other harsh conditions on the lunar surface.</p><p>The research team was led by <a href="https://mae.osu.edu/people/xu.5024" target="_blank">Sizhe Xu</a>, a graduate research associate at OSU. He was joined by colleagues from OSU’s Department of Integrated Systems Engineering, Mechanical and Aerospace Engineering, and Materials Science & Engineering. Their paper, “<a href="https://www.sciencedirect.com/science/article/abs/pii/S0094576525008422?via%3Dihub" rel="noopener noreferrer" target="_blank">Laser directed energy deposition additive manufacturing of lunar highland regolith simulant</a>,” appeared in the journal <em>Acta Astronautica.</em></p><h2>Challenges of Lunar 3D Printing</h2><p>The importance of ISRU for human exploration has prompted the rapid development of additive manufacturing systems, or 3D printing. These systems have proven effective at fabricating tools, structures, and habitats, effectively reducing dependence on supplies delivered from Earth. Developing such systems for long-duration missions is one of the most challenging aspects of the process, as they must be engineered to operate in the extreme environment on the Moon. 
This includes the lack of an atmosphere, massive temperature variations, and the ever-present problem of Moon dust.</p><p>Scientists use two types of lunar regolith for their experiments and research: Lunar Highlands Simulant (LHS-1) and Lunar Mare Simulant (LMS-1). As part of their research, the team used LHS-1, which is rich in basaltic minerals, similar to rock samples obtained by the Apollo missions. They melted this regolith with a laser to produce layers of material and fused them onto a base surface of stainless steel or glass. To assess how well these objects would fare in the lunar environment, the team tested their fabrication process under a range of different environmental conditions.</p><p>One thing they noticed was that the fused regolith adhered well to alumina-silicate ceramic, possibly because the two compounds form crystals that enhance heat resistance and mechanical strength. This revealed that the overall quality of the printed material is largely dependent on the surface onto which the regolith is printed. Other environmental factors, such as atmospheric oxygen levels, laser power, and printing speed, also affected the stability of the printed material. </p><blockquote></blockquote><h2>Where 3D-Printed Material Could Help</h2><p>Deployed to the Moon’s surface, this process could help build habitats and tools that are strong, resilient, and capable of handling the lunar environment. This has the added benefit of increasing independence from Earth, which is key to realizing long-duration missions on the Moon. In addition to assisting astronauts exploring the Moon in the near future (as part of NASA’s Artemis Program), this technology could also lead to resilient habitats that will enable a long-term human presence on the Moon, Mars, and beyond.</p><p>However, there are several unknown environmental factors that could limit the effectiveness of these systems on other worlds, and more data is needed before they can be addressed. 
In their study, the team suggests that instead of being powered by electricity, future scaled-up versions of their method could rely on solar or hybrid power systems. Nevertheless, the potential for space exploration is clear, and the technology also has applications for life here on Earth. <a href="https://mae.osu.edu/people/wolff.357" target="_blank">Sarah Wolff</a>, an assistant professor in mechanical and aerospace engineering and a lead author on the study, explained:</p><blockquote>There are conditions that happen in space that are really hard to emulate in a simulant. It may work in the lab, but in a resource-scarce environment, you have to try everything to maximize the flexibility of a machine for different scenarios. If we can successfully manufacture things in space using very few resources, that means we can also achieve better sustainability on Earth. To that end, improving the machine’s flexibility for different scenarios is a goal we’re working really hard toward.</blockquote><p>As the saying goes, “solving for space solves for Earth.” In environments where materials and resources are limited, laser-based 3D printing is one of several technologies that could support sustainable living. 
This applies equally to extraterrestrial environments and to regions on Earth experiencing the effects of climate change.</p>]]></description><pubDate>Sat, 07 Mar 2026 14:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/lunar-base-3d-printing</guid><category>Nasa</category><category>3d-printing</category><category>Lunar-base</category><category>Lunar-missions</category><category>Ohio-state-university</category><dc:creator>Matthew Williams</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/dr-sarah-wolff-and-sizhe-xu-converse-in-a-lab-while-standing-in-front-of-a-laser-based-3d-printing-machine.jpg?id=65158172&amp;width=980"></media:content></item><item><title>Video Friday: A Robot Hand With Artificial Muscles and Tendons</title><link>https://spectrum.ieee.org/video-friday-robot-hand-artificial-muscles</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robotic-hand-grasping-a-red-bull-can-against-a-dark-background.png?id=65162441&width=1200&height=400&coordinates=0%2C197%2C0%2C198"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="hd1hdfw1bhy"><em>The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. 
We printed a biomimetic hand in a single print process composed of a rigid skeleton, soft joint capsules, tendons, and printed touch sensors.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d1520429687b7c6ef41cd204b2161ddc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/hD1HDFw1BhY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://ieeexplore.ieee.org/abstract/document/10522043">Paper</a> ] via [ <a href="https://srl.ethz.ch/">SRL</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="u18ehtnvfd4">Two <a href="https://spectrum.ieee.org/tag/boston-dynamics" target="_blank">Boston Dynamics</a> product managers talk about their favorite classic BD robots, and then I talk about mine.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ee14e75b8b4fac354bdb72fef9eb1549" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/U18EHTnvFd4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>And this is Boston Dynamics’ LittleDog, doing legged locomotion research 16 or so years ago in what I’m pretty sure is Katie Byl’s lab at UCSB.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="27626ccc6010288122cf616a0f35aa3d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AdWpo43b2FI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a 
href="https://bostondynamics.com/about/history/">Boston Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gocorcrlgb4"><em>This is our latest work on the trajectory planning method for floating-based articulated robots, enabling the global path for searching in complex and cluttered environments.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f2d2afeed034c4c40136e41360360951" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/GOcorcrLGb4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dragon.t.u-tokyo.ac.jp/">DRAGON Lab</a> ]</p><p>Thanks, Moju!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="i2jmf_z9ts8"><em>OmniPlanner is a unified solution for exploration and inspection-path planning (as well as target reach) across aerial, ground, and underwater robots. 
It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="fcaa6b98fc3995a5010528eb89bb8f14" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/I2JMF_Z9tS8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.autonomousrobotslab.com/">NTNU</a> ]</p><p>Thanks, Kostas!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="a_hwcpqbbly"><em>In the ARISE project, the <a href="https://www.fzi.de/en/" target="_blank">FZI Research Center for Information Technology</a> and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multirobot teams under outdoor conditions.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="17b9634bb780c7e02ba8230822684990" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/a_hwCPQbBlY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.fzi.de/en/2025/02/26/one-step-closer-to-the-moon-through-international-cooperation/">FZI</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="dmbjbwhwyeu">Welcome to the future, where there are no other humans.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="05f12866fdd4c32a9372563b0d407f5d" 
style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/DmbJbwhWYEU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.zj-humanoid.com/">Zhejiang Humanoid</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="8oot8cnpai0"><em>This is our latest work on robotic fish, and it’s also the first underwater robot from DRAGON Lab. </em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2e719c55aa3bd82ab9f1c1123ecfe88f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8oot8CnpAi0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dragon.t.u-tokyo.ac.jp/">DRAGON Lab</a> ]</p><p>Thanks, Moju!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="awrnl8rcbmk">Watch this one simple trick to make <a href="https://spectrum.ieee.org/topic/robotics/humanoid-robots/" target="_blank">humanoid robots</a> cheaper and safer!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e69112d5d83fdcd0226a652b2b7cb898" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AwRnL8rcBmk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.zj-humanoid.com/">Zhejiang Humanoid</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="90twy79yffo"><em>Gugusse and the Automaton</em> is a 1897 French film by <a href="https://en.wikipedia.org/wiki/Georges_M%C3%A9li%C3%A8s" 
target="_blank">Georges Méliès</a> featuring a humanoid robot in a depiction that’s nearly as realistic as some of the humanoid promo videos we’ve seen lately.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4d2d9a469b74b0b57aa6d34c9859e471" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/90tWY79YfFo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.loc.gov/item/2026125501/?loclr=blogloc">Library of Congress</a> ] via [ <a href="https://gizmodo.com/first-film-to-depict-a-robot-discovered-in-michigan-2000727995">Gizmodo</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="lm3htxushva"><em>At Agility, we create automated solutions for the hardest work. We’re incredibly proud of how far we’ve come, and can’t wait to show you what’s next.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="007204bb742016f199f77925109d19ef" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/LM3hTXUShvA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.agilityrobotics.com/">Agility</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="si-jhnqcjt0"><a href="https://www.nist.gov/people/kamel-s-saidi" target="_blank">Kamel Saidi</a>, robotics program manager at the <a href="https://www.nist.gov/" target="_blank">National Institute of Standards and Technology (NIST)</a>, on how performance standards can pave the way for humanoid adoption.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" 
data-rm-shortcode-id="4484b448f54ec06f10f3985953b03c9b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/sI-jhnqcJt0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://humanoidssummit.com/">Humanoids Summit</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gwsl1oh1i4w"><em><a href="https://people.eecs.berkeley.edu/~anca/" target="_blank">Anca Dragan</a> is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and now at Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bafb48d4c6dec0871809a152ad842b8e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/GwSl1OH1i4w?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.youtube.com/playlist?list=PLCkt0hth826G9AtnOrQsPbKKD5JmdaMXb">Waymo Podcast</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="r9ugdinfhbm">This <a href="https://www.grasp.upenn.edu/" target="_blank">UPenn GRASP</a> SFI Seminar is by Junyao Shi: “Unlocking Generalist Robots with Human Data and Foundation Models.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c1e2f8d6dc1171693ed8ee0180f30e9d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" 
src="https://www.youtube.com/embed/r9UGdInfhBM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets that span tasks, environments, and embodiments, thus limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.</em></blockquote><p>[ <a href="https://www.grasp.upenn.edu/events/spring-2026-grasp-sfi-junyao-shi/">UPenn</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 06 Mar 2026 16:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-robot-hand-artificial-muscles</guid><category>Humanoid-robots</category><category>Video-friday</category><category>Underwater-robots</category><category>Bipedal-robots</category><category>Robot-videos</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/robotic-hand-grasping-a-red-bull-can-against-a-dark-background.png?id=65162441&amp;width=980"></media:content></item><item><title>The Millisecond That Could Change Cancer Treatment</title><link>https://spectrum.ieee.org/flash-radiotherapy</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/photo-of-a-man-in-a-lab-coat-adjusting-a-large-piece-of-medical-equipment-thats-pointed-at-the-head-of-a-partial-mannequin.jpg?id=65111419&width=1200&height=400&coordinates=0%2C428%2C0%2C428"/><br/><br/><p><strong>Inside a cavernous hall</strong> at the Swiss-French border, the air hums with high voltage and possibility. From his perch on the wraparound observation deck, physicist <a href="https://www.researchgate.net/profile/Walter-Wuensch" rel="noopener noreferrer" target="_blank">Walter Wuensch</a> surveys a multimillion-dollar array of accelerating cavities, klystrons, modulators, and pulse compressors—hardware being readied to drive a new generation of linear particle accelerators.</p><p>Wuensch has spent decades working with these machines to crack the deepest mysteries of the universe. Now he and his colleagues are aiming at a new target: cancer. Here at <a href="https://home.cern/" rel="noopener noreferrer" target="_blank">CERN</a> (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Photo of a white-haired man standing next to floor-to-ceiling experimental equipment with many tubes and wires. 
" class="rm-shortcode" data-rm-shortcode-id="ce95648ce39bd5c09f73bddf6af75766" data-rm-shortcode-name="rebelmouse-image" id="f8147" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-white-haired-man-standing-next-to-floor-to-ceiling-experimental-equipment-with-many-tubes-and-wires.jpg?id=65111429&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">CERN researcher Walter Wuensch says the particle physics lab’s work on FLASH radiotherapy is “generating a lot of excitement.”</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">CERN</small></p><p>Radiation therapy has been a cornerstone of cancer treatment since shortly after <a href="https://medicalmuseum.health.mil/index.cfm/visit/exhibits/virtual/xraydiscovery/index" target="_blank">Wilhelm Conrad Röntgen</a> discovered X-rays in 1895. Today, more than half of all cancer patients receive it as part of their care, typically in relatively low doses of X-rays delivered over dozens of sessions. Although this approach often kills the tumor, it also wreaks havoc on nearby healthy tissue. Even with modern precision targeting, the potential for collateral damage limits how much radiation doctors can safely deliver.</p><p>FLASH radiotherapy flips the conventional approach on its head, delivering a single dose of ultrahigh-power radiation in a burst that typically lasts less than one-tenth of a second. In study after study, this technique causes significantly less injury to normal tissue than conventional radiation does, without compromising its antitumor effect.</p><p>At CERN, which I visited last July, the approach is being tested and refined on accelerators that were never intended for medicine. 
If ongoing experiments here and around the world continue to bear out these results, FLASH could transform radiotherapy—delivering stronger treatments, fewer side effects, and broader access to lifesaving care.</p><p>“It’s generating a lot of excitement,” says Wuensch, a researcher at CERN’s Linear Electron Accelerator for Research (CLEAR) facility. “We accelerator people are thinking, Oh, wow, here’s an application of our technology that has a societal impact which is more immediate than most high-energy physics.”</p><h2>The Unlikely Birth of FLASH Therapy</h2><p>The breakthrough that led to FLASH emerged from a line of experiments that began in the 1990s at <a href="https://institut-curie.org/" target="_blank">Institut Curie</a> in Orsay, near Paris. Researcher <a href="https://institut-curie.org/person/vincent-favaudon" target="_blank">Vincent Favaudon</a> was using a low-energy electron accelerator to study radiation chemistry. Aiming the accelerator’s beam at mouse lungs, Favaudon expected the radiation to produce scar tissue, or fibrosis. But when he exposed the lungs to ultrafast blasts of radiation, at dose rates a thousand times as high as those used in conventional radiation therapy, the expected fibrosis never appeared.</p><p>Puzzled, Favaudon turned to <a href="https://scholar.google.com/citations?user=xx8VQkMAAAAJ&hl=fr" target="_blank">Marie-Catherine Vozenin</a>, a radiation biologist at Curie who specialized in radiation-induced fibrosis. “When I looked at the slides, there was indeed no fibrosis, which was very, very surprising for this type of dose,” recalls Vozenin, who now works at <a href="https://www.hug.ch/en" target="_blank">Geneva University Hospitals</a>, in Switzerland.</p><h3>How to Measure Radiation Doses</h3><br/><p>Radiation therapy uses a variety of units to refer to the amount of energy received by the patient.
Here are the main ones under the International System of Units, or SI.</p><p><strong>Gray (Gy):</strong> A measure of the absorbed dose—that is, how much radiation energy is absorbed by the body. One gray equals 1 joule of radiation energy per kilogram of matter. FLASH delivers a single dose of 40 Gy or more in a fraction of a second. Conventional radiation therapy, by contrast, may deliver a total dose of 40 to 80 Gy but over the course of several weeks.</p><p><strong>Sievert (Sv):</strong> A measure of the effective dose—that is, the health effects of the radiation, with different types of ionizing radiation (gamma rays, X-rays, alpha particles, and so on) having different effects. One sievert equals 1 joule per kilogram weighted for the biological effectiveness of the radiation and the tissues exposed.</p><h3></h3><br/><p>The pair expanded the experiments to include cancerous tumors. The results upended a long-held trade-off of radiotherapy: the idea that you can’t destroy a tumor without also damaging the host. “This differential effect is really what we want in radiation oncology, not damaging normal tissue but killing the tumors,” Vozenin says.</p><p>They repeated the protocol across different types of tissue and tumors. By 2014, they had gathered enough evidence to publish their findings in <a href="https://www.science.org/doi/10.1126/scitranslmed.3008973" target="_blank"><em>Science Translational Medicine</em></a>. Their experiments confirmed that delivering an ultrahigh dose of 10 gray or more in less than a tenth of a second could eradicate tumors in mice while leaving surrounding healthy tissue virtually unharmed. For comparison, a typical chest X-ray delivers about 0.1 milligray, while a session of conventional radiation therapy might deliver a total of about 2 gray per day. 
(The authors called the effect “FLASH” because of the quick, high doses involved, but it’s not an acronym.)</p><h3></h3><br/><img alt="Three sets of images comparing highly magnified tissue samples." class="rm-shortcode" data-rm-shortcode-id="00fc1edc5ddb29e98aa8bb4755930278" data-rm-shortcode-name="rebelmouse-image" id="6ce44" loading="lazy" src="https://spectrum.ieee.org/media-library/three-sets-of-images-comparing-highly-magnified-tissue-samples.jpg?id=65111609&width=980"/><h3></h3><br/><p>Many cancer experts were skeptical. The FLASH effect seemed almost too good to be true. “It didn’t get a lot of traction at first,” recalls <a href="https://med.stanford.edu/profiles/Billy_Loo" target="_blank">Billy Loo</a>, a Stanford radiation oncologist specializing in lung cancer. “They described a phenomenon that ran counter to decades of established radiobiology dogma.”</p><p>But in the years since then, researchers have observed the effect across a wide range of tumor types and animals—beyond mice to zebrafish, fruit flies, and even a few human subjects, with the same protective effect in the brain, lungs, skin, muscle, heart, and bone.</p><p>Why this happens remains a mystery. “We have investigated a lot of hypotheses, and all of them have been wrong,” says Vozenin. Currently, the most plausible theory emerging from her team’s research points to metabolism: Healthy and cancerous cells may process reactive oxygen species—unstable oxygen-containing molecules generated during radiation—in very different ways.</p><h2>Adapting Accelerators for FLASH</h2><p>At the time of the first FLASH publication, Loo and his team at Stanford were also focused on dramatically speeding up radiation delivery. But Loo wasn’t chasing a radiobiological breakthrough. He was trying to solve a different problem: motion.</p><p>“The tumors that we treat are always moving targets,” he says.
“That’s particularly true in the lung, where because of breathing motion, the tumors are constantly moving.”</p><p>To bring FLASH therapy out of the lab and into clinical use, researchers like Vozenin and Loo needed machines capable of delivering fast, high doses with pinpoint precision deep inside the body. Most early studies relied on low-energy electron beams like Favaudon’s 4.5-megaelectron-volt Kinetron—sufficient for surface tumors, but unable to reach more than a few centimeters into a human body. Treating deep-seated cancers in the lung, brain, or abdomen would require far higher particle energies.</p><h3></h3><br/><img alt="Photo of floor-to-ceiling electromagnetic hardware with many tubes and pipes, some of which is copper-colored." class="rm-shortcode" data-rm-shortcode-id="3b3bd74be1a8bc555eb51aa843114f06" data-rm-shortcode-name="rebelmouse-image" id="39797" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-floor-to-ceiling-electromagnetic-hardware-with-many-tubes-and-pipes-some-of-which-is-copper-colored.jpg?id=65111435&width=980"/><h3></h3><br/><p>They also needed an alternative to conventional X-rays. In a clinical linac, X-ray photons are produced by dumping high-energy electrons into a bremsstrahlung target, which is made of a material with a high atomic number, like tungsten or copper. The target slows the electrons, converting their kinetic energy into X-ray photons. It’s an inherently inefficient process that wastes most of the beam power as heat and makes it extremely difficult to reach the ultrahigh dose rates required for FLASH. High-energy electrons, by contrast, can be switched on and off within milliseconds. And because they have a charge and can be steered by magnets, electrons can be precisely guided to reach tumors deep within the body. 
(Researchers are also investigating protons and carbon ions; see the sidebar, “What’s the Best Particle for FLASH Therapy?”)</p><p>Loo turned to the <a href="https://www6.slac.stanford.edu/" target="_blank">SLAC National Accelerator Laboratory</a> in Menlo Park, Calif., where physicist <a href="https://profiles.stanford.edu/sami-tantawi" rel="noopener noreferrer" target="_blank">Sami Gamal-Eldin Tantawi</a> was redefining how electromagnetic waves move through linear accelerators. Tantawi’s findings allowed scientists to precisely control how energy is delivered to particles—paving the way for compact, efficient, and finely tunable machines. It was exactly the kind of technology FLASH therapy would need to target tumors deep inside the body.</p><p>Meanwhile, Vozenin and other European researchers turned to CERN, best known for its 27-kilometer Large Hadron Collider (LHC) and the 2012 discovery of the Higgs boson, the “God particle” that gives other particles their mass. </p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/particle-physics-ai" target="_blank">AI Hunts for the Next Big Thing in Physics</a></p><p>CERN is also home to a range of smaller linear accelerators—including CLEAR, where Wuensch and his team are adapting high-energy physics tools for medicine.</p><h3>What’s the Best Particle for FLASH Therapy?</h3><br/><p>Even as research on FLASH radiotherapy advances, a central question remains: What kind of particle will deliver it best? The main contenders are electrons, protons, and carbon ions. Each has distinct advantages, limitations, and implications for cost, complexity, and clinical reach.</p><p><strong>Electrons</strong>—long used to treat surface tumors and to generate X-rays—are light, nimble particles, far easier to control than protons or carbon ions. At low energies, they stop quickly in tissue, but new high-energy systems can drive electrons deeper. 
Now researchers are working on machines that combine multiple high-energy beams at different angles to let doctors sculpt radiation doses that match the tumor’s shape.</p><p>That principle underpins Billy Loo’s PHASER (Pluridirectional High-energy Agile Scanning Electron Radiotherapy) system, developed at Stanford and SLAC and licensed to a startup called <a href="https://www.tibaray.com/" target="_blank">TibaRay</a>. An array of high-efficiency linacs generates X-ray beams from many directions at once. Their high output overcomes the inefficiency of electron-to-photon conversion to deliver the dose at FLASH speed. Beam convergence at the tumor and electronic shaping conform the dose in three dimensions, producing uniform coverage with relatively simple infrastructure. </p><p><strong>Protons</strong> have led the way in early clinical trials, largely because existing proton therapy centers can be adapted to deliver FLASH doses. In 2020, the University of Cincinnati Health launched the <a href="https://www.uchealth.com/en/media-room/articles/ground-breaking-cancer-research-is-in-your-backyard" rel="noopener noreferrer" target="_blank">first human FLASH trial</a> to use proton beams, to treat cancer that had metastasized to bones. “If I want to be pragmatic, the proton beam is ready to go, so let’s move with what we have,” says Geneva University Hospitals’ Marie-Catherine Vozenin.</p><p>Protons can penetrate up to 30 centimeters, reaching deep-seated tumors. But the delivery of protons in a continuous beam limits the dose rates. Also, proton systems are far larger and more expensive than, say, X-ray machines, which will likely constrain their availability to specialized centers.</p><p><strong>Carbon ions</strong>, used in a handful of elite facilities, offer even higher precision and biological effectiveness compared to electrons and protons. Their Bragg peak—a sudden deposition of energy at a specific depth—makes them appealing for deep or complex tumors. 
But that unmatched precision comes at a steep price, with each facility costing upward of US $300 million. —T.C.</p><h3></h3><br/><p>Unlike the LHC, which loops particles around a massive ring to build up energy before smashing them together, linear accelerators like CLEAR send particles along a straight, one-time path. That setup allows for greater precision and compactness, making it ideal for applications like FLASH.</p><p>At the heart of the CLEAR facility, Wuensch points out the 200-MeV linear accelerator with its 20-meter beamline. This is “a playground of creativity,” he says, for the physicists and engineers who arrive from all over the world to run experiments.</p><p>The process begins when a laser pulse hits a photocathode, releasing a burst of electrons that form the initial beam. These electrons travel through a series of precisely machined copper cavities, where high-frequency microwaves push them forward. The electrons then move through a network of magnets, monitors, and focusing elements that shape and steer them toward the experimental target with submillimeter precision.</p><p>Instead of a continuous stream, the electron beam is divided into nanosecond-long bunches—billions of electrons riding the radio-frequency field like surfers. Inside the accelerator’s cavities, the field oscillates 12 billion times per second, so timing is everything: Only electrons that arrive perfectly in phase with the accelerating wave will gain energy. That process repeats through a chain of cavities, each giving the bunches another push, until the beam reaches its final energy of 200 MeV.</p><h3></h3><br/><img alt="Close-up photo of an etched copper disc being held under a microscope by a gloved hand."
class="rm-shortcode" data-rm-shortcode-id="9cbcce34df51565a0cd0cea335517027" data-rm-shortcode-name="rebelmouse-image" id="6eeba" loading="lazy" src="https://spectrum.ieee.org/media-library/close-up-photo-of-an-etched-copper-disc-being-held-under-a-microscope-by-a-gloved-hand.jpg?id=65111478&width=980"/><p><span>Much of this architecture draws directly from the </span><a href="https://clic-study.org/" target="_blank">Compact Linear Collider study</a><span>, a decades-long CERN project aimed at building a next-generation collider. The proposed CLIC machine would stretch 11 kilometers and collide electrons and positrons at 380 gigaelectron volts. To do that in a linear configuration—without the multiple passes around a ring like the LHC—CERN engineers have had to push for extremely high acceleration gradients to boost the electrons to high energies over relatively short distances—up to 100 megavolts per meter.</span></p><p>Wuensch leads me to a large experimental hall housing prototype structures from the CLIC effort, and points out the microwave devices that now help drive FLASH research. Though the future of CLIC as a collider remains uncertain, its infrastructure is already yielding dividends: smaller, high-gradient accelerators that may one day be as suited for curing cancer as they are for smashing particles.</p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/supercolliders" target="_blank">Four Ways Engineers Are Trying to Break Physics</a></p><p>The power behind the high gradients comes from <a href="https://aries.web.cern.ch/xbox" target="_blank">CERN’s Xboxes</a>, the X-band RF systems that dominate the experimental hall. Each Xbox houses a klystron, modulator, pulse compressor, and waveguide network to generate and shape the microwave pulses. 
The pulse compressors store energy in resonant cavities and then release it in a microsecond burst, producing peaks of up to 200 megawatts; delivered continuously, that would be enough to power at least 40,000 homes. The Xboxes let researchers fine-tune the power, timing, and pulse shape.</p><p>According to Wuensch, many of the recent accelerator developments were enabled by advances in computer simulation and high-precision three-dimensional machining. These tools allow the team to iterate quickly, designing new accelerator components and improving beam control with each generation.</p><p>Still, real-world challenges remain. The power demands are formidable, as are the space requirements; for all the talk of its “compact” design, the original CLIC was meant to span kilometers. Obviously, a hospital needs something that’s actually compact.</p><p>“A big challenge of the project,” says Wuensch, “is to transform this kind of technology and these kinds of components into something that you can imagine installing in a hospital, and it will run every day reliably.”</p><p>To that end, CERN researchers have teamed up with the <a href="https://www.lausanneuniversityhospital.com/home" target="_blank">Lausanne University Hospital</a> (known by its French acronym, CHUV) and the French medical technology company <a href="https://www.theryq-alcen.com/" target="_blank">Theryq</a> to design a hospital facility capable of treating large and deep-seated tumors on the very short time scales needed for FLASH, scaled down to fit in a clinical setting.</p><h2>Theryq’s Approach to FLASH</h2><p>Theryq’s research center and factory are located in southern France, near the base of Montagne Sainte-Victoire, a jagged spine of limestone that Paul Cézanne painted dozens of times, capturing its shifting light and form.</p><p>“The solution that we are trying to develop here is something which is extremely versatile,” says <a
href="https://www.linkedin.com/in/ludovic-le-meunier-7084382?originalSubdomain=fr" target="_blank">Ludovic Le Meunier</a>, CEO of the expanding company. “The ultimate goal is to be able to treat any solid tumor anywhere in the body, which is about 90 percent of the cancer these days.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Futuristic scientific equipment setup, featuring streamlined machinery and intricate components." class="rm-shortcode" data-rm-shortcode-id="91c6f9815a719ce2a415181d8352df23" data-rm-shortcode-name="rebelmouse-image" id="5b999" loading="lazy" src="https://spectrum.ieee.org/media-library/futuristic-scientific-equipment-setup-featuring-streamlined-machinery-and-intricate-components.jpg?id=65111601&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Theryq’s FLASHDEEP system, under development with CERN and the company’s clinical partners, has a 13.5-meter-long, 140-MeV linear accelerator. That’s strong enough to treat tumors at depths of up to about 20 centimeters in the body. The patient will remain in a supported standing position during the split-second irradiation.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">THERYQ</small></p><p>Theryq’s push to bring FLASH radiotherapy from the lab to clinic has followed a three-pronged rollout, with each device engineered for a specific depth and clinical use. The first machine, <a href="https://www.theryq-alcen.com/flash-radiotherapy-products/flashknife/" target="_blank">FLASHKNiFE</a>, was unveiled in 2020. Designed for superficial tumors and intraoperative use, the system delivers electron beams at 6 or 9 MeV. 
A prototype installed that same year at CHUV is conducting a phase-two trial for patients with localized skin cancer.</p><p>More recently, Theryq launched <a href="https://www.theryq-alcen.com/flash-radiotherapy-products/flashlab/" target="_blank">FLASHLAB</a>, a compact, 7-MeV platform for radiobiology research.</p><p>The company’s most ambitious system, <a href="https://www.theryq-alcen.com/flash-radiotherapy-products/flashdeep/" target="_blank">FLASHDEEP</a>, is still under development. The 13.5-meter-long electron source will deliver very high-energy electrons of as much as 140 MeV to depths of up to 20 centimeters inside the body in less than 100 milliseconds. An integrated CT scanner, built into a patient-positioning system developed by <a href="https://leocancercare.com/" target="_blank">Leo Cancer Care</a>, captures images that stream directly into the treatment-planning software, enabling precise calculation of the radiation dose. “Before we actually trigger the beam or the treatment, we make stereo images to verify at the very last second that the tumor is exactly where it should be,” says Theryq technical manager <a href="https://www.linkedin.com/in/philippe-liger-977a3316?originalSubdomain=fr" target="_blank">Philippe Liger</a>.</p><h2>FLASH Therapy Moves to Animal Tests</h2><p>While CERN’s CLEAR accelerator has been instrumental in characterizing FLASH parameters, researchers seeking to study FLASH in living organisms must look elsewhere: CERN doesn’t allow animal experiments on-site. That’s one reason why a growing number of scientists are turning to PITZ, the Photo Injector Test Facility in Zeuthen, a leafy lakeside suburb of Berlin.</p><p>PITZ is part of Germany’s national accelerator lab and is responsible for developing the electron source for the <a href="https://www.xfel.eu/" target="_blank">European X-ray Free-Electron Laser</a>.
Now PITZ is emerging as a hub for FLASH research, with an unusually tunable accelerator and a dedicated biomedical lab to ensure controlled conditions for preclinical studies.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A photo showing a row of experimental electronic equipment on racks" class="rm-shortcode" data-rm-shortcode-id="b3c62ff858a14ceb04a3a4549f85d68a" data-rm-shortcode-name="rebelmouse-image" id="cfbfe" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photo-showing-a-row-of-experimental-electronic-equipment-on-racks.jpg?id=65111551&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A photo of a closeup of a gloved hand holding a sample of a purple liquid above a piece of equipment." 
class="rm-shortcode" data-rm-shortcode-id="e4f204a1631b000ef17c7be15995ef83" data-rm-shortcode-name="rebelmouse-image" id="82e52" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photo-of-a-closeup-of-a-gloved-hand-holding-a-sample-of-a-purple-liquid-above-a-piece-of-equipment.jpg?id=65111525&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">At Germany’s Photo Injector Test Facility in Zeuthen (PITZ), the electron-beam accelerator [top] is used to irradiate biological targets in early-stage animal tests of FLASH radiotherapy [bottom].</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Top: Frieder Mueller; Bottom: MWFK</small></p><p>“The biggest advantage of our facility is that we can do a very stepwise, very defined and systematic study of dose rates,” says <a href="https://www.linkedin.com/in/anna-grebinyk-186a8245?originalSubdomain=de" target="_blank">Anna Grebinyk</a>, a biochemist who heads the new biomedical lab, “and systematically optimize the FLASH effect to see where it gets the best properties.”</p><p>The experiments begin with zebrafish embryos, prized for early-stage studies because they’re transparent and develop rapidly. After the embryos, researchers test the most promising parameters in mice. To do that, the PITZ team uses a small-animal radiation research platform, complete with CT imaging and a robotic positioning system adapted from CERN’s CLEAR facility.</p><p>What sets PITZ apart is the flexibility of its beamline. The 30-meter accelerator system steers electrons with micrometer precision, producing electron bunches with exceptional brightness and low emittance—a metric of beam quality. “We can dial in any distribution of bunches we want,” says Frank Stephan, group leader at PITZ. “That gives us tremendous control over time structure.”</p><p>Timing matters.
At PITZ, the laser-struck photocathode generates electron bunches that are accelerated immediately, at up to 60 million volts per meter. A fast electromagnetic kicker system acts as a high-speed gatekeeper, selectively deflecting individual electron bunches from a high-repetition beam and steering them according to researchers’ needs. This precise, bunch-by-bunch control is essential for fine-tuning beam properties for FLASH experiments and other radiation therapy studies.</p><p>“The idea is to make the complete treatment within one millisecond,” says Stephan. “But of course, you have to [trust] that within this millisecond, everything works fine. There is not a chance to stop [during] this millisecond. It has to work.”</p><p>Regulating the dose remains one of the biggest technical hurdles in FLASH. The ionization chambers used in standard radiotherapy can’t respond accurately when dose rates spike hundreds of times higher in a matter of microseconds. So researchers are developing new detector systems to precisely measure these bursts and keep pace with the extreme speed of FLASH delivery.</p><h2>FLASH as a Research Tool</h2><p>Beyond its therapeutic potential, FLASH may also open new windows to illuminate cancer biology. “What is really, really superinteresting, in my opinion,” says Vozenin, “is that we can use FLASH as a tool to understand the difference between normal tissue and tumors. There must be something we’re not aware of that really distinguishes the two—and FLASH can help us find it.” Identifying those differences, she says, could lead to entirely new interventions, not just with radiation, but also with drugs.</p><p>Vozenin’s team is currently testing a hypothesis involving long-lived proteins present in healthy tissue but absent in tumors. 
If those proteins prove to be key, she says, “we’re going to find a way to manipulate them—and perhaps reverse the phenomenon, even [turn] a tumor back into a normal tissue.”</p><p>Proponents of FLASH believe it could help close the cancer care gap worldwide; in low-income countries, only about 10 percent of patients have access to radiotherapy, and in middle-income countries, only about 60 percent of patients do, according to the International Atomic Energy Agency. Because FLASH treatment can often be delivered in a single brief session, it could spare patients from traveling long distances for weeks of treatment and allow clinics to treat many more people.</p><p>High-income countries stand to benefit as well. Fewer sessions mean lower costs, less strain on radiotherapy facilities, and fewer side effects and disruptions for patients.</p><p>The big question now is, How long will it take? Researchers I spoke with estimate that FLASH could become a routine clinical option in about 10 years—after the completion of remaining preclinical studies and multiphase human trials, and as machines become more compact, affordable, and efficient. Much of the momentum comes from a growing field of startups competing to build devices, but the broader scientific community remains remarkably open and collaborative.</p><p>“Everyone has a relative who knows about cancer because of their own experience,” says Stephan. “My mother died of it. In the end, we want to do something good for mankind. 
That’s why people work together.” <span class="ieee-end-mark"></span></p><p><em>This article appears in the March 2026 print issue.</em></p>]]></description><pubDate>Fri, 06 Mar 2026 14:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/flash-radiotherapy</guid><category>Medical-technology</category><category>Cern</category><category>High-energy-physics</category><category>Linear-accelerator</category><category>Electron-beams</category><category>Cancer-treatments</category><dc:creator>Tom Clynes</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/photo-of-a-man-in-a-lab-coat-adjusting-a-large-piece-of-medical-equipment-thats-pointed-at-the-head-of-a-partial-mannequin.jpg?id=65111419&amp;width=980"></media:content></item><item><title>Scenario Modeling and Array Design for Non-Terrestrial Networks (NTNs)</title><link>https://content.knowledgehub.wiley.com/scenario-modeling-and-array-design-for-non-terrestrial-networks-ntns/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/mathworks-logo.png?id=26851519&width=980"/><br/><br/><p>Non-terrestrial networks (NTNs) using low earth orbit (LEO) satellites present unique technical challenges, from managing large satellite constellations to ensuring reliable communication links. In this webinar, we’ll explore how to address these complexities using comprehensive modeling and simulation techniques. Discover how to model and analyze satellite orbits, onboard antennas and arrays, transmitter power amplifiers (PAs), signal propagation channels, and the RF and digital receiver segments—all within an integrated workflow. Learn the importance of including every link component to achieve accurate, reliable system performance.</p><p><strong>Highlights include:</strong></p><ul><li><span>Modeling large satellite constellations<br/></span></li><li><span>Analyzing and visualizing time-varying visibility and link closure</span></li><li><span>Using graphical apps for antenna analysis and RF component design</span></li><li><span>Modeling PAs and digital predistortion</span></li><li><span>Simulating interference effects in communication links</span></li></ul><div><a href="https://content.knowledgehub.wiley.com/scenario-modeling-and-array-design-for-non-terrestrial-networks-ntns/" target="_blank">Register now for this free webinar!</a></div>]]></description><pubDate>Fri, 06 Mar 2026 11:00:03 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/scenario-modeling-and-array-design-for-non-terrestrial-networks-ntns/</guid><category>Type-webinar</category><category>Nonterrestrial-networks</category><category>Satellites</category><category>Satellite-communications</category><dc:creator>MathWorks</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/26851519/origin.png"></media:content></item><item><title>From TV Repairman to Electromagnetic Compatibility 
Expert</title><link>https://spectrum.ieee.org/from-tv-repairman-to-electromagnetic-compatibility-expert</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-elderly-white-man-in-a-dress-shirt-and-glasses-smiling-in-his-at-home-workshop.jpg?id=65112143&width=1200&height=400&coordinates=0%2C1042%2C0%2C1042"/><br/><br/><p>No one had very high career aspirations for teenager <a href="https://ieeexplore.ieee.org/author/38227767000" rel="noopener noreferrer" target="_blank">David A. Weston</a>—except for Weston himself. Growing up in London, he scored low on the U.K. national assessment test given to students finishing primary school. The result meant that his next path was either to become a laborer or attend a vocational school to learn a trade.</p><p>What Weston really wanted to do was to work as a radio and TV repairman. He was fascinated by how the devices worked. He had taught himself to build an AM radio when he was 15. Even after he showed it to his parents and teachers, though, they still didn’t think he was smart enough to pursue his chosen career, he says.</p><h3>David A. Weston</h3><br/><p><strong>Employer </strong></p><p><strong></strong>EMC Consulting, in Arnprior, Ont., Canada</p><p><strong>Job title</strong></p><p>Retired consultant</p><p><strong>Member grade </strong></p><p>Life member</p><p><strong>Alma mater </strong></p><p><strong></strong>Croydon Technical College, London</p><h3></h3><br/><p>So, later that year, the underweight teen got a job on a construction site carrying heavy loads of building materials in a hod, a three-sided wooden trough. The experience convinced him he wasn’t cut out for manual labor.</p><p>He eventually earned a certificate in radio and television, the only credential he holds. The lack of academic degrees did not hold him back, though. He went on to become an expert in electromagnetic interference (EMI) and electromagnetic compatibility (EMC).</p><p>EMI is unwanted electromagnetic energy that disrupts the operation of nearby electronic devices.
EMC is the capacity of electronic devices to work correctly in a shared electromagnetic environment, neither causing interference in nearby devices nor suffering from it.</p><p>After working for a number of companies, he launched his own business more than 40 years ago: <a href="https://www.emcconsultinginc.com/" target="_blank">EMC Consulting</a>, in Arnprior, Ont., Canada. The company has helped clients meet EMI and EMC regulatory requirements.</p><p>Now 83 years old and retired, the IEEE life member recently self-published his memoir, <a href="https://www.barnesandnoble.com/w/from-a-hod-to-an-odd-em-wave-david-weston/1148995654" rel="noopener noreferrer" target="_blank"><em>From a Hod to an Odd EM Wave</em></a>.</p><p>“My memoir is about engineering persistence and human and technical discoveries,” he says. “I wanted to interest a young person, or perhaps a person later in life, in a career in engineering. If I can show that engineering is a personal, human endeavor with exciting opportunities in different fields such as medical, scientific, and the arts, maybe more women would be attracted to it.”</p><h2>From repairing radios to designing underwater devices</h2><p>In 1960 Weston enrolled in the radio and electronics program at London’s Croydon Technical College (now <a href="https://croydon.ac.uk/" rel="noopener noreferrer" target="_blank">Croydon College</a>). The school covered topics from the <a href="https://www.cityandguilds.com/" rel="noopener noreferrer" target="_blank">City and Guilds of London Institute</a>’s radio and television certificate program. He attended classes one day a week for five years while working to put himself through school.</p><p>Although his parents and his teachers might not have recognized Weston’s potential, employers did.</p><p>He got his first job in 1960, fixing televisions in a small repair shop. Then he helped repair tape recorders.
In his spare time, he studied transistors and semiconductors.</p><p>Everything he knows, he says, he learned by reading books and research papers, and from on-the-job training.</p><p>Later in 1960, he worked as a mechanical examiner for the U.K. <a href="https://en.wikipedia.org/wiki/Ministry_of_Aviation" rel="noopener noreferrer" target="_blank">Ministry of Aviation</a>, where he calibrated precision meters and potentiometers, which are variable resistors that monitor, control, and measure industrial equipment.</p><h3></h3><br/><p>“Engineering is creative. To have a new idea or design accepted is rewarding, satisfying, pleasurable, and even exciting.”</p><h3></h3><br/><p>He left the ministry in 1963 because he found the work boring, he says, and he was hired as a technician with the <a href="https://www.gov.uk/government/organisations/medical-research-council/about" target="_blank">Medical Research Council</a>’s neuropsychiatric research unit in Carshalton. The institution researches the biological causes of mental illness. His manager was interested in learning about advances in medical electronics and eagerly shared his knowledge with Weston.</p><p>One of Weston’s tasks was to build an electroencephalography (EEG) calibrator to measure responses from a patient’s brain activity. The methods used at the time to detect a brain tumor—before <a href="https://spectrum.ieee.org/mri-pioneer-to-receive-ieee-medal-for-innovations-in-healthcare-technology" target="_self">MRI machines</a> were developed—involved monitoring the patient’s speech and coordination, followed by taking a biopsy, which was not without danger, he says.</p><p>He used an ultrasonic transmitter and receiver to measure the time of transmission to the midline in the brain to determine whether the person had a tumor. If the midline had shifted, it would indicate the presence of a tumor, and a biopsy would be performed to confirm it. 
The measure of the evoked response in the brain was the only reliable indicator.</p><p>Weston earned his radio and TV certificate in 1965, leaving the research facility a year later to join Divcon (now part of <a href="https://www.oceaneering.com/" target="_blank">Oceaneering International</a>), a commercial diving company based in London that developed deep-sea helium diving helmets. Weston helped design a waterproof handheld communication device for divers that could withstand the high pressure in diving bells, the open-bottom pressurized chambers that transported them underwater.</p><p>Weston then moved to Hamburg, Germany, in 1969 to work for <a href="https://plath-corporation.de/en/" rel="noopener noreferrer" target="_blank">Plath</a>, an electronics manufacturer. He was tasked, along with other engineers from England, to design a servo control loop.</p><p>“Unfortunately it oscillated so badly when first being turned on that it shook itself to bits,” he says.</p><p>He left to work as a senior engineer at <a href="https://www.kistler.com/INT/en/kistler-acquisitions-win-win-for-all-concerned/C00000494" rel="noopener noreferrer" target="_blank">Dr. Staiger Mohilo and Co.</a> (now part of <a href="https://www.kistler.com/INT/en/" rel="noopener noreferrer" target="_blank">Kistler</a>), in Schorndorf, Germany. It manufactured torque sensors, force transducers, and specialized test stand systems. Weston designed a process control computer. 
He says his boss told him that the controller had to work in close proximity to—and from the same power source as—a nearby machine without interfering with it or suffering interference from it.</p><p>“I was thus introduced to the idea of electromagnetic compatibility,” he says.</p><p>After three years, he left to join the <a href="https://www.mobility.siemens.com/global/en.html" rel="noopener noreferrer" target="_blank">Siemens Mobility</a> train group in Braunschweig, Germany, where he helped develop an electronic train-crossing light controller. The original warning lights on crossing gates used a mercury tube as a switch.</p><p>“The concern was the danger to personnel if the tube broke,” he says. “The simple and inexpensive solution was to put the tube in a metal container.”</p><p>Weston and his wife decided to leave Germany for Canada in 1975, after their young son began forgetting how to speak English.</p><h2>Working on the space shuttle and a particle accelerator</h2><p>His first job in the country was as an engineer for <a href="https://www.cae.com/" rel="noopener noreferrer" target="_blank">Canadian Aviation Electronics</a> in Montreal. CAE helped design the remote manipulator system in robotic hand controllers and simulation systems used to train astronauts for the space shuttle.</p><p>The robotic arm, known as <a href="https://en.wikipedia.org/wiki/Canadarm" rel="noopener noreferrer" target="_blank">Canadarm</a>, was used to deploy, maneuver, and capture payloads for the astronauts.
Weston’s engineering team designed the display and control panel as well as the hand controllers located in the shuttle’s flight deck.</p><p>“I was attracted to the EMC aspects of the project and avidly studied everything I could on the topic,” he says.</p><p>He also helped develop a system that would protect an aircraft’s deployable black box from lightning strikes.</p><p>“I used a computer program to analyze the EMI field at close proximity to the black box to predict the lightning current flowing into the aircraft structure,” he says.</p><p>While enjoying the warm winter weather during a 1975 visit to a supplier on Long Island, N.Y., he decided he wanted to move his family there and asked whether any companies in the area were hiring. He was told that <a href="https://www.bnl.gov/world/" rel="noopener noreferrer" target="_blank">Brookhaven National Laboratory</a>, in Upton, was, so he applied for a position working on the ring system for the <a href="https://en.wikipedia.org/wiki/ISABELLE" rel="noopener noreferrer" target="_blank">Isabelle proton colliding-beam particle accelerator</a>.</p><p>The project, later known as the <a href="https://spectrum.ieee.org/supercolliders" target="_self">colliding beam accelerator</a>, was a collaboration between the lab and the <a href="https://www.energy.gov/" rel="noopener noreferrer" target="_blank">U.S. Department of Energy</a>. The 200+200 giga-electron volt proton-proton collider was designed to use advanced superconducting magnets cooled by a massive helium refrigeration system to produce high-energy collisions. The GeV refers to the collision energy in a particle accelerator.</p><h3>Weston’s Advice for Budding Engineers</h3><br/><ul><li>Follow the field in which you are most interested.</li><li>Don’t be afraid to work in other countries; it can be a rewarding, enriching experience.</li><li>Question the results of measurements or analyses. If it doesn’t seem right, it probably isn’t. 
Look at a similar publication on the same topic for a good correlation. </li><li>Don’t be too shy to ask simple questions. That’s how we learn and grow.</li><li>Keep an open mind.</li></ul><p><span>The lab hired him in 1978, and the family moved to Long Island. After a few weeks of meeting with different departments, his boss asked him what kind of work he wanted to do. Weston told him about his idea for designing a device to detect a helium leak, should there ever be one. His machine would cover the entire 3,834-meter circumference area of the ring.</span></p><p>“The danger with increased helium-enriched air is that the oxygen level reduces until the person breathing becomes adversely affected,” he wrote in his memoir. “I found that the speed of the sound of helium increased enough to be detected, but not sufficient enough to cause a person trouble if they were in the tunnel.</p><p>“Brookhaven was considering machines that only covered a small area of the ring, but these would be unrealistic because too many machines would be needed, and the cost would have been astronomical.”</p><p>Weston’s system included an ultrasonic transmitter, a receiver, a power amplifier, and a preamplifier. It would sound an alarm if the helium content went above a certain level. People in the tunnel would be directed to go to the nearest oxygen-breathing equipment, put on a mask, and immediately evacuate. It was successfully tested.</p><p>Weston wrote a report detailing the ultrasonic helium leak detector, but shortly after, he and his wife had to return to Canada in 1978 because they were unable to get additional work permits in the United States.</p><p>When he returned to Brookhaven for a visit, his former boss told him the report was well-received. 
And he shared some news that upset Weston.</p><p>“My boss told me he took my report, changed the name on the report to his, did not mention me, and published the report as his,” Weston wrote in his memoir.</p><p>But the system was never built. The Isabelle project was canceled in July 1983 due to technical problems with fabricating the superconducting magnets.</p><p>Weston got a job working for <a href="https://satistar.com/portfolio/cal-corporation/" target="_blank">CAL Corp.</a>, an aerospace telecommunications company in Montreal. For the next 14 years, he fixed EMI problems for the company’s products, including its charge-coupled device-based space-qualified cameras, which were designed to be carried aboard a satellite.</p><p>In 1992 he realized that nearly all his work involved consulting for the company’s customers, so he decided to start his own agency. CAL generously let him take the clients he worked with, he says.</p><p>Weston then conducted EMI analysis and testing and designed EMC systems for companies around the world.</p><p>“I always had enough customers and have never had to look for work,” he says. “For me, having my own business was more secure than working for a company.”</p><p>He retired in 2022.</p><h2>IEEE as an educator</h2><p>To broaden his education, he joined IEEE in 1976 to get access to its research papers and attend its conferences, he says. He is a member of the <a href="https://www.emcs.org/" rel="noopener noreferrer" target="_blank">IEEE Electromagnetic Compatibility Society</a>.</p><p>Because he is self-educated, he was “keen to learn as much as possible by reading practical papers published by IEEE,” he says. “I met people at IEEE symposiums and listened to the authors presenting their papers.”</p><p>Those included EMC experts such as Life Fellows <a href="https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=L.O.%20Hoeft" rel="noopener noreferrer" target="_blank">Lothar O. 
“Bud” Hoeft</a>, <a href="https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&searchWithin=%22First%20Name%22:Richard&searchWithin=%22Last%20Name%22:Mohr" rel="noopener noreferrer" target="_blank">Richard J. Mohr</a>, and <a href="https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=Clayton%20R.%20Paul" rel="noopener noreferrer" target="_blank">Clayton R. Paul</a>, whose papers are published in the<a href="https://ieeexplore.ieee.org/Xplore/home.jsp" rel="noopener noreferrer" target="_blank"> IEEE Xplore Digital Library</a>. <a href="https://ieeexplore.ieee.org/search/searchresult.jsp?queryText=david%20weston&highlight=true&returnFacets=ALL&returnType=SEARCH&matchPubs=true&refinements=Author:David%20Weston" rel="noopener noreferrer" target="_blank">Several of Weston’s papers</a> are in the library as well.</p><p>His book <a href="https://www.emcconsultinginc.com/publications/" rel="noopener noreferrer" target="_blank"><em><em>Electromagnetic Compatibility: Methods, Analysis, Circuits, and Measurement</em></em></a> references many IEEE papers on data and analysis methods.</p><p>“Engineering is creative,” he says. 
“To have a new idea or design accepted is rewarding, satisfying, pleasurable, and even exciting.”</p>]]></description><pubDate>Thu, 05 Mar 2026 19:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/from-tv-repairman-to-electromagnetic-compatibility-expert</guid><category>Ieee-member-news</category><category>Type-ti</category><category>Careers</category><category>Emi</category><category>Emc</category><category>Electromagnetic-compatibility</category><dc:creator>Kathy Pretz</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-elderly-white-man-in-a-dress-shirt-and-glasses-smiling-in-his-at-home-workshop.jpg?id=65112143&amp;width=980"></media:content></item><item><title>This Student-Built EV Focuses on Repairability</title><link>https://spectrum.ieee.org/ev-battery-swapping-aria-ev</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/view-underneath-a-modular-electric-vehicle-s-hood-inside-a-simple-wire-frame-between-two-of-its-wheels-holds-an-ample-toolbox.jpg?id=65096797&width=1200&height=400&coordinates=0%2C1042%2C0%2C1042"/><br/><br/><p><span>At first glance, the </span><a href="https://www.tue.nl/en/news-and-events/news-overview/25-11-2025-tue-students-build-modular-electric-city-car-you-can-repair-yourself" target="_blank">Aria EV</a><span> doesn’t look much different from any other student-built electric prototype—no different from </span><a href="https://www.southwesterncc.edu/news/auto-students-build-program-electric-vehicle" target="_blank">the battery-powered cars built by engineering students</a><span> from dozens of universities every year. Beneath its panels, however, is a challenge to the modern auto industry: What if electric vehicles were designed to be repaired by their owners?</span></p><p>The Aria project began in 2024, when roughly 20 students assembled at <a href="https://www.tue.nl/en/" target="_blank">Eindhoven University of Technology</a> in the Netherlands under the university’s <a href="https://www.tuecomotive.nl/" target="_blank">Ecomotive</a> team structure, which operates like a small startup. Students apply, are selected, and spend a year developing a vehicle in a setting meant to mirror industry practice.</p><p>The goal, says team spokesperson <a href="https://nl.linkedin.com/in/sarpgurel?trk=people-guest_people_search-card" target="_blank">Sarp Gurel</a>, “was to make the car as accessible and repairable as possible.” Gurel, who graduated last July with a bachelor’s degree in industrial engineering and is currently working toward a master’s degree at Eindhoven, says the Aria EV is not yet road legal. Its purpose is to demonstrate that repairability can be embedded into EV architecture from the outset. 
With that objective in mind, the team focused first on the most challenging and expensive component in almost any EV: the battery.</p><h2>Modular Battery Design in EVs</h2><p>Aria’s total battery capacity is 13 kilowatt-hours, which is far below the 50- to 80-kWh packs common in mass-market electric sedans and SUVs. The scale is closer to that of a lightweight urban vehicle or neighborhood EV, which is more appropriate for a student-built prototype focused on concept validation rather than long-range highway travel.</p><p>What distinguishes Aria is not the battery’s size, but its structure. Rather than housing the 13 kWh in a single sealed pack, the team divided the total capacity into six smaller modules. Each module weighs about 12 kilograms—much easier to handle than the 400 kg or more that’s typical of a conventional EV’s monolithic battery pack. This makes it feasible for a single person to remove, swap, and replace modules.</p><p>The modules sit in reinforced compartments beneath the vehicle floor and are secured using a bottom-latch system. When the vehicle is fully powered down, a latch can be made to mechanically release a module. Integrated interlocks isolate the high-voltage connection before a module can be lowered. This combination of hardware and software ensures that component-level replacement is straightforward and relatively safe, bringing the idea of “repairability by design” into a tangible, hands-on form. 
Even with this careful design, modular batteries introduce technical considerations that must be managed, particularly when integrating different modules over the vehicle’s lifespan.</p><p><a href="https://car.osu.edu/news/2024/06/borgerson-brings-technical-expertise-battery-workforce-team" target="_blank">Joe Borgerson</a>, a laboratory research operations coordinator at <a href="https://www.osu.edu/" target="_blank">Ohio State University</a>’s <a href="https://car.osu.edu/" rel="noopener noreferrer" target="_blank">Center for Automotive Research</a>, in Columbus, notes one complication: Mixing new and aged battery modules can create challenges. <span>Borgerson has spent the past three years designing and building a battery pack from scratch as part of the U.S. </span><a href="https://www.energy.gov/" target="_blank">Department of Energy</a><span>’s </span><a href="https://batteryworkforcechallenge.org/" target="_blank">Battery Workforce Challenge</a><span>. “Our team is integrating a student-designed pack into a </span><a href="https://www.stellantis.com/en" target="_blank">Stellantis</a><span> vehicle platform,” he says, “</span><span>which has given me deep exposure to both automaker design philosophy and high-voltage EV architecture.”</span></p><p>To complement their car’s hardware, the Aria team developed a diagnostic app that can be accessed via a dedicated USB-C port. When the user connects their smartphone, the app presents a 3D visualization on the phone screen that points out faults, locates problems, identifies the necessary tools to fix them, and provides step-by-step repair instructions. The tools themselves are stored in the vehicle. The system aims to reduce as many barriers as possible for users to maintain and extend a vehicle’s service life.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Eighteen college students posing around a modular electric vehicle inside a museum."
class="rm-shortcode" data-rm-shortcode-id="fbc9f915ac286de2bb18b02069b5ef15" data-rm-shortcode-name="rebelmouse-image" id="b3cb0" loading="lazy" src="https://spectrum.ieee.org/media-library/eighteen-college-students-posing-around-a-modular-electric-vehicle-inside-a-museum.jpg?id=65096805&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Students at Eindhoven University of Technology unveiled their Aria EV prototype in November.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Sarp Gürel</small></p> <h2>Challenges of EV Modularity</h2><p>While Aria prioritizes modularity, the broader EV industry trend is toward integrated, interdependent systems that simplify manufacturing processes and cut costs. This trend extends to structural battery packs as well.</p><p><span>Unlike mainstream EVs, Aria treats energy storage as a replaceable subsystem. Whether it scales economically and structurally to larger, highway-capable EVs remains an open question. </span><span>But designing a vehicle for repairability involves trade-offs that ripple across every system in the car.</span></p><p>Borgerson says that dividing systems into removable units adds interfaces—mechanical fasteners, electrical connectors, seals, and safety interlocks. Each interface must survive vibration, temperature swings, and crash forces. More interfaces can mean added mass and complexity compared with tightly integrated battery structures.
And these components take up space that would otherwise be used for energy storage.</p><p><a href="https://mae.osu.edu/people/darpino.2" target="_blank">Matilde D’Arpino</a><span>, an assistant professor of mechanical and aerospace engineering at Ohio State whose research focuses on electrified power trains and advanced vehicle architectures, notes that EV batteries are already modular internally—cells form modules, and modules form packs—but making modules externally replaceable changes validation requirements. High-voltage isolation, thermal performance, and crash integrity must remain robust even when energy storage is divided into removable segments. </span><span>In other words, what seems like a simple way to make batteries user-friendly actually cascades into system-level design decisions influencing safety, thermal management, and vehicle structure.</span></p><h2>Impact of Right-to-Repair Laws</h2><p><a href="https://spectrum.ieee.org/tag/right-to-repair" target="_self">Right-to-repair</a> legislation in Europe and the United States could push automakers to reconsider sealed architectures for batteries and other components. Economic incentives could also emerge from fleet operators or long-term owners who benefit from replacing a fraction of a battery system rather than an entire pack. But a<span>dopting this approach would require changes across supply chains, certification processes, and service models.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Interior view of the driver's side of a modular electric vehicle. Its elements are minimal and stripped down to essentials." 
class="rm-shortcode" data-rm-shortcode-id="7afd2ec8121adb93c0a2ebe6ad65afde" data-rm-shortcode-name="rebelmouse-image" id="cc673" loading="lazy" src="https://spectrum.ieee.org/media-library/interior-view-of-the-driver-s-side-of-a-modular-electric-vehicle-its-elements-are-minimal-and-stripped-down-to-essentials.jpg?id=65096844&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The Aria prototype isn’t ready to go toe-to-toe with production EVs, but it demonstrates some proof-of-concept ideas about repairability.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Sarp Gürel</small></p> <p><span>Consumer expectations are also shaping the boundaries of what designs like Aria’s can become. In the mainstream market, buyers consistently prioritize longer </span><a href="https://spectrum.ieee.org/tag/range-anxiety" target="_self">driving range</a><span> and lower sticker prices—two factors that have defined competition among models such as the </span><a href="https://www.chevrolet.com/electric/bolt-ev" target="_blank">Chevrolet Bolt EV</a><span>, the </span><a href="https://www.hyundaiusa.com/us/en/vehicles/ioniq-5?&chid=sem&fb=io5_bnd_husa&CID=20166438&PID=202442677&CRID=795801287407&SID=4075918&AID=402292811&ds_query=hyundai+ioniq+5&ads_rl=8569909089&&&&gclsrc=aw.ds&ds_rl=1277805&gad_source=1&gad_campaignid=12376057835&gbraid=0AAAAADgtKc23yPObUZ3QivHwSbWd8NAwS&gclid=CjwKCAiAkvDMBhBMEiwAnUA9BRvs9klZKC3h1Lcso-GfaUvk3xTqkxGcTiyeyQzTtuFz8YyR-IGjlBoC0lcQAvD_BwE" target="_blank">Hyundai Ioniq 5</a><span>, and the </span><a href="https://www.tesla.com/model3" target="_blank">Tesla Model 3</a><span>. Range anxiety remains a powerful psychological factor, even as charging infrastructure expands, and price sensitivity has intensified as government incentives fluctuate. Designing for modularity and repairability, as Aria does, must ultimately contend with these consumer priorities.
Any added cost, weight, or complexity must be weighed against a market that still rewards vehicles that go farther for less money.</span></p><p>Ultimately, however, Aria inserts a different priority into the equation: repair as a core design requirement. Whether that priority becomes mainstream will depend less on whether it can be engineered—and more on whether regulators, manufacturers, and consumers decide it should be.</p>]]></description><pubDate>Wed, 04 Mar 2026 17:21:07 +0000</pubDate><guid>https://spectrum.ieee.org/ev-battery-swapping-aria-ev</guid><category>Electric-vehicles</category><category>Modular-design</category><category>Eindhoven-university-of-technolo</category><category>Modular-ev</category><dc:creator>Willie D. Jones</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/view-underneath-a-modular-electric-vehicle-s-hood-inside-a-simple-wire-frame-between-two-of-its-wheels-holds-an-ample-toolbox.jpg?id=65096797&amp;width=980"></media:content></item><item><title>Taara Brings Fiber-Optic Speeds to Open-Air Laser Links​</title><link>https://spectrum.ieee.org/free-space-optical-link-taara</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-plastic-elliptic-cylinder-case-equipped-with-a-lens-the-device-is-mounted-on-a-metal-beam.jpg?id=65113617&width=1200&height=400&coordinates=0%2C417%2C0%2C417"/><br/><br/><p>Taara started as a <a href="https://en.wikipedia.org/wiki/X_Development" rel="noopener noreferrer" target="_blank">Google X moonshot spin-off</a> aimed at <a href="https://spectrum.ieee.org/free-space-optical-communication-taara" target="_self">connecting rural villages in sub-Saharan Africa</a> with beams of light. Its newest product, debuting this week at <a href="https://www.mwcbarcelona.com/" rel="noopener noreferrer" target="_blank">Mobile World Congress</a> (MWC), in Barcelona, aims at a different kind of connectivity problem: getting internet access into buildings in cities that already have plenty of fiber—just not where it’s needed.</p><p>The Sunnyvale, Calif.–based company transmits data via infrared lasers, the kind typically used in fiber-optic lines. However, Taara’s systems beam gigabits across kilometers over open air. “Every one of our Taara terminals is like a digital camera with a laser pointer,” says <a href="https://linkedin.com/in/mahesh-krishnaswamy-341b471" target="_blank">Mahesh Krishnaswamy</a>, Taara’s CEO. “The laser pointer is the one that’s shining the light on and off, and the digital camera is on the [receiving] side.”</p><p>Taara’s new system—<a href="https://www.taaraconnect.com/product/beam" rel="noopener noreferrer" target="_blank">Taara Beam</a>, being demoed at <a href="https://www.mwcbarcelona.com/themes/game-changers" rel="noopener noreferrer" target="_blank">MWC’s “Game Changers</a>” platform—prioritizes efficiency and a compact size. Each Beam unit is the size of a shoebox and weighs just 8 kilograms, and can be mounted on a utility pole or the side of a building. 
According to the company, Beam will deliver fiber-competitive speeds of up to 25 gigabits per second with low, 50-microsecond latency.</p><p><span>Taara’s former parent company, Krishnaswamy says, is also these days a prominent client. Google’s main campus in Mountain View, Calif., is near a landing point for a major </span><a data-linked-post="2671361590" href="https://spectrum.ieee.org/undersea-internet-cables-meta-waterworth" target="_blank">submarine fiber-optic cable</a><span>.</span></p><p>“One of the Google buildings was literally a few hundred meters away from the landing spot in California,” he says. “Yet they couldn’t connect the two points because of land rights and right-of-way issues.… Without digging and trenching into federal land, we are able to connect the two points at tens of gigabits per second. And so many Googlers are actually using our technology today.”</p><h3>A Fingernail-Size Chip Shrinks Taara’s Tech</h3><p><strong></strong>Krishnaswamy says his laser pointer and digital camera analogy doesn’t quite do justice to the engineering problems the company had to tackle to fit all the gigabit-per-second photonics into a weather-hardened, shoebox-size device.</p><p>The Taara Beam must steer its laser link across kilometers of open air so that the Beam device can receive it on the other end of the line. Effectively, that means the device’s laser can’t be off target by more than a few degrees. </p><p>Beam approaches the steering problem by physically shaping the laser pulse itself. Taara’s photonics chip splits the laser beam carrying the data into more than a thousand separate streams, delaying each one by a closely controlled amount. The result is a laser wavefront that can be pointed anywhere the system directs.</p><p>Krishnaswamy likens this to the effects of pebbles tossed into a pond. Dropping pebbles in a careful sequence, he says, can create interference patterns in the waves that ripple outward. 
“These thousand emitters are equivalent to a thousand stones,” he says. “And I’m able to delay the phase of each of them. That allows me to steer [the wavefront] whichever direction I want it to go.”</p><p>The idea behind this technology—called a <a href="https://en.wikipedia.org/wiki/Phased_array" target="_blank">phased array</a>—is not new. But turning it into a commercial optical communications device, at Taara Beam’s scale and range, is where others have so far fallen short.</p><p>“Radio-frequency phased arrays like <a href="https://www.linkedin.com/pulse/overview-how-starlinks-phased-array-antenna-dishy-works-curtis-arnold/" target="_blank">Starlink antennas</a> are well known,” Krishnaswamy says. “But to do this with optics, and in a commercial way, not just an experimental way, is hard.”</p><p>This isn’t how the company started out, however. </p><p>In 2019, when the company was still a Google X subsidiary, Krishnaswamy says, Taara launched its first commercial product, the <a href="https://x.company/blog/posts/bringing-light-speed-internet-to-sub-saharan-africa/" target="_blank">traffic-light-size Lightbridge</a>. Like Beam, Lightbridge boasts fiberlike connection speeds, and it has to date been deployed in more than 20 countries around the world—including the Google campus.</p><p><span>Taara’s upgraded model, </span><a href="https://www.taaraconnect.com/product/lightbridge-pro" target="_blank"> Lightbridge Pro</a><span>, launched last month and is also on display this week at MWC. Lightbridge Pro adds one crucial capability Lightbridge lacked: an automatic backup. When fog or rain disrupts Lightbridge’s optical link, the system switches traffic to a paired radio connection. When conditions clear, Lightbridge Pro switches traffic back to the faster laser-data connection. 
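Krishnaswamy's pebble analogy maps onto standard phased-array math: applying a linear phase ramp across the emitters tilts the combined wavefront, with no moving parts. A minimal numerical sketch of the principle; the emitter count, wavelength, and spacing here are illustrative assumptions, not Taara's actual design values:

```python
import numpy as np

# Toy 1-D optical phased array: N coherent emitters with pitch d.
# A linear phase ramp across the emitters steers the main lobe.
# All numbers are illustrative, not Taara's design parameters.
N = 1000                 # emitters ("a thousand stones")
wavelength = 1.55e-6     # meters; typical telecom infrared
d = wavelength / 2       # half-wave pitch avoids grating lobes
k = 2 * np.pi / wavelength

def array_factor(steer_deg, angles_deg):
    """Normalized far-field intensity when steered to steer_deg."""
    theta0 = np.radians(steer_deg)
    theta = np.radians(angles_deg)
    n = np.arange(N)
    # Per-emitter phase delay that tilts the wavefront toward theta0
    phases = -k * d * n * np.sin(theta0)
    # Coherently sum all emitters at each observation angle
    field = np.exp(1j * (k * d * np.outer(np.sin(theta), n) + phases)).sum(axis=1)
    return np.abs(field) ** 2 / N ** 2

angles = np.linspace(-10, 10, 4001)
intensity = array_factor(5.0, angles)   # command a 5-degree steer
peak = angles[np.argmax(intensity)]
print(f"main lobe at {peak:.2f} degrees")  # prints: main lobe at 5.00 degrees
```

Changing only the `phases` vector moves the beam, which is why a chip that controls a thousand delays can point the link anywhere in its field of regard.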
The company says that combination keeps the link up 99.999 percent of the time—about 5 minutes of downtime in a year.</span></p><p>Both Lightbridge and Lightbridge Pro mechanically position their mirrors, achieving three degrees of pointing accuracy. An onboard tracking system inside the unit also relocks the beams automatically whenever the unit gets shifted or jostled.</p><h3>The Future of Taara Beam Deployment</h3><p>Krishnaswamy says that while Taara continues to install and support Lightbridge and Lightbridge Pro, he hopes the company can also begin installing Taara Beam units for select early customers as soon as late this year. </p><p><a href="https://www.kaust.edu.sa/en/study/faculty/mohamed-slim-alouini" target="_blank">Mohamed-Slim Alouini</a>, distinguished professor of electrical and computer engineering at King Abdullah University of Science and Technology in Thuwal, Saudi Arabia, says the bandwidth of free-space optical (FSO) technologies like Taara Beam and Lightbridge still leaves plenty of room to grow. </p><p>“Like any physical medium, free-space optics has a capacity limit,” Alouini says. “But laboratory experiments have <a href="https://www.nict.go.jp/en/press/2025/12/16-1.html" target="_blank">already demonstrated</a> fiberlike performance with terabits-per-second data rates over FSO links. 
The real gap is not in raw capacity but in practical deployment.”</p><p><a href="https://www.linkedin.com/in/atul-bhatnagar-1a41212/" target="_blank">Atul Bhatnagar</a>, formerly of <a href="https://en.wikipedia.org/wiki/Nortel" target="_blank">Nortel</a> and <a href="https://www.cambiumnetworks.com/" target="_blank">Cambium Networks</a>, and currently serving as advisor to Taara, sees room for optimism even when it comes to practical deployment.</p><p>“Current Taara architecture is capable of delivering hundreds of gigabits per second over the next several years,” he says.</p><p>Krishnaswamy adds that Beam’s compact form factor makes it suitable for more than just terrestrial applications.</p><p>“We’ll continue to do the work that we’re doing on the ground. But to the extent that space solutions are taking off, we would love to be part of that,” he says. “Data center-to-data center in space is something we are really looking at using for this technology.</p><p>“Because when you have multiple servers up in space, you can’t run fiber from one to the other,” he adds. “But these photonics modules will be able to point and track and transmit gigabits and gigabits of data to each other.”</p><p>For now, Taara’s ambitions are closer to Earth—specifically to the buildings, utility poles, and city blocks where fiber still hasn’t arrived. 
Which is, after all, where the company’s story began.</p><p><em><strong>UPDATE 4 March 2026: </strong></em><em>The weight of the Taara Beam (8 kg) and the launch year of the Taara Lightbridge (2019) were both corrected.</em></p>]]></description><pubDate>Wed, 04 Mar 2026 15:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/free-space-optical-link-taara</guid><category>Free-space-optics</category><category>Mobile-world-congress</category><category>Google</category><category>Digital-divide</category><category>Internet</category><dc:creator>Margo Anderson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-plastic-elliptic-cylinder-case-equipped-with-a-lens-the-device-is-mounted-on-a-metal-beam.jpg?id=65113617&amp;width=980"></media:content></item><item><title>This Offshore Wind Turbine Will House a Data Center Underwater</title><link>https://spectrum.ieee.org/data-center-floating-wind-turbine</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-floating-wind-turbine-at-sea-an-expanded-view-of-a-buoyant-cylinder-at-the-turbine-s-base-reveals-a-large-hollow-interior-whi.jpg?id=65106142&width=1200&height=400&coordinates=0%2C292%2C0%2C292"/><br/><br/><p>As data-center developers frantically seek to secure power for their operations, one startup is proposing a novel solution: Build them into floating offshore wind turbines. </p><p>San Francisco–based offshore wind-power developer <a href="https://www.aikidotechnologies.com/" rel="noopener noreferrer" target="_blank">Aikido Technologies</a> today announced its plans to start housing data centers in the underwater tanks that keep its turbine platforms afloat. The turbines will supply the power for the servers, and onboard batteries and grid connection will provide backup. </p><p>The company’s first prototype, a 100-kilowatt unit, is scheduled to launch in the North Sea off the coast of Norway by the end of this year. A 15-to-18-megawatt project off the coast of the United Kingdom may follow in 2028.</p><p>Aikido is one of several companies planning data centers in unusual places—<a href="https://spectrum.ieee.org/underwater-data-centers" target="_self">underwater</a>, on floating buoys, in coal mines and now on offshore wind turbines. The creativity stems from the forces of several trends: rapidly rising energy demand from data centers, the need for domestic renewable power production, and limited real estate. </p><p>The North Sea serves as an ideal first spot for floating, wind-powered data centers because European policymakers and companies are looking to regain domestic control over energy production. They’re also looking to host an AI economy on servers within the continent’s boundaries. Floating wind platforms keep the compute out of sight while tapping the stronger, more consistent air streams that blow over deep waters, where traditional, seabed-mounted turbine monopiles can’t go. 
</p><p>“A lot of energy in the clean-energy space is focused on powering AI data centers quickly, reliably, and cleanly in a way that does not upset neighbors and remains safe, fast, and cheap,” says Ramez Naam, an independent clean-energy investor who does not have a stake in Aikido. “Aikido has that, and a smart team,” he says.</p><h2>Floating Wind-Power Designs Evolve</h2><p>Aikido’s design builds on many iterations tested by the growing floating wind industry. When Norwegian energy giant Equinor finished construction on the <a href="https://www.equinor.com/energy/hywind-scotland" target="_blank">world’s first floating wind farm </a>in 2017, it kept the turbines upright with ballasted steel columns extending 78 meters into the water—a design called a spar platform. This gave it a dense mass like the keel of a boat. Since then, the floating wind industry has largely <a href="https://spectrum.ieee.org/floating-offshore-wind-turbine" target="_self">coalesced around a semisubmersible design</a> based on oil and gas platforms. Semisubmersibles don’t go as deep as spar platforms; instead, they extend buoyancy horizontally. Anchors, chains, and ropes keep the platform floating within a certain radius.</p><p>Aikido is taking the semisubmersible approach. Its football-field-size platform holds the turbine in the center, and three legs extend tripod-like outward, like a Christmas-tree stand. At the end of each leg is a ballast that reaches 20 meters deep. This holds tanks largely filled with fresh water to maintain the platform’s buoyancy in the salty ocean.</p><p>The data centers will go in the upper part of each ballast tank. There’s room for a 3- to 4-MW data hall in each tank, giving the platform a combined compute of 10 to 12 MW. Below the data halls is an open chamber used as a safety barrier, and below that sit the freshwater tanks. The water is piped up to the data center for liquid cooling of the servers. 
The warmed water is then funneled back down the ballast into the tank. There, proximity to the cold ocean water cools it again as the heat is conducted out through the tank’s steel walls. </p><p>“We have this power from the wind. We have free cooling. We think we can be quite cost competitive compared to conventional data-center solutions,” says Aikido CEO <a href="https://www.linkedin.com/in/sam-kanner/" rel="noopener noreferrer" target="_blank">Sam Kanner</a>. “This crunch in the next five years is an opportunity for us to prove this out and supply AI compute where it’s needed.”</p><p>One challenge, he says, is that liquid cooling can’t cover all the data center’s needs. For example, heat generated by Ethernet switches that connect the GPUs can’t be liquid-cooled with commercially available technology. So Aikido added an air-conditioning system for those components.</p><p>Another challenge is the marine environment, which is “pretty brutal to engineer around because there’s the increased salinity, there’s debris, and there’s various kinds of corrosion and fouling of metal piping that you wouldn’t have in a freshwater environment,” says <a href="https://www.thefai.org/profile/daniel-king" rel="noopener noreferrer" target="_blank">Daniel King</a>, a research fellow at the Foundation for American Innovation in Washington who focuses on AI infrastructure. 
“But there could be more or additional requirements around discharging heat and the effects that has on marine life that are different from the considerations of a terrestrial data center. It’s unclear to me whether this actually makes life easier or harder for a developer.” </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="3D rendering of a crane lowering a pre-fabricated data center into a hollow semi-submersible platform for a floating wind turbine." class="rm-shortcode" data-rm-shortcode-id="0a67f0ed0900a837eaabf97204dc71b9" data-rm-shortcode-name="rebelmouse-image" id="6f350" loading="lazy" src="https://spectrum.ieee.org/media-library/3d-rendering-of-a-crane-lowering-a-pre-fabricated-data-center-into-a-hollow-semi-submersible-platform-for-a-floating-wind-turbin.jpg?id=65111639&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Prefabricated data halls could be installed quayside, followed by final electrical and plumbing connections to commission the data center.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Aikido</small></p><p>Aikido’s “design choice to use the fresh water in the ballast as a working fluid is a novel one” that, thanks to the closed-loop system, may “alleviate some of the engineering problems you see when a really high temperature fluid is pumping its heat directly into a marine environment,” King says.</p><p>Offshore sites are also vulnerable to sabotage, King notes. Since Russia’s invasion of Ukraine, fleets of vessels directed by the Kremlin have <a href="https://www.bbc.com/news/world-europe-65309687" target="_blank">reportedly</a> started messing with offshore wind and communications infrastructure in northern Europe. 
Russian and Chinese boats have allegedly <a href="https://spectrum.ieee.org/black-sea-energy-link" target="_self">cut subsea cables in recent years</a>.</p><p>But vandalism is a risk anywhere, including at conventional data centers, Aikido CEO Kanner notes. Unlike those on land, where the local police have jurisdiction, Aikido’s data centers would enjoy protection from national coast guards, which he suggests gives an added degree of security. </p><h2>North Sea Hosts Clean Energy</h2><p>Kanner first began thinking about offshore wind turbines as a place to build data centers after a chance phone call with a cryptocurrency billionaire. The financier wanted to know whether turbines in international waters could power servers generating digital tokens at a moment when crypto-mining faced increased scrutiny from regulators. The talks fizzled. But that encounter sparked Kanner’s curiosity about how to use power generated onboard floating turbines. </p><p>When ChatGPT emerged in 2022 and sparked a heated debate over how to power and cool such technology, the idea to put the data center in the floating turbine clicked for Kanner. The idea really congealed after he met with the chief executive of Portland, Ore.–based <a href="https://panthalassa.com/" target="_blank">Panthalassa</a>. The wave-energy company was proposing to enclose small, remote data centers in buoys attached to equipment that generates power from the surf. Panthalassa <a href="https://www.youtube.com/watch?v=Q7Pmgq2JKbI" target="_blank">just completed</a> its full-scale prototype tests off the coast of Washington state last summer. </p><p>At that point, Aikido had already designed a modular platform for floating wind turbines. Each platform consists of 13 major steel components that are snapped together with pin joints—like IKEA furniture. 
The platforms fold up in a flat configuration that takes up roughly half the space of other designs, allowing them to be transported by a wider range of ships, according to Aikido. From there, it was a matter of figuring out how to accommodate a data center in the unused space. </p><p>Aikido’s prototype will use a refurbished<a href="https://en.wind-turbine-models.com/turbines/141-vestas-v17-75" target="_blank"> Vestas V17 turbine</a>. It will need onboard batteries for backup power and will also be connected to the grid for additional power during seasons with less wind. Aikido envisions eventually sprinkling its data centers among large arrays of offshore turbines to tap into that larger power infrastructure. </p><p><span>Between Russia’s threat to expand its war in Ukraine to EU countries and the Trump administration’s bid to pressure Denmark into ceding sovereignty of Greenland to Washington, Europe is scrambling to build up its own energy production and AI capabilities. The North Sea, increasingly, looks like a primary theater of that effort. In January, nearly a dozen European nations banded together in a pact to transform the North Sea into a “</span><a href="https://www.canarymedia.com/articles/offshore-wind/european-nations-are-jointly-plotting-a-massive-offshore-wind-buildout" target="_blank">reservoir</a><span>” of clean power from offshore wind.</span></p>
Kaufman</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-floating-wind-turbine-at-sea-an-expanded-view-of-a-buoyant-cylinder-at-the-turbine-s-base-reveals-a-large-hollow-interior-whi.jpg?id=65106142&amp;width=980"></media:content></item><item><title>Countdown to IEEE’s Annual Election</title><link>https://spectrum.ieee.org/ieee-annual-election-2026</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/hand-placing-a-ballot-in-a-green-box-against-a-blue-background.jpg?id=65111718&width=1200&height=400&coordinates=0%2C417%2C0%2C417"/><br/><br/><p>This year’s annual election, which begins on 17 August, will include candidates for IEEE president-elect and other officer positions.</p><p>To see who is running for 2027 <a data-linked-post="2674856607" href="https://spectrum.ieee.org/2027-ieee-president-elect-candidates" target="_blank">IEEE president-elect</a> and the <a data-linked-post="2674856144" href="https://spectrum.ieee.org/2027-petition-president-elect-candidates" target="_blank">petition candidates</a>, visit the <a href="https://www.ieee.org/pe27" target="_blank">election website</a>.</p><p>The ballot also includes nominees for delegate-elect/director-elect offices submitted by division and region nominating committees, as well as <a href="https://ta.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Technical Activities</a> vice president-elect; <a href="https://ieeeusa.org/" rel="noopener noreferrer" target="_blank">IEEE-USA</a> president-elect; and <a href="https://standards.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Standards Association</a> board of governors members-at-large.</p><p>Those elected take office on 1 January 2027.</p><p>IEEE members who have not been nominated but want to run for an office other than IEEE president-elect must submit their petition intention to the IEEE Board of Directors by 1 April. Petitions should be sent to the IEEE Corporate Governance staff at elections@ieee.org. The petition intention deadline for IEEE president-elect was 31 December.</p><h2>Election Updates</h2><p>Regional elections will also take place. Eligible voting members in IEEE <a href="https://ieeer1.org/" rel="noopener noreferrer" target="_blank">Region 1</a> (Northeastern U.S.) 
and <a href="https://r2.ieee.org/" rel="noopener noreferrer" target="_blank">Region 2</a> (Eastern U.S.) will elect the future IEEE Region 2 delegate-elect/director-elect (Eastern and Northeastern U.S.) for the 2027–2028 term. Members in the future IEEE Region 10 (North Asia) will elect the IEEE Region 10 delegate-elect/director-elect for the same term. These changes reflect IEEE’s upcoming region realignment, as outlined in <em>The Institute’s </em>September 2024 article, “<a href="https://spectrum.ieee.org/region-realignment-2024-election" target="_self">How Region Realignment Will Impact IEEE Elections</a>.”</p><p>Beginning this year, only professional members will be eligible to vote in IEEE’s annual election or sign related petitions. Ballots will be created for eligible voting members on record as of 31 March. To ensure voting eligibility, all members should review and update their <a href="https://ieee.org/go/my_account" rel="noopener noreferrer" target="_blank">contact information</a> and <a href="https://ieee.org/election-preferences" rel="noopener noreferrer" target="_blank">communication preferences</a> by that date.</p><p>To support sustainability initiatives, the “Candidate Biographies and Statements” booklet will no longer be available in print. Members can access the candidate biographies and statements within their electronic ballot, view them on the <a href="https://ieee.org/about/corporate/election" rel="noopener noreferrer" target="_blank">annual election website</a>, or download the digital booklet. 
Members are also encouraged to vote electronically.</p><p>For more information about the offices up for election, the process for getting on the annual ballot, and deadlines, visit the <a href="https://www.ieee.org/about/corporate/election" target="_blank">website</a> or email <a href="mailto:elections@ieee.org">elections@ieee.org</a>.</p>]]></description><pubDate>Tue, 03 Mar 2026 19:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/ieee-annual-election-2026</guid><category>Ieee-news</category><category>Ieee-election</category><category>Ieee-president-elect</category><category>Type-ti</category><dc:creator>Elizabeth Fuscaldo</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/hand-placing-a-ballot-in-a-green-box-against-a-blue-background.jpg?id=65111718&amp;width=980"></media:content></item><item><title>Optimizing a Battery Electric Vehicle Thermal Management System</title><link>https://content.knowledgehub.wiley.com/optimizing-a-battery-electric-vehicle-thermal-management-system/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/mathworks-logo.png?id=26851519&width=980"/><br/><br/><p>This webinar looks at a Battery Electric Virtual Vehicle Model of a mid-size BEV, and uses Simulink and Simscape to facilitate design exploration, component refinement, and system-level optimization. The virtual vehicle comprises five subsystems: Electric powertrain, driveline, <span>refrigerant cycle, coolant cycle, and passenger cabin. The model will be tested using different drive cycles, cooling, and heating scenarios. The results will be analyzed to determine the impact of the different design parameters on vehicle consumption.</span></p><p>The resulting virtual vehicle will be used to:</p><ul><li>Test different drive cycles and environmental conditions</li><li>Perform sensitivity analysis</li><li>Optimize model to improve thermal performance and <span>consumption</span></li></ul><div><span><a href="https://content.knowledgehub.wiley.com/optimizing-a-battery-electric-vehicle-thermal-management-system/" target="_blank">Register now for this free webinar!</a></span></div>]]></description><pubDate>Tue, 03 Mar 2026 11:00:02 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/optimizing-a-battery-electric-vehicle-thermal-management-system/</guid><category>Type-webinar</category><category>Battery-electric-vehicle</category><category>Electric-vehicles</category><category>Batteries</category><dc:creator>MathWorks</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/26851519/origin.png"></media:content></item><item><title>Watershed Moment for AI–Human Collaboration in Math</title><link>https://spectrum.ieee.org/ai-proof-verification</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/four-by-four-grid-of-circles-with-varying-color-gradient-patterns.jpg?id=65103143&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><span>When Ukrainian mathematician </span><a href="https://people.epfl.ch/maryna.viazovska?lang=en" target="_blank">Maryna Viazovska</a><span> received a </span><a href="https://www.mathunion.org/imu-awards/fields-medal/fields-medals-2022" target="_blank">Fields Medal</a><span>—widely regarded as the Nobel Prize for mathematics—in July 2022,</span><span> it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. <a href="https://www.math.inc/sphere-packing" target="_blank">Today</a>, in </span><span>a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s abilities to <a href="https://spectrum.ieee.org/ai-math-benchmarks" target="_blank">assist</a> with mathemat</span><span>ical research. </span></p><p><span>“These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc <a href="https://ai.princeton.edu/news/2025/ai-lab-welcomes-associate-research-scholars" target="_blank">Liam Fowl</a>, who was not involved in the work.</span></p><p>In her Fields Medal–winning research, Viazovska had tackled two versions of the sphere-packing problem, which asks: How densely can identical circles, spheres, et cetera, be packed in <em>n</em>-dimensional space? In two dimensions, the honeycomb is the best solution. In three dimensions, spheres stacked in a pyramid are optimal. But after that, it becomes exceedingly difficult to find the best solution, and to prove that it is in fact the best. 
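For a sense of how these cases compare, the proved optimal densities (the fraction of space the spheres fill) in the dimensions mentioned are standard values from the literature:

```latex
% Optimal sphere-packing densities \Delta_n:
\Delta_2    = \frac{\pi}{\sqrt{12}}  \approx 0.9069   % hexagonal (honeycomb) packing
\Delta_3    = \frac{\pi}{\sqrt{18}}  \approx 0.7405   % pyramid (FCC) stacking, Hales 1998
\Delta_8    = \frac{\pi^4}{384}      \approx 0.2537   % E8 lattice, Viazovska 2016
\Delta_{24} = \frac{\pi^{12}}{12!}   \approx 0.00193  % Leech lattice, 2016
```

The steep drop-off illustrates why high-dimensional packings are so counterintuitive: in 24 dimensions, even the optimal arrangement leaves more than 99.8 percent of space empty.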
</p><p>In 2016, Viazovska solved the problem in two cases. By using powerful mathematical functions known as (quasi-)modular forms, she proved that a symmetric arrangement known as E<sub>8</sub> is the <a href="https://annals.math.princeton.edu/articles/keyword/sphere-packing" target="_blank">best 8-dimensional packing</a>, and soon after proved with collaborators that another sphere packing called the <a href="https://annals.math.princeton.edu/2017/185-3/p08" target="_blank">Leech lattice is best in 24 dimensions</a>. Though seemingly abstract, this result has potential to help solve everyday problems related to dense sphere packing, including <a data-linked-post="2650280110" href="https://spectrum.ieee.org/novel-error-correction-code-opens-a-new-approach-to-universal-quantum-computing" target="_blank">error-correcting codes</a> used by smartphones and space probes.</p><p>The proofs were verified by the mathematical community and deemed correct, leading to the Fields Medal recognition. But formal verification—the ability of a proof to be verified by a computer—is another beast altogether. Since 2022, much <a href="https://cacm.acm.org/research/formal-reasoning-meets-llms-toward-ai-for-mathematics-and-verification/" target="_blank">progress</a> has been made in AI-assisted formal proof verification. </p><h2>Serendipity leads to formalization project</h2><p>A few years later, a chance meeting in Lausanne, Switzerland, between third-year undergraduate <a href="https://thefundamentaltheor3m.github.io/" target="_blank">Sidharth Hariharan</a> and Viazovska would reignite her interest in sphere-packing proofs. Though still very early in his career, Hariharan was already becoming adept at formalizing proofs.</p><p>“Formal verification of a proof is like a rubber stamp,” Fowl says. 
“It’s a kind of bona fide certification that you know your statements of reasoning are correct.”</p><p>Hariharan told Viazovska how he had been using the process of formalizing proofs to learn and really understand mathematical concepts. In response, Viazovska expressed an interest in formalizing her proofs, largely out of curiosity. From this, in March 2024 the <a href="https://thefundamentaltheor3m.github.io/Sphere-Packing-Lean/" target="_blank">Formalising Sphere Packing in Lean</a> project was born. <span>Lean is a popular programming language and “proof assistant” that allows mathematicians to write proofs that are then verified for absolute correctness by a computer.</span></p><p>A collaboration formed to write a human-readable “blueprint” that could be used to map the 8-dimensional proof’s various constituents and figure out which of them had and had not been formalized and/or proven, and then prove and formalize those missing elements in Lean. </p><p><span>“We had been building the project’s repository for about 15 months when we enabled public access in June 2025,” recalls Hariharan, now a first-year Ph.D. student at Carnegie Mellon University. “Then, in late October we heard from Math, Inc. for the first time.”</span></p><h2>The AI speedup</h2><p><a href="https://www.math.inc/" target="_blank">Math, Inc.</a> is a startup developing Gauss, an AI specifically designed to automatically formalize proofs. “It’s a particular kind of language model called a reasoning agent that’s meant to interleave both traditional natural-language reasoning and fully formalized reasoning,” explains <a href="https://jesse-michael-han.github.io/" target="_blank">Jesse Han</a>, Math, Inc. CEO and cofounder. “So it’s able to conduct literature searches, call up tools, and use a computer to write down Lean code, take notes, spin up verification tooling, run the Lean compiler, et cetera.”</p><p>Math, Inc. 
first hit the headlines when it announced that Gauss had completed a <a href="https://mathstodon.xyz/@tao/111847680248482955" target="_blank">Lean formalization of the strong <span>prime number theorem</span> (PNT)</a> in three weeks last summer, a task that Fields Medalist <a href="https://terrytao.wordpress.com/" target="_blank">Terence Tao</a> and <a href="https://sites.math.rutgers.edu/~alexk/" target="_blank">Alex Kontorovich</a> had been working on. Similarly, Math, Inc. contacted Hariharan and colleagues to say that Gauss had proven several facts related to their sphere-packing project.</p><p>“They told us that they had finished 30 ‘sorrys,’ which meant that they proved 30 intermediate facts that we wanted proved,” explains Hariharan. A proportion of these sorrys were shared with the project team and merged with their own work. “One of them helped us identify a typo in our project, which we then fixed,” adds Hariharan. “So it was a pretty fruitful collaboration.”</p><h2>From 8 to 24 dimensions</h2><p>But then, radio silence followed. Math, Inc. appeared to lose interest. However, while Hariharan and colleagues continued their labor of love, Math, Inc. was building a new and improved version of Gauss. “We made a research breakthrough sometime mid-January that produced a much stronger version of Gauss,” says Han. “This new version reproduced our three-week PNT result in two to three days.”</p><p>Days later, the new Gauss was steered back to the sphere-packing formalization. Working from the invaluable preexisting blueprint and work that Hariharan and collaborators had shared, Gauss not only autoformalized the 8-dimensional case, but also found and fixed a typo in the published paper, all in the space of five days.</p><p>“When they reached out to us in late January saying that they finished it, to put it very mildly, we were very surprised,” says Hariharan. 
“But at the end of the day, this is technology that we’re very excited about, because it has the capability to do great things and to assist mathematicians in remarkable ways.”</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A laptop with sphere packing code in the foreground, with an autumn sunset at Carnegie Mellon in the background. " class="rm-shortcode" data-rm-shortcode-id="1dd0742602809b330ce11552ae9d6d3f" data-rm-shortcode-name="rebelmouse-image" id="898fd" loading="lazy" src="https://spectrum.ieee.org/media-library/a-laptop-with-sphere-packing-code-in-the-foreground-with-an-autumn-sunset-at-carnegie-mellon-in-the-background.jpg?id=65106120&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Hariharan was working on sphere-packing proof verification as the sun was setting behind Carnegie Mellon’s Hamerschlag Hall.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Sidharth Hariharan</small></p><p>The 8-dimensional sphere-packing proof formalization alone, <a href="https://leanprover.zulipchat.com/#narrow/channel/113486-announce/topic/Sphere.20Packing.20Milestone/with/575354368" target="_blank">announced on February 23</a>, represents a watershed moment for autoformalization and AI–human collaboration. But <a target="_blank"></a><a href="https://math.inc/sphere-packing" target="_blank">today, Math, Inc. revealed</a><span> </span>an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—all 200,000+ lines of code of it—in just two weeks. </p><p>There are commonalities between the 8- and 24-dimensional cases in terms of the foundational theory and overall architecture of the proof, meaning some of the code from the 8-dimensional case could be refactored and reused. 
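As background on the "sorrys" that Gauss filled in: in Lean, `sorry` is a placeholder that lets a file compile while a proof obligation remains open, and finishing a sorry means replacing it with a real proof. A toy illustration in Lean 4 syntax, not code from the sphere-packing repository:

```lean
-- A lemma stated but not yet proved: `sorry` closes the goal
-- provisionally so the rest of the file can still compile.
theorem add_self_eq_two_mul (n : ℕ) : n + n = 2 * n := by
  sorry

-- "Finishing the sorry": the same lemma with an actual proof,
-- using the library fact `Nat.two_mul : 2 * n = n + n`.
theorem add_self_eq_two_mul' (n : ℕ) : n + n = 2 * n :=
  (Nat.two_mul n).symm
```

A project blueprint like the one Hariharan's team built is essentially a dependency graph of such statements, each either proved or still marked `sorry`.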
However, Gauss had no preexisting blueprint to work from this time. “And it was actually significantly more involved than the 8-dimensional case, because there was a lot of missing background material that had to be brought on line surrounding many of the properties of the Leech lattice, in particular its uniqueness,” explains Han.</p><p>Though the 24-dimensional case was an automated effort, both Han and Hariharan acknowledge the many contributions from humans that laid the foundations for this achievement, regarding it as a collaborative endeavor overall between humans and AI.</p><p>But for Han, it represents even more: the beginning of a revolutionary transformation in mathematics, where extremely large-scale formalizations are commonplace. “A programmer used to be someone who punched holes into cards, but then the act of programming became separated from whatever material substrate was used for recording programs,” he concludes. “I think the end result of technology like this will be to free mathematicians to do what they do best, which is to dream of new mathematical worlds.”</p>]]></description><pubDate>Mon, 02 Mar 2026 18:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/ai-proof-verification</guid><category>Mathematics</category><category>Ai-reasoning</category><category>Large-language-models</category><category>Ai</category><dc:creator>Benjamin Skuse</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/four-by-four-grid-of-circles-with-varying-color-gradient-patterns.jpg?id=65103143&amp;width=980"></media:content></item><item><title>How Electrical Engineers Fight a War</title><link>https://spectrum.ieee.org/repair-ukraine-power-grid</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-worker-kneels-in-the-snow-while-welding-a-damaged-pipe-buried-underneath-the-rubble-of-a-power-station.jpg?id=65064523&width=2000&height=1500&coordinates=416%2C0%2C417%2C0"/><br/><br/><p><span>Every time Russia attacks Ukraine’s power infrastructure, Ukrainian engineers risk their lives in the scramble to get electricity flowing again. It’s a dangerous job at best, and a lethal one at worst. It also requires creativity. Time pressure and <a href="https://spectrum.ieee.org/russia-targets-ukraine-grid" target="_blank">equipment shortages</a> make it nearly impossible to rebuild things exactly as they were, so engineers must redesign on the fly. </span></p><p>These dangerous, stressful conditions have led to more engineers being hurt or killed. The rate of injuries among Ukrainian workers in electricity generation, transmission, and distribution jumped nearly 50 percent after Russia’s full-scale invasion began four years ago, according to data provided by <a href="https://amnu.gov.ua/nagorna-antonina-maksymivna/" target="_blank"><span>Antonina Nagorna</span></a><span>, who leads the Department of Epidemiology and Physiology of Work at the Kundiiev Institute of Occupational Health, in Kyiv. By her count, at least 48 people had died on the job through the end of 2025, either while repairing damage or during the bombardment itself.</span></p><p><span>Transmission mastermind Oleksiy Brecht joined that grim count in January. Brecht, who was director for network operations and development at the Ukrainian grid operator </span><span><a href="https://ua.energy/" target="_blank">Ukrenergo</a></span><span>, died while coordinating work at Ukraine’s most attacked electrical switchyard, Kyivska, west of the capital. 
He was 47 years old.</span></p><p><span>Brecht’s life and death are a window into the realities of thousands of Ukrainian engineers who face conditions beyond what most engineers could imagine. “The war completely transformed the professional life of a top-manager engineer,” says </span><span><a href="https://www.linkedin.com/in/mariia-tsaturian-86560b282/" target="_blank">Mariia Tsaturian</a></span><span>, an energy analyst and chief communication officer at the think tank </span><span><a href="https://uafp.eu/" target="_blank">Ukraine Facility Platform</a></span><span>, who previously worked with Brecht at Ukrenergo. “As for junior staff, their world was turned upside down entirely. A substation engineer working under shelling is something no one had ever seen or experienced before,” she says.</span></p><h2>How Russia Attacks Ukraine’s Grid</h2><p><span>Over the course of the war, Russia has increasingly focused on destroying Ukraine’s energy infrastructure. It sends attack drones almost daily during the winter there, when heat and electricity are needed most to survive the bitter cold. Every 10 days or so it barrages Ukraine’s power system with combinations of missiles and hundreds of drones, repeatedly mangling equipment and cutting off power. The cold imposed on Ukrainian homes is </span><span><a href="https://www.counteroffensive.news/p/why-cold-darkness-worsen-ptsd-among" target="_blank">especially hard on former prisoners of war</a></span> once held in Russia, where cold is routinely employed as a form of torture.</p><p><span>In the first two years of the war, keeping the grid flowing was a 24/7 job. 
But Ukrenergo has adapted to the impossible since then, says</span> <span><a href="https://ua.energy/about_us/the-management/chairman-of-the-management-board/" target="_blank"><span>Vitali<span>y Zay</span><span>chenko</span></span></a></span>, Ukrenergo’s CEO, <span>who somehow found a moment to speak with <em>IEEE</em> </span><span><em>Spectrum </em></span><span>via video call</span><span>. Now, “we are more prepared for each attack. We have well-trained teams. We have support from Europe,” he says.</span></p><p>But the risk involved in repairing the grid remains unnerving. Last month a crew from <a href="https://dtek.com/" target="_blank">DTEK</a>, Ukraine’s biggest private-sector energy firm, was traveling between locations when it was targeted by a Russian drone. They heard the drone coming and escaped before their <span><a href="https://x.com/DTEK_Group/status/2021986413487554807" target="_blank">bucket truck was destroyed</a></span>. Russian forces have employed “double tap” attacks against DTEK’s crews, targeting their power infrastructure with a follow-up strike designed to kill first responders—a practice <span><a href="https://ukraine.ohchr.org/en/Extensive-Civilian-Harm-from-Russian-Attacks-This-Spring" target="_blank">confirmed by the U.N</a></span>.</p><p><span>When Russia began targeting power infrastructure in October 2022, Brecht’s job shifted from high-level direction of grid planning and maintenance to near-constant triage and real-time system reengineering. Most weeks, Brecht spent several days in the field, crisscrossing the country to coordinate work at smashed substations. Brecht would often be found on site figuring out how to restart power using whatever equipment was available. 
“It was a unique decision every time,” says Zaychenko</span><span>.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Oleksiy Brecht seated in a conference room while listening intently to a virtual Ukrenergo meeting projected onto the wall." class="rm-shortcode" data-rm-shortcode-id="c2f0253c54a11a55e3e99dc84a2e67a0" data-rm-shortcode-name="rebelmouse-image" id="3143a" loading="lazy" src="https://spectrum.ieee.org/media-library/oleksiy-brecht-seated-in-a-conference-room-while-listening-intently-to-a-virtual-ukrenergo-meeting-projected-onto-the-wall.jpg?id=65065018&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Oleksiy Brecht died in January while overseeing repairs to a bombed-out substation near Kyiv. He called his employees at Ukrenergo “my fighters.” They called him “our general.”</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Ukrenergo</small></p><p><span>Zaychenko noted Brecht’s “genius” for finding creative grid fixes, his passion and leadership skills, and his credibility with power brokers in Ukraine and abroad. Brecht scoured the globe sourcing critical replacement parts, including stockpiled or older equipment from international utilities. Transformers, which </span><span><a href="https://spectrum.ieee.org/transformer-shortage" target="_self">can take a year or more</a></span> to source, are especially precious.</p><p><span>When the right equipment wasn’t forthcoming, Brecht figured out how to make do. For example, he would deploy transformers from Western Europe rated for 400 kilovolts to restart a 330-kV circuit. He would adapt transformers designed for 60-hertz alternating current for emergency use on Ukraine’s 50-Hz grid. 
</span><span>“He would find a way,” says Zay</span><span>chenko, who worked closely with Brecht for over 20 years.</span></p><p><span>Brecht’s assistant at Ukrenergo, Svitlana Dubas-Veremiienko, says he also contributed to the teams’ morale and confidence. She </span><span><a href="https://www.facebook.com/share/p/1DoAefkHYH/?mibextid=wwXIfr" target="_blank">shared on Facebook</a></span> that he smoked “like a locomotive” at the worst times, and yet exuded calm: <span>“In his presence, chaos subsided,” she wrote. </span><span>Brecht was not easy to intimidate. “He was someone who never feared anything or anyone,” adds Tsaturian.</span></p><p><span>Brecht’s work proved so essential that Ukrenergo</span><span>’s former Deputy CEO Andrii Nemyrovskyi recalls telling Ukraine’s Ministry of Defense in 2022 that the military must protect two people: Zaychenko</span><span>, because he ran grid operations, and Brecht because “system operations requires that the system exists.” Last week, President Zelenskyy </span><span><a href="https://babel.ua/en/news/125158-former-head-of-ukrenergo-oleksiy-brecht-who-died-while-working-at-a-substation-was-awarded-the-title-hero-of-ukraine" target="_blank"><span>posthumously named Brecht a “Hero of Ukraine</span></a>” </span><span>for “strengthening the energy security of Ukraine under martial law.”</span></p><h2><span></span>Ukraine’s Power Infrastructure Under Fire</h2><p><span>Brecht joined Ukrenergo in 2002 after earning his degree in power engineering from <a href="https://kpi.ua/en" target="_blank">Igor Sikorsky Kyiv Polytechnic Institute</a></span><span>. Over the next 20 years, he held leadership positions in dispatching and grid planning and development. He joined Ukrenergo’s management board in June 2022 and served as its interim leader in 2024.</span></p><p><span>Brecht’s contributions to Ukraine’s wartime survival began with several key upgrades to Ukrenergo’s technical capabilities ahead of the February 2022 invasion. 
He reintroduced “live line” techniques, providing training and equipment that enable crews to work on circuits while they continue to carry power to homes and to sustain critical needs.</span></p><p><span>Brecht also led preparations for Ukraine’s disconnection from the Russian grid and synchronization with Europe’s. When the invasion began, Ukraine’s Minister of Energy at the time, </span><span><a href="https://en.wikipedia.org/wiki/German_Galushchenko" target="_blank">Herman H<span>alushchenko</span></a></span><span>, had argued that switching from Russia’s grid to Europe’s was too risky, according to Tsaturian and Nemyrovskyi. But Brecht insisted—correctly, as hindsight has shown—that synchronizing with Europe would provide crucial stability and backup power. At his urging, the</span><span><a href="https://spectrum.ieee.org/ukraine-europe-electricity-grid" target="_self"> switch was completed in daring fashion</a></span> during the first weeks of the invasion.</p><p><span>(Halushchenko was dismissed last year following longstanding </span><span><a href="https://spectrum.ieee.org/ukraine-nuclear-power-fears-russia" target="_self"><span>allegations of corruption and Russian influence</span></a></span> in Ukraine’s energy sector that gave way to indictments in November 2025 that have rocked President Zelenskyy’s government. In January, Halushchenko was <span><a href="https://www.rferl.org/a/ukraine-corruption-energy-sector-kickbacks-scandal/33679486.html" target="_blank"><span>detained while attempting to leave the country</span></a></span> and charged with money laundering.)<span><br/></span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two power grid workers in heavy coats preparing a bucket truck for power line repairs on a snowy residential street." 
class="rm-shortcode" data-rm-shortcode-id="ce5d28090ba881cfeb35ddc5f94ee063" data-rm-shortcode-name="rebelmouse-image" id="c7574" loading="lazy" src="https://spectrum.ieee.org/media-library/two-power-grid-workers-in-heavy-coats-preparing-a-bucket-truck-for-power-line-repairs-on-a-snowy-residential-street.jpg?id=65035406&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">DTEK workers conduct repairs on 26 January following a Russian attack in Kyiv.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Danylo Antoniuk/Cover Images/AP</small></p><h2>A Ukrainian Electrical Engineer’s Final Day</h2><p><span>Brecht’s final act of service followed the mass destruction of January 19—a day when Kyiv’s high temperature was –10° C. That night, Russian forces targeted Ukraine’s energy infrastructure with 18 ballistic missiles, a hypersonic cruise missile, 15 conventional cruise missiles, and 339 drones.</span></p><p><span>The impact included catastrophic damage at the 750-kV Kyivska substation, which feeds electricity to the capital and ensures cooling power for two nuclear power plants.</span></p><p><span>Brecht was leading a team of about 100 people who were undoing the damage when he made a deadly choice. He picked up a section of busbar—solid conduits that connect circuits within substations. It had been blasted to the ground and, unbeknownst to Brecht, was carrying lethal voltage. It’s unclear whether its circuit was still connected, or if it had </span><span><a href="https://spectrum.ieee.org/transmission-line-safety-suit" target="_self"><span>picked up voltage from another circuit</span></a></span><span>.</span></p><p><span>Zaychenko says an investigation is ongoing to provide answers. “I don</span><span>’t know why he touched this busbar. Maybe because of tiredness. Maybe something else,” he says. “He was trying to help the team to do this job quickly. 
It was a huge mistake and a huge loss for us.”</span></p>]]></description><pubDate>Mon, 02 Mar 2026 14:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/repair-ukraine-power-grid</guid><category>Ukraine</category><category>Russia-ukraine-war</category><category>Transmission-and-distribution</category><category>Power-grid</category><dc:creator>Peter Fairley</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-worker-kneels-in-the-snow-while-welding-a-damaged-pipe-buried-underneath-the-rubble-of-a-power-station.jpg?id=65064523&amp;width=980"></media:content></item><item><title>How Quantum Data Can Teach AI to Do Better Chemistry</title><link>https://spectrum.ieee.org/quantum-chemistry</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-a-human-head-in-profile-with-a-spiral-upon-which-human-figures-are-walking-overlaid-on-an-image-of-an-atom.png?id=63744636&width=2000&height=1500&coordinates=0%2C183%2C0%2C184"/><br/><br/><p><strong>Sometimes a visually compelling</strong> metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named <a href="https://sse.tulane.edu/john-p-perdew-phd" rel="noopener noreferrer" target="_blank">John P. Perdew</a> came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “<a href="https://pubs.aip.org/aip/acp/article-abstract/577/1/1/573973/Jacob-s-ladder-of-density-functional?redirectedFrom=fulltext" rel="noopener noreferrer" target="_blank">Jacob’s Ladder</a>.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.”</p><p>Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder, using increasingly more intensive mathematics and compute power, descriptions of atomic reality became more precise. And at the very top, nature was perfectly described via impossibly intensive computation—something like what God might see.</p><p>With this metaphor in mind, we propose to extend Jacob’s Ladder beyond Perdew’s version, to encompass <em><em>all</em></em> computational approaches to simulating the behavior of electrons. 
And instead of climbing rung by rung toward an unreachable summit, we have an idea to <em><em>bend</em></em> the ladder so that even the very top lies within our grasp. Specifically, we at Microsoft envision a hybrid approach. It starts with using quantum computers to generate exquisitely accurate data about the behavior of electrons—data that would be prohibitively expensive to compute classically. This quantum-generated data will then train AI models running on classical machines, which can predict the properties of materials with remarkable speed. By combining quantum accuracy with AI-driven speed, we can ascend Jacob’s Ladder faster, designing new materials with novel properties and at a fraction of the cost.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Graph comparing computational cost and simulation accuracy: Classical, DFT, Coupled, Quantum+AI." class="rm-shortcode" data-rm-shortcode-id="d3175e47f1efce66722968991732929d" data-rm-shortcode-name="rebelmouse-image" id="13461" loading="lazy" src="https://spectrum.ieee.org/media-library/graph-comparing-computational-cost-and-simulation-accuracy-classical-dft-coupled-quantum-ai.png?id=65172435&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">At the base of Jacob’s Ladder are classical models that treat atoms as simple balls connected by springs—fast enough to handle millions of atoms over long times but with the lowest precision. Moving up along the black line, semiempirical methods add some quantum mechanical calculations. Next are approximations based on Hartree-Fock (HF) and density functional theory (DFT), which include full quantum behavior of individual electrons but model their interactions in an averaged way. The greater accuracy requires significant computing power, which limits them to simulating molecules with no more than a few hundred atoms. 
At the top are coupled-cluster and full configuration interaction (FCI) methods—exquisitely accurate but, at the moment, restricted to tiny molecules or subsets of electrons due to the large computational costs involved. Quantum computing can bend the accuracy-versus-cost curve at the top [orange line], making highly accurate calculations feasible for large systems. AI, trained on this quantum-accurate data, can flatten this curve [purple line], enabling rapid predictions for similar systems at a fraction of the cost of classical computing.</small></p><p>In our approach, the base of Jacob’s Ladder still starts with classical models that treat atoms as simple balls connected by springs—models that are fast enough to handle millions of atoms over long times, but with the lowest precision. As we ascend the ladder, some quantum mechanical calculations are added to semiempirical methods. Eventually, we’ll get to the full quantum behavior of individual electrons but with their interactions modeled in an averaged way; this greater accuracy requires significant compute power, which means you can only simulate molecules of no more than a few hundred atoms. At the top will be the most computationally intensive methods—prohibitively expensive on classical computers but tractable on quantum computers.</p><p>In the coming years, quantum computing and AI will become critical tools in the pursuit of new materials science and chemistry. When combined, their forces will multiply. We believe that by using quantum computers to train AI on quantum data, the result will be hyperaccurate AI models that can reach ever higher rungs of computational complexity without the prohibitive computational costs.</p><p>This powerful combination of quantum computing and AI could unlock unprecedented advances in chemical discovery, materials design, and our understanding of complex reaction mechanisms. Chemical and materials innovations already play a vital—if often invisible—role in our daily lives. 
These discoveries shape the modern world: new drugs to help treat disease more effectively, improving health and extending life expectancy; everyday products like toothpaste, sunscreen, and cleaning supplies that are safe and effective; cleaner fuels and longer-lasting batteries; improved fertilizers and pesticides to boost global food production; and biodegradable plastics and recyclable materials to shrink our environmental footprint. In short, chemical discovery is a behind-the-scenes force that greatly enhances our everyday lives.</p><div class="rblad-ieee_in_content"></div><p>The potential is vast. Anywhere AI is already in use, this new quantum-enhanced AI could drastically improve results. These models could, for instance, scan for previously unknown catalysts that could fix atmospheric carbon and so mitigate climate change. They could discover novel chemical reactions to turn waste plastics into useful raw materials and remove toxic “forever chemicals” from the environment. They could uncover new battery chemistries for safer, more compact energy storage. They could supercharge drug discovery for personalized medicine.</p><p>And that would just be the beginning. We believe quantum-enhanced AI will open up new frontiers in materials science and reshape our ability to understand and manipulate matter at its most fundamental level. Here’s how.</p><h2>How Quantum Computing Will Revolutionize Chemistry</h2><p>To understand how quantum computing and AI could help bend Jacob’s Ladder, it’s useful to look at the classical approximation techniques that are currently used in chemistry. In atoms and molecules, electrons interact with one another in complex ways called electron correlations. These correlations are crucial for accurately describing chemical systems. 
Many computational methods, such as <a href="https://www.synopsys.com/glossary/what-is-density-functional-theory.html" target="_blank">density functional theory</a> (DFT) or the <a href="https://insilicosci.com/hartree-fock-method-a-simple-explanation/" target="_blank">Hartree-Fock method</a>, simplify these interactions by replacing the intricate correlations with averaged ones, assuming that each electron moves within an average field created by all other electrons. Such approximations work in many cases, but they can’t provide a full description of the system.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="a woman stirs a white powder inside a glove box." class="rm-shortcode" data-rm-shortcode-id="c0e1bdeb8e874740173f3f02c62eb308" data-rm-shortcode-name="rebelmouse-image" id="40c54" loading="lazy" src="https://spectrum.ieee.org/media-library/a-woman-stirs-a-white-powder-inside-a-glove-box.jpg?id=63745112&width=980"/> </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="The second shows white powder in test tubes." class="rm-shortcode" data-rm-shortcode-id="5ac7a16946b97de61047d14b9ff28eb7" data-rm-shortcode-name="rebelmouse-image" id="2b1dd" loading="lazy" src="https://spectrum.ieee.org/media-library/the-second-shows-white-powder-in-test-tubes.jpg?id=63745094&width=980"/> </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="shows a gloved hand holding a silvery disc close to an electronic apparatus." 
class="rm-shortcode" data-rm-shortcode-id="f3e77cc9b1b4502b2fab5ed6a3cf10f5" data-rm-shortcode-name="rebelmouse-image" id="98787" loading="lazy" src="https://spectrum.ieee.org/media-library/shows-a-gloved-hand-holding-a-silvery-disc-close-to-an-electronic-apparatus.jpg?id=63745089&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A joint project between Microsoft and Pacific Northwest National Laboratory used AI and high-performance computing to identify potential materials for battery electrolytes. The most promising were synthesized [top and middle] and tested [bottom] at PNNL. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Dan DeLong/Microsoft</small></p><p>Electron correlation is particularly important in systems where the electrons are strongly interacting—as in materials with unusual electronic properties, like high-temperature superconductors—or when there are many possible arrangements of electrons with similar energies—such as compounds containing certain metal atoms that are crucial for catalytic processes.</p><p>In these cases, the simplified approach of DFT or Hartree-Fock breaks down, and more sophisticated methods are needed. As the number of possible electron configurations increases, we quickly reach an “exponential wall” in computational complexity, beyond which classical methods become infeasible.</p><p>Enter the quantum computer. Unlike classical bits, which are either on or off, qubits can exist in superpositions—effectively coexisting in multiple states simultaneously. This should allow them to represent many electron configurations at once, mirroring the complex quantum behavior of correlated electrons. 
Because quantum computers operate on the same principles as the electron systems they will simulate, they will be able to accurately simulate even strongly correlated systems—where electrons are so interdependent that their behavior must be calculated collectively.</p><h2>AI’s Role in Advancing Computational Chemistry</h2><p>At present, even the computationally cheap methods at the bottom of Jacob’s Ladder are slow, and the ones higher up the ladder are slower still. AI models have emerged as powerful accelerators to such calculations because they can serve as emulators that predict simulation outcomes without running the full calculations. The models can speed up the time it takes to solve problems up and down the ladder by orders of magnitude.</p><p>This acceleration opens up entirely new scales of scientific exploration. In 2023 and 2024, we collaborated with researchers at <a href="https://www.pnnl.gov/" target="_blank">Pacific Northwest National Laboratory</a> (PNNL) on using <a href="https://arxiv.org/abs/2401.04070" rel="noopener noreferrer" target="_blank">advanced AI models</a> to evaluate over 32 million potential battery materials, looking for safer, cheaper, and more environmentally friendly options. This enormous pool of candidates would have taken about 20 years to explore using traditional methods. And yet, within less than a week, <a href="https://spectrum.ieee.org/ai-battery-material" target="_blank">that list was narrowed</a> to 500,000 stable materials and then to 800 highly promising candidates. Throughout the evaluation, the AI models replaced expensive and time-consuming quantum chemistry calculations, in some cases delivering insights half a million times as fast as would otherwise have been the case.</p><p>We then used high-performance computing (HPC) to validate the most promising materials with DFT and AI-accelerated molecular dynamics simulations. 
The PNNL team then spent about nine months synthesizing and testing one of the candidates—a solid-state electrolyte that uses sodium, which is cheap and abundant, and some other materials, with 70 percent less lithium than conventional lithium-ion designs. The team then built a prototype solid-state battery that they tested over a range of temperatures.</p><p>This potential battery breakthrough isn’t unique. AI models have also dramatically accelerated research in <a href="https://science.nasa.gov/earth/ai-open-science-climate-change/" rel="noopener noreferrer" target="_blank">climate science</a>, <a href="https://www.sciencedirect.com/science/article/pii/S3050585225000217" rel="noopener noreferrer" target="_blank">fluid dynamics</a>, <a href="https://www.simonsfoundation.org/2024/08/26/astrophysicists-use-ai-to-precisely-calculate-universes-settings/" rel="noopener noreferrer" target="_blank">astrophysics</a>, <a href="https://www.nature.com/articles/s44222-025-00349-8" rel="noopener noreferrer" target="_blank">protein design</a>, and <a href="https://www.nature.com/articles/d41586-025-00602-5" rel="noopener noreferrer" target="_blank">chemical and biological discovery</a>. By replacing traditional simulations that can take days or weeks to run, AI is reshaping the pace and scope of scientific research across disciplines.</p><p>However, these AI models are only as good as the quality and diversity of their training data. Whether sourced from high-fidelity simulations or carefully curated experimental results, these data must accurately represent the underlying physical phenomena to ensure reliable predictions. Poor or biased data can lead to misleading outcomes. By contrast, high-quality, diverse datasets—such as those from full-accuracy quantum simulations—enable models to generalize across systems and uncover new scientific insights. 
This is the promise of using quantum computing for training AI models.</p><h2>How to Accelerate Chemical Discovery</h2><p>The real breakthrough will come from strategically combining quantum computing’s and AI’s unique strengths. AI already excels at learning patterns and making rapid predictions. Quantum computers, which are still being scaled up to be practically useful, will excel at capturing electron correlations that classical computers can only approximate. So if you train classical models on quantum-generated data, you’ll get the best of both worlds: the accuracy of quantum delivered at the speed of AI.</p><p>As we learned from the Microsoft-PNNL collaboration on electrolytes, AI models alone can greatly speed up chemical discovery. In the future, quantum-accurate AI models will tackle even bigger challenges. Consider the basic discovery process, which we can think of as a funnel. Scientists begin with a vast pool of candidate molecules or materials at the wide-mouthed top, narrowing them down using filters based on desired properties—such as boiling point, conductivity, viscosity, or reactivity. Crucially, the effectiveness of this screening process depends heavily on the accuracy of the models used to predict these properties. Inaccurate predictions can create a “leaky” funnel, where promising candidates are mistakenly discarded or poor ones are mistakenly advanced.</p><p>Quantum-accurate AI models will dramatically improve the precision of chemical-property predictions. They’ll be able to help identify “first-time right” candidates, sending only the most promising molecules to the lab for synthesis and testing—which will save both time and cost.</p><p>Another key aspect of the discovery process is understanding the chemical reactions that govern how new substances are formed and behave. 
Think of these reactions as a network of roads winding through a mountainous landscape, where each road represents a possible reaction step, from starting materials to final products. The outcome of a reaction depends on how quickly it travels down each path, which in turn is determined by the energy barriers along the way—like mountain passes that must be crossed. To find the most efficient route, we need accurate calculations of these barrier heights, so that we can identify the lowest passes and chart the fastest path through the reaction landscape.</p><p>Even small errors in estimating these barriers can lead to incorrect predictions about which products will form. Case in point: A slight miscalculation in the energy barrier of an environmental reaction could mean the difference between labeling a compound a “forever chemical” or one that safely degrades over time.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="70e0b9b540bc0e061b38252e88243293" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/X1aWMYukuUk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> </p><p>Accurate modeling of reaction rates is also essential for designing catalysts—substances that speed up and steer reactions in desired directions. Catalysts are crucial in industrial chemical production, carbon capture, and biological processes, among many other things. Here, too, quantum-accurate AI models can play a transformative role by providing the high-fidelity data needed to predict reaction outcomes and design better catalysts.</p><p>Once trained, these AI models, powered by quantum-accurate data, will revolutionize computational chemistry by delivering quantum-level precision. 
And once the AI models, which run on classical computers, are trained with quantum computing data, researchers will be able to run high-accuracy simulations on laptops or desktop computers, rather than relying on massive supercomputers or future quantum hardware. By making advanced chemical modeling more accessible, these tools will democratize discovery and empower a broader community of scientists to tackle some of the most pressing challenges in health, energy, and sustainability.</p><h2>Remaining Challenges for AI and Quantum Computing</h2><p>By now, you’re probably wondering: When will this transformative future arrive? It’s true that quantum computers still struggle with <a href="https://spectrum.ieee.org/quantum-error-correction" target="_blank">error rates</a> and limited lifetimes of usable qubits. And they still need to scale to the size required for meaningful chemistry simulations. Simulations beyond the reach of classical computation will require hundreds to thousands of high-quality qubits with error rates of around 10<span><sup>-15</sup></span>, or one error in a quadrillion operations. Achieving this level of reliability will require fault tolerance through redundant encoding of quantum information in logical qubits, each consisting of hundreds of physical qubits, thus requiring a total of about a million physical qubits. Current AI models for chemical-property predictions may not have to be fully redesigned. We expect that it will be sufficient to start with models pretrained on classical data and then fine-tune them with a few results from quantum computers.</p><p>Despite some open questions, the potential rewards in terms of scientific understanding and technological breakthroughs make our proposal a compelling direction for the field. 
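</p><p>The qubit arithmetic above can be checked with the standard surface-code scaling argument. A rough sketch, in which the physical error rate, threshold, and logical-qubit count are illustrative round numbers rather than a hardware spec:</p>

```python
# Back-of-envelope fault-tolerance overhead, using the common surface-code
# scaling p_logical ~ (p_phys / p_threshold) ** ((d + 1) / 2) for distance d.
# All inputs are illustrative assumptions.
p_phys = 1e-3       # assumed physical error rate per operation
p_threshold = 1e-2  # approximate surface-code threshold
target = 1e-15      # one error in a quadrillion operations

# Smallest odd code distance that reaches the target; the 1.001 factor
# guards against floating-point round-off exactly at the boundary.
d = 1
while (p_phys / p_threshold) ** ((d + 1) / 2) > target * 1.001:
    d += 2

physical_per_logical = 2 * d * d          # rough surface-code qubit count
total = 1_000 * physical_per_logical      # for ~1,000 logical qubits
print(d, physical_per_logical, f"{total:,}")
```

<p>With these assumptions, the estimate lands at a code distance of 29, on the order of a thousand physical qubits per logical qubit, and a total near the million-qubit figure cited above.</p><p>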
The quantum computing industry has begun to move beyond the early noisy prototypes, and high-fidelity quantum computers with low error rates could be possible <a href="https://www.darpa.mil/research/programs/quantum-benchmarking-initiative" target="_blank">within a decade</a>.</p><p>Realizing the full potential of quantum-enhanced AI for chemical discovery will require focused collaboration between chemists and materials scientists who understand the target problems, experts in quantum computing who are building the hardware, and AI researchers who are developing the algorithms. Done right, quantum-enhanced AI could start to tackle the world’s toughest challenges—from climate change to disease—years ahead of anyone’s expectations. <span class="ieee-end-mark"></span></p>]]></description><pubDate>Mon, 02 Mar 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/quantum-chemistry</guid><category>Quantum-computing</category><category>Quantum-chemistry</category><category>Drug-discovery</category><category>Batteries</category><category>Ai-models</category><category>Microsoft</category><dc:creator>Chi Chen</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/illustration-of-a-human-head-in-profile-with-a-spiral-upon-which-human-figures-are-walking-overlaid-on-an-image-of-an-atom.png?id=63744636&amp;width=980"></media:content></item><item><title>What Military Drones Can Teach Self-Driving Cars</title><link>https://spectrum.ieee.org/military-drones-self-driving-cars</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/silhouette-from-the-back-of-an-adults-head-as-they-look-at-two-monitors-one-screen-displays-a-drone-and-the-other-shows-self-d.jpg?id=65098234&width=2000&height=1500&coordinates=416%2C0%2C417%2C0"/><br/><br/><p><a href="https://spectrum.ieee.org/self-driving-cars/missy-cummings" target="_blank">Self-driving cars often struggle</a> with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles can behave unpredictably, leading to crashes or freezing events that disrupt local traffic and can block first responders from doing their jobs. Because self-driving cars cannot successfully handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.</p><p>This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.</p><p>As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve UAV remote-supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. 
commercial self-driving car remote operations are handled by <a href="https://www.c-span.org/program/senate-committee/tesla-and-waymo-executives-others-testify-about-self-driving-cars/672835" rel="noopener noreferrer" target="_blank">operators in the Philippines</a>, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today.</p><p>While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented, and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder-to-shoulder in tiny trailers with operators flying UAVs in local exercises or from 4000 miles away, I learned about the pain points for the remote operators and identified possible improvements as they executed supervisory control over UAVs that might be flying halfway around the world.</p><p>Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car’s speed and steering. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.</p><p>The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. 
This method tolerates more delay than teleoperation but is still time-sensitive.</p><h2>Five Lessons From Military Drone Operations</h2><p>Over 35 years of UAV operations, the military consistently encountered five major challenges that provide valuable lessons for self-driving cars.</p><h3>Latency</h3><p>Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200–500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.</p><p>In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was <a href="https://dsiac.dtic.mil/articles/reliability-of-uavs-and-drones/" rel="noopener noreferrer" target="_blank">16 times that of fighter jets conducting the same missions</a>. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.</p><p>Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. 
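</p><p>To see why, consider how far a car travels while a command makes the round trip. A back-of-envelope latency budget makes the danger concrete; every figure below is an illustrative assumption, not a measurement from any deployed system:</p>

```python
# Illustrative latency budget for remote supervision of a car.
# Every figure here is an assumption chosen for illustration.
network_delay_s = 0.5    # round trip over a congested cellular link
human_lag_s = 0.35       # neuromuscular lag, middle of the 200-500 ms range
decision_time_s = 1.0    # operator assesses the scene and responds

speed_mps = 13.4         # roughly 30 mph of city driving, in meters/second

total_delay_s = network_delay_s + human_lag_s + decision_time_s
distance_m = speed_mps * total_delay_s
print(f"{total_delay_s:.2f} s of delay -> {distance_m:.0f} m traveled")
```

<p>Under these assumptions, the car covers roughly 25 meters while effectively unsupervised; longer network paths only widen that gap.</p><p>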
In <a href="https://www.forbes.com/sites/bradtempleton/2024/03/26/waymo-runs-a-red-light-and-the-difference-between-humans-and-robots/" rel="noopener noreferrer" target="_blank">one incident</a>, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After moving its remote operations center from the U.S. to the Philippines, Waymo’s latency increased even further. It is imperative that control not be so remote, both to reduce latency and to strengthen oversight of security vulnerabilities.</p><h3>Workstation Design</h3><p>Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of Army and Air Force UAV <a href="https://apps.dtic.mil/sti/pdfs/ADA460102.pdf" rel="noopener noreferrer" target="_blank">crashes caused by human error through 2004</a> to poor interface design.</p><h3>UAV crashes (1986-2004) caused by human factors problems, including poor interface and procedure design. 
These two categories do not sum to 100% because both factors could be present in an accident.</h3><br/><table border="0" style="white-space: unset;" width="100%"><tbody><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;"></th><th align="left" scope="col" style="color: #ffffff; background-color: #000000;" width="25%">Human Factors</th><th align="left" scope="col" style="color: #ffffff; background-color: #000000;" width="25%"> Interface Design</th><th align="left" scope="col" style="color: #ffffff; background-color: #000000;" width="25%"> Procedure Design</th></tr><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;"> Army Hunter</th><td align="left" style="background-color: #DFD5C1;"> 47%</td><td align="left" style="background-color: #DFD5C1;"> 20%</td><td align="left" style="background-color: #DFD5C1;"> 20%</td></tr><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;"> Army Shadow</th><td align="left" style="background-color: #E9E3D6;"> 21%</td><td align="left" style="background-color: #E9E3D6;"> 80%</td><td align="left" style="background-color: #E9E3D6;"> 40%</td></tr><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;">Air Force Predator</th><td align="left" style="background-color: #DFD5C1;"> 67%</td><td align="left" style="background-color: #DFD5C1;"> 38%</td><td align="left" style="background-color: #DFD5C1;"> 75%</td></tr><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;" width="25%"> Air Force Global Hawk</th><td align="left" style="background-color: #E9E3D6;"> 33%</td><td align="left" style="background-color: #E9E3D6;"> 100%</td><td align="left" style="background-color: #E9E3D6;"> 0%</td></tr></tbody></table><p>Many UAV aircraft crashes have been caused by poor human control systems. 
In one case, buttons were placed on the controllers such that it was relatively easy to <a href="https://spectrum.ieee.org/review-djis-new-fpv-drone-is-effortless-exhilarating-fun" target="_self">accidentally shut off the engine</a> instead of firing a missile; this design led to accidents in which remote operators <a href="https://dspace.mit.edu/handle/1721.1/84129" rel="noopener noreferrer" target="_blank">inadvertently shut the engine down instead of launching a missile</a>.</p><p>The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a <a href="https://www.govtech.com/transportation/after-crash-orlandos-self-driving-bus-back-on-the-road" rel="noopener noreferrer" target="_blank">recent shuttle crash</a>. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment, but also after major software upgrades.</p><h3>Operator Workload</h3><p>Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days, as when the military waits for a person of interest to emerge from a building. As a result, the remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.</p><p>When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. 
This pattern is <a href="https://www.airuniversity.af.edu/Wild-Blue-Yonder/Articles/Article-Display/Article/2144225/airmen-and-unmanned-aerial-vehicles-the-danger-of-generalization/" rel="noopener noreferrer" target="_blank">well documented in UAV research</a>.</p><p>Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can quickly become overwhelmed.</p><p>The military has tried for years to have one person supervise many drones at once, because it is far more cost-effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) result in workload spikes and high stress. Those switching costs, coupled with increasingly complex interfaces and communication delays, have made this extremely difficult.</p><p>Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and reliably predict staffing needs and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human paying close attention, such operations would no longer be cost-effective.</p><h3>Training</h3><p>Early drone programs lacked formal training requirements; what training existed was designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. 
Only years later did the military conduct <a href="https://www.researchgate.net/publication/238795397_Enhancing_Unmanned_Aerial_System_Training_A_Taxonomy_of_Knowledge_Skills_Attitudes_and_Methods" rel="noopener noreferrer" target="_blank">a proper analysis of the knowledge, skills, and abilities needed to conduct safe remote operations</a> and change its training program.</p><p>Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications for remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose role closely resembles that of self-driving remote operators, are required to complete formal training overseen by the FAA; we should hold commercial self-driving companies to similar standards.</p><h3>Contingency Planning</h3><p>Aviation has strong protocols for emergencies, including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind.</p><p>Self-driving cars appear far less prepared. The <a href="https://waymo.com/blog/2025/12/autonomously-navigating-the-real-world" rel="noopener noreferrer" target="_blank">2025 San Francisco power outage</a> left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform “minimum-risk maneuvers” such as pulling to the side—but many of them didn’t. This suggests gaps in contingency planning and basic fail-safe design.</p><div class="horizontal-rule"></div><p><span>The history of military drone operations offers crucial lessons for the self-driving car industry. 
Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.</span></p><p>Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.</p>]]></description><pubDate>Mon, 02 Mar 2026 12:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/military-drones-self-driving-cars</guid><category>Drones</category><category>Military-robots</category><category>Self-driving-cars</category><dc:creator>Missy Cummings</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/silhouette-from-the-back-of-an-adults-head-as-they-look-at-two-monitors-one-screen-displays-a-drone-and-the-other-shows-self-d.jpg?id=65098234&amp;width=980"></media:content></item><item><title>Andrew Ng: Unbiggen AI</title><link>https://spectrum.ieee.org/andrew-ng-data-centric-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><strong><a href="https://en.wikipedia.org/wiki/Andrew_Ng" rel="noopener noreferrer" target="_blank">Andrew Ng</a> has serious street cred</strong> in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at <a href="https://stanfordmlgroup.github.io/" rel="noopener noreferrer" target="_blank">Stanford University</a>, cofounded <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> in 2011, and then served for three years as chief scientist for <a href="https://ir.baidu.com/" rel="noopener noreferrer" target="_blank">Baidu</a>, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told <em>IEEE Spectrum</em> in an exclusive Q&A.</p><hr/><p>
	Ng’s current efforts are focused on his company 
	<a href="https://landing.ai/about/" rel="noopener noreferrer" target="_blank">Landing AI</a>, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the <a href="https://www.youtube.com/watch?v=06-AZXmwHjo" target="_blank">data-centric AI movement</a>, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
</p><p>
	Andrew Ng on...
</p><ul>
<li><a href="#big">What’s next for really big models</a></li>
<li><a href="#career">The career advice he didn’t listen to</a></li>
<li><a href="#defining">Defining the data-centric AI movement</a></li>
<li><a href="#synthetic">Synthetic data</a></li>
<li><a href="#work">Why Landing AI asks its customers to do the work</a></li>
</ul><p>
<strong>The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an <a href="https://spectrum.ieee.org/deep-learning-computational-cost" target="_self">unsustainable trajectory</a>. Do you agree that it can’t go on that way?</strong>
</p><p>
<strong>Andrew Ng: </strong>This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
</p><p>
<strong>When you say you want a foundation model for computer vision, what do you mean by that?</strong>
</p><p>
<strong>Ng:</strong> This is a term coined by <a href="https://cs.stanford.edu/~pliang/" rel="noopener noreferrer" target="_blank">Percy Liang</a> and <a href="https://crfm.stanford.edu/" rel="noopener noreferrer" target="_blank">some of my friends at Stanford</a> to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, <a href="https://spectrum.ieee.org/open-ais-powerful-text-generating-tool-is-ready-for-business" target="_self">GPT-3</a> is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
</p><p>
<strong>What needs to happen for someone to build a foundation model for video?</strong>
</p><p>
<strong>Ng:</strong> I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
</p><p>
	Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.</strong>
</p><p>
<strong>Ng: </strong>Over a decade ago, when I proposed starting the <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
</p><p class="pull-quote">
	“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”<br/>
	—Andrew Ng, CEO & Founder, Landing AI
</p><p>
	I remember when my students and I published the first 
	<a href="https://nips.cc/" rel="noopener noreferrer" target="_blank">NeurIPS</a> workshop paper advocating using <a href="https://developer.nvidia.com/cuda-zone" rel="noopener noreferrer" target="_blank">CUDA</a>, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
</p><p>
<strong>I expect they’re both convinced now.</strong>
</p><p>
<strong>Ng:</strong> I think so, yes.
</p><p>
	Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>How do you define data-centric AI, and why do you consider it a movement?</strong>
</p><p>
<strong>Ng:</strong> Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
</p><p>
	When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take what some individuals have been doing intuitively and make it a systematic engineering discipline.
</p><p>
	The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a 
	<a href="https://neurips.cc/virtual/2021/workshop/21860" rel="noopener noreferrer" target="_blank">data-centric AI workshop at NeurIPS</a>, and I was really delighted at the number of authors and presenters that showed up.
</p><p>
<strong>You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?</strong>
</p><p>
<strong>Ng: </strong>You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
</p><p>
<strong>When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?</strong>
</p><p>
<strong>Ng: </strong>Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of <a href="https://developers.arcgis.com/python/guide/how-retinanet-works/" rel="noopener noreferrer" target="_blank">RetinaNet</a>. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
</p><p class="pull-quote">
	“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”<br/>
	—Andrew Ng
</p><p>
	For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
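</p><p>A tool like the one Ng describes can start from something as simple as an annotator-agreement score per image. A minimal sketch, in which the image IDs and defect labels are invented for illustration:</p>

```python
from collections import Counter

# Flag inconsistently labeled images so they can be relabeled first.
# The annotations below are made-up examples for illustration.
labels = {
    "img_001": ["scratch", "scratch", "scratch"],
    "img_002": ["scratch", "dent", "scratch"],
    "img_003": ["dent", "pit_mark", "scratch"],
}

def agreement(votes):
    """Fraction of annotators agreeing with the majority label."""
    return Counter(votes).most_common(1)[0][1] / len(votes)

# Surface the least-consistent images first, as a targeted relabeling queue.
flagged = sorted(labels, key=lambda img: agreement(labels[img]))
for img in flagged:
    print(img, f"agreement={agreement(labels[img]):.2f}")
```

<p>Images with the lowest agreement surface first, giving reviewers a targeted relabeling queue instead of a random audit.</p><p>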
</p><p>
<strong>Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?</strong>
</p><p>
<strong>Ng:</strong> Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, <a href="https://www.cs.princeton.edu/~olgarus/" rel="noopener noreferrer" target="_blank">Olga Russakovsky</a> gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed <a href="https://neurips.cc/virtual/2021/invited-talk/22281" rel="noopener noreferrer" target="_blank">Mary Gray’s presentation,</a> which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like <a href="https://www.microsoft.com/en-us/research/project/datasheets-for-datasets/" rel="noopener noreferrer" target="_blank">Datasheets for Datasets</a> also seem like an important piece of the puzzle.
</p><p>
	One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
</p><p>
<strong>When you talk about engineering the data, what do you mean exactly?</strong>
</p><p>
<strong>Ng: </strong>In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a <a href="https://jupyter.org/" rel="noopener noreferrer" target="_blank">Jupyter notebook</a> and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
</p><p>
	For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
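The kind of targeted error analysis Ng describes can be sketched in a few lines: score the evaluation set slice by slice (using metadata tags) and rank the slices. This is a minimal illustration, not Landing AI tooling; the tags and data below are invented.

```python
from collections import defaultdict

def error_rate_by_tag(examples):
    """Score an evaluation set slice by slice.

    Each example is (tag, predicted, truth), where `tag` is metadata
    such as "car_noise" or "quiet". All data here is hypothetical.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for tag, pred, truth in examples:
        totals[tag] += 1
        if pred != truth:
            errors[tag] += 1
    return {t: errors[t] / totals[t] for t in totals}

# Toy evaluation results for a speech-recognition-style classifier.
evaluation = [
    ("quiet", "yes", "yes"), ("quiet", "no", "no"), ("quiet", "yes", "yes"),
    ("car_noise", "yes", "no"), ("car_noise", "no", "yes"), ("car_noise", "no", "no"),
]
rates = error_rate_by_tag(evaluation)
worst = max(rates, key=rates.get)  # the slice worth collecting more data for
```

Whichever slice tops the ranking is where extra collection or relabeling is most likely to pay off, rather than collecting more data for everything.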
</p><p>
<strong>What about using synthetic data? Is that often a good solution?</strong>
</p><p>
<strong>Ng: </strong>I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, <a href="https://tensorlab.cms.caltech.edu/users/anima/" rel="noopener noreferrer" target="_blank">Anima Anandkumar</a> gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
</p><p>
<strong>Do you mean that synthetic data would allow you to try the model on more data sets?</strong>
</p><p>
<strong>Ng: </strong>Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
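That targeted step can be sketched with a toy stand-in for a real synthetic-data generator: grow only the weak category by perturbing its existing examples. The function name, jitter scheme, and feature vectors below are all illustrative assumptions, not anything from an actual defect-inspection pipeline.

```python
import random

def augment_category(examples, target_count, jitter=0.05, seed=0):
    """Grow one under-represented defect category by jittering copies
    of its existing examples. The jittered copies stand in for a real
    synthetic-data generator; names and numbers are illustrative."""
    rng = random.Random(seed)
    grown = list(examples)
    while len(grown) < target_count:
        base = rng.choice(examples)
        grown.append([v + rng.uniform(-jitter, jitter) for v in base])
    return grown

# Error analysis said the model does fine on scratches and dents but
# poorly on pit marks, so only the pit-mark class gets more data.
pit_marks = [[0.80, 0.10], [0.70, 0.20]]  # toy feature vectors
expanded = augment_category(pit_marks, target_count=10)
```

The other defect categories are left untouched, so the fix stays narrowly aimed at the failure the error analysis surfaced.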
</p><p class="pull-quote">
	“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”<br/>
	—Andrew Ng
</p><p>
	Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
</p><p>
<strong>To make these issues more concrete, can you walk me through an example? When a company approaches <a href="https://landing.ai/" rel="noopener noreferrer" target="_blank">Landing AI</a> and says it has a problem with visual inspection, how do you onboard them and work toward deployment?</strong>
</p><p>
<strong>Ng: </strong>When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the <a href="https://landing.ai/platform/" rel="noopener noreferrer" target="_blank">LandingLens</a> platform. We often advise them on the methodology of data-centric AI and help them label the data.
</p><p>
	One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.
</p><p>
<strong>How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?</strong>
</p><p>
<strong>Ng:</strong> It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
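A drift flag of the kind Ng mentions can be sketched with a simple two-sample statistic computed per feature. This is a minimal illustration, not Landing AI's actual tooling; the feature values and the alert threshold below are invented.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of two batches of one feature (0 means identical
    distributions, 1 means fully separated)."""
    a, b = sorted(sample_a), sorted(sample_b)
    def cdf(xs, v):
        return bisect.bisect_right(xs, v) / len(xs)  # fraction of points <= v
    return max(abs(cdf(a, v) - cdf(b, v)) for v in a + b)

def drift_alert(reference, live, threshold=0.3):
    """Flag a feature for human review when live factory data has moved
    away from the distribution the model was trained on. The threshold
    is an arbitrary illustrative choice, not a tuned value."""
    return ks_statistic(reference, live) > threshold

reference = [0.10, 0.15, 0.20, 0.25, 0.30]  # say, image brightness at training time
live      = [0.55, 0.60, 0.65, 0.70, 0.75]  # lighting changed on the line
```

When the alert fires, the manufacturer can correct data, retrain, and redeploy without waiting on an outside specialist, which is exactly the 3 a.m. scenario Ng describes.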
</p><p>
	In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
</p><p>
<strong>So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.</strong>
</p><p>
<strong>Ng: </strong>Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
</p><p>
<strong>Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?</strong>
</p><p>
<strong>Ng: </strong>In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
</p><p><em>This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist</em><em>.”</em></p>]]></description><pubDate>Wed, 09 Feb 2022 15:31:12 +0000</pubDate><guid>https://spectrum.ieee.org/andrew-ng-data-centric-ai</guid><category>Deep-learning</category><category>Artificial-intelligence</category><category>Andrew-ng</category><category>Type-cover</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&amp;width=980"></media:content></item><item><title>How AI Will Change Chip Design</title><link>https://spectrum.ieee.org/ai-chip-design-matlab</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&width=2000&height=1500&coordinates=166%2C0%2C167%2C0"/><br/><br/><p>The end of <a href="https://spectrum.ieee.org/on-beyond-moores-law-4-new-laws-of-computing" target="_self">Moore’s Law</a> is looming. Engineers and designers can do only so much to <a href="https://spectrum.ieee.org/ibm-introduces-the-worlds-first-2nm-node-chip" target="_self">miniaturize transistors</a> and <a href="https://spectrum.ieee.org/cerebras-giant-ai-chip-now-has-a-trillions-more-transistors" target="_self">pack as many of them as possible into chips</a>. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.</p><p>Samsung, for instance, is <a href="https://spectrum.ieee.org/processing-in-dram-accelerates-ai" target="_self">adding AI to its memory chips</a> to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has <a href="https://spectrum.ieee.org/heres-how-googles-tpu-v4-ai-chip-stacked-up-in-training-tests" target="_self">doubled its processing power</a> compared with that of  its previous version.</p><p>But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with <a href="https://www.linkedin.com/in/heather-gorr-phd" rel="noopener noreferrer" target="_blank">Heather Gorr</a>, senior product manager for <a href="https://www.mathworks.com/" rel="noopener noreferrer" target="_blank">MathWorks</a>’ MATLAB platform.</p><p><strong>How is AI currently being used to design the next generation of chips?</strong></p><p><strong>Heather Gorr:</strong> AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. 
There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Portrait of a woman with blonde-red hair smiling at the camera" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="1f18a02ccaf51f5c766af2ebc4af18e1" data-rm-shortcode-name="rebelmouse-image" id="2dc00" loading="lazy" src="https://spectrum.ieee.org/media-library/portrait-of-a-woman-with-blonde-red-hair-smiling-at-the-camera.jpg?id=29288554&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Heather Gorr</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">MathWorks</small></p><p>Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see  something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.</p><p><strong>What are the benefits of using AI for chip design?</strong></p><p><strong>Gorr:</strong> Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a <a href="https://en.wikipedia.org/wiki/Model_order_reduction" rel="noopener noreferrer" target="_blank">reduced order model</a>, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your <a href="https://www.ibm.com/cloud/learn/monte-carlo-simulation" rel="noopener noreferrer" target="_blank">Monte Carlo simulations</a> using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. 
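The surrogate-model workflow Gorr outlines can be sketched in a few lines: run the expensive model at only a few design points, build a cheap interpolant through those runs, and do the Monte Carlo sweep on the interpolant. The quadratic "physics" model and its numbers below are invented for illustration; real reduced-order models are far more involved.

```python
import random

def expensive_model(x):
    """Stand-in for a costly physics-based simulation run."""
    return 3.0 * x * x + 2.0 * x + 1.0

# Run the expensive model only a handful of times.
design_points = [0.0, 0.5, 1.0]
observations = [expensive_model(x) for x in design_points]

def surrogate(x):
    """Cheap surrogate: quadratic Lagrange interpolant through the
    three expensive runs. Evaluating it costs almost nothing."""
    total = 0.0
    for i, xi in enumerate(design_points):
        term = observations[i]
        for j, xj in enumerate(design_points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Monte Carlo parameter sweep on the surrogate instead of the solver.
rng = random.Random(0)
samples = [surrogate(rng.random()) for _ in range(20000)]
estimate = sum(samples) / len(samples)
# The true mean of 3x^2 + 2x + 1 over x ~ U(0, 1) is 3.0.
```

The 20,000 surrogate evaluations here replace 20,000 solver runs, which is where the iteration-speed benefit comes from.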
So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.</p><p><strong>So it’s like having a digital twin in a sense?</strong></p><p><strong>Gorr:</strong> Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.</p><p><strong>So, it’s going to be more efficient and, as you said, cheaper?</strong></p><p><strong>Gorr:</strong> Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.</p><p><strong>We’ve talked about the benefits. How about the drawbacks?</strong></p><p><strong>Gorr: </strong>The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.</p><p>Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.</p><p>One of the other things to think about too is that you need the data to build the models.
You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.</p><p><strong>How can engineers use AI to better prepare and extract insights from hardware or sensor data?</strong></p><p><strong>Gorr: </strong>We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.</p><p>One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on <a href="https://github.com/" rel="noopener noreferrer" target="_blank">GitHub</a> or <a href="https://www.mathworks.com/matlabcentral/" rel="noopener noreferrer" target="_blank">MATLAB Central</a>, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.</p><p><strong>What should engineers and designers consider wh</strong><strong>en using AI for chip design?</strong></p><p><strong>Gorr:</strong> Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. 
Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.</p><p><strong>How do you think AI will affect chip designers’ jobs?</strong></p><p><strong>Gorr:</strong> It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.</p><p><strong>How do you envision the future of AI and chip design?</strong></p><p><strong>Gorr</strong><strong>:</strong> It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. 
We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.</p>]]></description><pubDate>Tue, 08 Feb 2022 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-chip-design-matlab</guid><category>Chip-fabrication</category><category>Matlab</category><category>Moores-law</category><category>Chip-design</category><category>Ai</category><category>Digital-twins</category><dc:creator>Rina Diane Caballar</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&amp;width=980"></media:content></item><item><title>Atomically Thin Materials Significantly Shrink Qubits</title><link>https://spectrum.ieee.org/2d-hbn-qubit</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&width=2000&height=1500&coordinates=166%2C0%2C167%2C0"/><br/><br/><p>Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.</p><p>IBM has adopted the superconducting qubit road map of <a href="https://spectrum.ieee.org/ibms-envisons-the-road-to-quantum-computing-like-an-apollo-mission" target="_self">reaching a 1,121-qubit processor by 2023</a>, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.</p><p>Now researchers at <a href="https://www.nature.com/articles/s41563-021-01187-w" rel="noopener noreferrer" target="_blank">MIT have been able to both reduce the size of the qubits</a> and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.</p><p>“We are addressing both qubit miniaturization and quality,” said <a href="https://equs.mit.edu/william-d-oliver/" rel="noopener noreferrer" target="_blank">William Oliver</a>, the director for the <a href="https://cqe.mit.edu/" target="_blank">Center for Quantum Engineering</a> at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. 
They must go hand in hand.”</p><p>The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.</p><p>Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Golden dilution refrigerator hanging vertically" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="694399af8a1c345e51a695ff73909eda" data-rm-shortcode-name="rebelmouse-image" id="6c615" loading="lazy" src="https://spectrum.ieee.org/media-library/golden-dilution-refrigerator-hanging-vertically.jpg?id=29281593&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">Nathan Fiske/MIT</small></p><p>In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.</p><p>As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.</p><p>In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.</p><p>“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author <a href="https://equs.mit.edu/joel-wang/" rel="noopener noreferrer" target="_blank">Joel Wang</a>, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. </p><p>On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. 
One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.</p><p>While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.</p><p>“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are “sealed” and we don’t see any noticeable degradation over time when exposed to the atmosphere.”</p><p>This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.</p><p>“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.</p><p>Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. 
He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.</p>]]></description><pubDate>Mon, 07 Feb 2022 16:12:05 +0000</pubDate><guid>https://spectrum.ieee.org/2d-hbn-qubit</guid><category>Quantum-computing</category><category>2d-materials</category><category>Ibm</category><category>Qubits</category><category>Hexagonal-boron-nitride</category><category>Superconducting-qubits</category><category>Mit</category><dc:creator>Dexter Johnson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&amp;width=980"></media:content></item></channel></rss>